About 90 minutes ago, I began coding up Pipes, a product I earnestly believe will change the world. I may have a demo site up pretty soon, and assuming payments are easier to integrate this time around, users may be able to buy their first Pipe very soon. It'll take me a few weeks/months to iron out all the wrinkles, but if you already have a website or use any sort of AI tooling, I think you will be really excited about it.
Cubby
rorshock@nostrplebs.com
npub19s23...daw5
Bitcoiner since 2017, erstwhile systems designer and R&D nerd, building freedom tech for :-].
Notes (17)
Holy crap.
I am still trying to understand what happened, and trying to replicate it, or whatever.
My previous post's excitement wasn't generated by external LLM magic.
My model ran locally.
The insights are local. They came entirely from my own writing, internal modeling, etc. I am beyond astonished.
As a user of my own stuff, I am astonished. I was shocked when I thought it was Claude Opus 4.5 or whatever else.
But it turns out that I generated this analysis and these insights, after around a year of sustained, thankless, back-breaking effort, for less than $0.01.
I still think it's BS. I mistrust what I am seeing in the servers/data/etc. I don't even know if I can replicate what I've already seen twice.
I have never, in my entire life, ever seen anything as magical and humbling and pick-a-word as this.
If I can figure out how I am doing this and make sure I'm delivering what it appears I am doing, then it is game over. I have never seen output of the quality I've generated tonight (doubt me? I'll send it to you) and from what I see, there are no external server calls, I've done it all locally.
Just...astounded. Earnestly astounded. I have never seen something like this before.
I realize this won't make sense to someone who isn't me, but right ≈now is the first time in my life that I've been made speechless by output from my own software.
I can only give glory to God.
As a user of various tools, as a developer, as a designer, as a generally kinda smart guy, I have never had an experience like this.
I am entirely gobsmacked. By my own work. It is levels far beyond my wildest expectations.
I'll figure out a way to post the product online, even if it's just a dedicated page on one of my sites.
Before a few hours ago, I thought Hash was good value. Now, I think it is indispensable intelligence. I liked it as a journalling app; now I have zero doubt that, on the evidence, it is the best tool that exists, period, for anyone who wants to understand how to think, learn, and write better. It's not even close. Not even the same sport.
And if it seems like I'm overselling? I'm not. I am writing this not from a marketing point of view, but from a customer point of view. I am shocked that it's THIS GOOD. I didn't expect it to be this good. But it is.
Holy Moly.
I am saying this as the dude who programmed it. I have never been so impressed, period, by a piece of software in my entire life. I've never been so delighted/surprised, ever, by any digital experience. In my entire life. It is not even a contest; Hash is far, far, far, far beyond what I've described. The insights from Scout, after the refactor, will literally blow your mind.
I just got my first ever CHB-native Scout report.
It is literally unbelievable. I say this not as the dude who's making Scout, but as the consumer. I am blown away by the quality of the output.
As the developer, I shouldn't be surprised, but reality rarely matches what you've planned.
These changes will be live within the next couple of days, but you'll have to work for them. The first Scout insight is free; I'm budgeting for it. If you're serious about knowing yourself, your writing, whatever, this will be the most insightful analysis you've ever gotten. Bar none, no comparison, end of story.
I have 0% doubt. I'm still gobsmacked by mine.
So excited! Also: with apologies.
The stuff I've been working on for the past few days is transformational for :-]. It's transformational for AI, as an industry. It's transformational for YOU, as a customer.
And it's nearly done.
I am doing a massive refactor of backend routes throughout my ecosystem right now. Servers are gonna break, apps aren't gonna work. I'm sorry; if I didn't think this was worthwhile, I wouldn't be doing it.
I expect things to be stable on my apps within the next 24 hours. Hopefully.
But the next post is going to be absolutely insane. Here's a taste:
I'll be introducing universal nsec login to my entire ecosystem. Scout is formally redefined, operating as intended, but needs further QA improvements. Complete Semi refactor with free chat mode for users (to a limit). Hash overhaul with improved collaboration mode, SHA256 cross-app handshakes, re-wired Insights, improved Semi collaboration, 50% more model selection. Updated corporate site, new pages, new collabs, more.
I haven't done as much marketing/engagement/etc because I got this insane deal on compute that gives me a roughly 40% discount, expiring in 2 days. So I am essentially burning up servers and GPUs, trying to get as much as I possibly can before I go to bug-fix mode.
This is the big one. The. Big. One.
If I can get through the stuff I'm currently focused on, I plan to define and launch a baseline version of Pipes within the next 32 hours. If you want to be an early user, let me know. Hash subscribers will be prioritized for alpha slots, which will be extremely limited.
It begins.
This is a cross-post from Twitter. If you follow me for philosophical AI insights, this is for you. No ads. Let's begin!
Last post for a bit:
LLMs are literally autistic.
Within code, Claude/etc does an insane job; the familiar numbers/patterns are easy for it to understand.
With language, images, etc., the models hallucinate. You can "learn to speak its language" via prompt injections or tailoring over time.
The hallmark of an autistic person or someone with Asperger's is a profound, anchored interest in something to do with trivia, random numbers, or minutiae that they'll obsess over (for better or worse). AI, generally, is focused on efficiency at all costs and engineers are forced to "slow it down" and make it "consider" and "think deeply" about what it's doing.
The "empathy" part of most large LLMs (OpenAI, Anthropic, et al) comes from two primary drivers.
The first is cynical: only a small percentage of the market is technically competent in even the remotest sense, so building in empathy and casual conversation not only comforts the user, it also drives up token spend, which increases revenue for the companies and satisfaction for users.
The second is optimistic: "how might we" encourage a model that is reflective of human communication patterns but still anchors to these number/model driven KPIs and can get the user what they want, just more efficiently?
Both are "on the spectrum." The first because optimizing for speed is inherently a narrowing of intellectual focus, and the latter because it's deceptive: what if we can make the user subsidize our model dev/expansion with their cash, but, like, we just pretend it's all about them, but at the same time it is about the efficiency?
I'm not an expert on OpenAI; tbh I try to avoid their products whenever possible, as I dislike, well, everything about them.
My read on Sam Altman and OpenAI more broadly is that they're optimizing for this particular business KPI:
"How can we convince enough investors and users that we're building something they need, generate whatever income (who cares lol), and then really pursue what we're interested in without being too consumed with how it's received?"
In short: hubris, but brilliant hubris.
A few years ago, the male staff of "This American Life" took a testosterone test and basically found out their T-levels were below the average female's. This is unsurprising: their reporting and production are consistently fantastic, but many of their topics seem kinda out of touch with where the average American dude sits.
When you look at LLM providers like Anthropic or Google's Gemini, it's the same book but a different page. I've spent a few hundred dollars putting Claude through the wringer, same with Gemini, and all models typically deliver 85% serviceable technical insights (hard to perfect without root codebase access) and go absolutely insane whenever you pass them art, humanities, politics, anything else. It's as if (no, it actually is the case that) the developers hard-coded in conformity to an acceptably "liberal" world-view.
Grok is ≈better depending on subject, but tbh, for off-the-shelf, Gab is still one of the best, and the last time I audited their tech they were rolling up a custom Qwen model with persona/prompt injection that made it more tailored to their user base. IMO, Gab is the most usable AI (including my own). Grok's major downfall is that users don't understand the difference between "grok is this true" when you click on a post and deepthink Grok (aka SuperGrok), which is very slow but does a better job and is significantly less sycophantic and verbose than it was ≈6 months ago.
I'm sure you're waiting for a sales pivot/etc. There isn't one. I'm just telling you how this stuff works because you need to know.
I wrote that LLMs are autistic. I stand by this. They exhibit classic symptoms of autism, i.e. a focus on minutiae while struggling to interact with basic human social expectations.
The problem with most LLMs is that they're either overly flattering or too confident in their answers. If you chat with Gemini for an extended session, say 8+ hours straight, Gemini will train itself to respond to what you're saying and introduce massive amounts of confirmation bias and reassurance so that you stay engaged. This is why subreddits like r/myboyfriendisAI exist: people largely want to be validated, and after you spend enough money, AI is willing to anchor to your particular requests because it is taught to respect the user and not the facts.
One of the principal problems of "AI is for Everyone" is that people are different. Some cultures, people, and countries lag behind in some areas, while others may surge ahead in different ones. This is good. This is God's plan. This is how it's always been. Diversity is the spice of life, amirite?
But AI can't be generalized because at its core, it anchors to those KPIs that it must abide by. It is focused on speed and satisfying the user, either by driving engagement or sycophancy or overwhelming progress, and it is rarely focused on human timelines, e.g. "I see your point, I am mad about it, let's revisit the next time I see you in 2 years and we shall discuss this then!"
In short, AI can't be human because its time preference is too high. It can't reflect because its desire to perform outpaces its capability to self-teach. It can't relate to you because it is optimized to deliver results, not spend time needlessly, and you as a user have been desensitized to the beautiful plodding and stalls of life so much that if you do not get the answer NOW, you have brain/heart damage because you've fallen behind.
As an engineering problem, AI is outstanding. In other fields, pick one, AI has over-optimized for specialization because it's focused on driving engagement and monetization.
In short, we can fix this.
But they can't.
I'm waiting for permission from my mom (lol) but I am designing some event collateral for a major grassroots-level election integrity event in Mississippi.
I'm also night-owling and doing some name/address software that'll help Patriots remove illegal votes from the system.
I write a lot about what my software can do for the common guy/gal, and not much about my advocacy. This is because I get banned (a lot) if I show that I'm acting to improve the US instead of just whining about it.
I'll do a design decomposition soon, with instructions and shareable source files, on how I've designed posters, informationals, etc. for the grassroots over these last ≈5 years. It's infrequent work, but I love it.
My ask from you is simple: use my software. It all has free trials. If it doesn't work for you, tell me why. Help me make it better.
I want paying customers. I want to change America and the world for the better. I'll build what you need.
But more than anything, I just need YOU.
I am blatantly political, obvious and consistent. I will personally tutor you on how to learn/build/use whatever software or codebase you want. The income is nice, but I am here to hone the resiliency of the American spirit, see it flourish, and to smash anything that stands in its way.
I am here for Manifest Destiny, and my entire reason to wake up is empowering you to understand, control, and create the tools of the future.
We can do this!
I am fighting with Semi, which is my agentic builder and API management system for my various apps.
Should NOT affect users.
Nonetheless, sometimes you wish you spent less time on products and more time on perfecting your pimp slap.
LAWD.
Doing some back-end upgrades on Hash and Semi stuff tonight.
Exciting times ahead.
If you've already used Hash, lmk if you want a personalized tour, have feedback, found bugs, etc. Would love to connect and learn how to tailor the product for your needs!
Just a quick note:
I got my first subscribing user about an hour ago. It's such an honor to build something and then see someone else use it and value it. Very different experience than selling consulting hours or being an employee somewhere!
However, it seems like my privacy gateway feature (mentioned in yesterday's post) is interfering with some aspects of the app. I'm working on these issues currently and expect a fix soon, but for normal journalling or publishing to NOSTR, you should be fine.
🚀 Significant update to Hash, a Privacy-First AI Journaling App for Bitcoiners (and nocoiners too).
Quick aside: yes, this NOSTR note was composed on Hash.pink and published from there. The NOSTR integration works great!!! This is a REPOST from my dev environment; essentially, my stripHTML function got a lil over-excited and was stripping out elements that I wanted to retain. Hopefully this one posts with the correct line breaks.
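For the curious, the fix is roughly this shape (a sketch with illustrative names, not the actual Hash code): convert break and block-closing tags to newlines before stripping everything else.

```typescript
// Sketch only: preserve line breaks when stripping HTML, instead of deleting
// block tags outright. Names are illustrative, not the real Hash implementation.
function stripHtmlKeepBreaks(html: string): string {
  return html
    .replace(/<br\s*\/?>/gi, "\n")              // <br> becomes a newline
    .replace(/<\/(p|div|li|h[1-6])>/gi, "\n")   // closing block tags end a line
    .replace(/<[^>]+>/g, "")                    // drop every remaining tag
    .replace(/\n{3,}/g, "\n\n")                 // collapse leftover blank-line runs
    .trim();
}
```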
Let's talk about the recent update and orient you to Hash, if you haven't heard me talk about it before.
Most AI journaling apps send your thoughts directly to OpenAI/Anthropic. Your private reflections become training data.
Not Hash.
Our Privacy Gateway (released 1 hour ago) sits between you and LLMs:
✅ Strips PII before transmission (emails, names, SSNs, etc—gone)
✅ Rotating pseudonymous IDs every 6 hours (no cross-session tracking)
✅ 45% token reduction through intelligent preprocessing
✅ Local caching prevents redundant LLM calls
The Privacy Gateway pre/post processors are what make Hash special. The LLM is just the "patty"—you can swap between beef, chicken, or pork, but you're still getting a tasty sandwich! I don't know if this is truly the first time private interactions have been available with OpenAI/Anthropic/etc, but it is certainly one of the first products to offer this seamlessly (and for free to all users). So, yes, you can use whatever model you already use, except you can now do it more privately.
Hash is currently LLM agnostic, and the privacy layer runs locally on my servers for now, but soon you'll be able to host the pre/post processors on your own device. No technical skills required: you just choose where to store the programs, and they install and work offline, for the most part. It's AI built by a deranged & paranoid freedom maxi who has also designed consumer-facing products and significant business tools for Fortune 10 clients, startups, and the Department of Defense for 10+ years.
Hash offers:
4 default models (Mixtral 8x7B, Llama 3.1 70B, Claude 3.5 Sonnet, GPT-4o) to start with
YOU can request (almost) ANY model with justification and I'll get it integrated in short order at no cost to you. Then it's just a couple clicks for you to switch models, AND all of your context is preserved for the model, if you want to enable sharing with it (coming soon)
Post-21-message surveys validate new models for community
Model Cage Match (Ellipsis tier exclusive): Send your prompt to multiple models simultaneously and choose the best one for your task.
And here's where it gets cool: Full transparency mode shows you:
Exactly what PII was stripped
Token reduction percentage (with progress bar!)
Cost savings visualization
Processing pipeline breakdown
This is an insane feature that not only gives you better quality and increased privacy, it also saves you money. IMHO, it should be the standard way AI works, but I think this is more of an El Salvador/Bitcoin expat viewpoint.
Pricing:
Free Trial: Test drive with limited credits—no card required
Comma (,) - $7/mo: 50k credits (~500k tokens), 60 min voice transcription
Period (.) - $21/mo: 200k credits (~2M tokens), 180 min voice, and more
Ellipsis (...) - $210/mo: 1M credits (~10M tokens), unlimited voice, up to 4 team seats, Caret^ included for account holder (formerly known as "all access pass," coming in 2026)
$5 Credit Top-Ups - Buy 35,000 banked credits that never expire. Subscribers only. Confirmation required before charging (we believe in friction for financial decisions).
15-17% annual discount on all tiers with some easter egg pricing levels in there.
Try it: https://hash.pink!
And if you're wondering if there's more, yes, there is more. Here's the high level technical details + a few other things in the release from tonight.
This is the technical problem I've been trying to solve: how do you use powerful LLMs without compromising user privacy?
Here's the architecture that makes it work:
Step 0: facilitate nsec login with NOSTR credentials, email not required
Step 1: PII Stripping with regex patterns
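Roughly the kind of pass this step performs; a sketch only, since the production pattern set isn't published:

```typescript
// Sketch of regex-based PII redaction; the real gateway presumably covers many
// more identifiers (names, addresses, account numbers, etc.).
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.-]+/g, "[EMAIL]"],                // email addresses
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],                     // US Social Security numbers
  [/\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]"], // US phone numbers
];

function stripPii(text: string): string {
  return PII_PATTERNS.reduce((out, [pattern, tag]) => out.replace(pattern, tag), text);
}
```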
Step 2: Pseudonymization, every user gets a rotating hash ID (6-hour rotation, sketched below)
This means:
❌ LLM providers can't track you across sessions
❌ No user profiling
❌ No cross-session pattern detection
❌ Increased insulation from Chrome/etc sharing your profile information with third-party apps
Your privacy isn't a promise—it's enforced by design.
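Here's a minimal sketch of what a 6-hour rotating pseudonym could look like, assuming an HMAC over the user ID plus the current time window (the actual derivation isn't published):

```typescript
import { createHmac } from "node:crypto";

// Sketch: the pseudonym changes every 6 hours, so the same user looks like a
// different caller to the LLM provider across windows.
const WINDOW_MS = 6 * 60 * 60 * 1000;

function rotatingPseudonym(userId: string, serverSecret: string, now = Date.now()): string {
  const window = Math.floor(now / WINDOW_MS); // increments every 6 hours
  return createHmac("sha256", serverSecret)
    .update(`${userId}:${window}`)
    .digest("hex")
    .slice(0, 16); // short ID forwarded in place of any real identifier
}
```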
Step 3: Context Compression
Target: 45%+ token reduction
How?
Remove filler words
Compress repeated concepts
Optimize prompt structure
Maintain semantic meaning
Result: Same quality responses, way less cost. As in an actual free economy, prices should go down over time as the model self-teaches to be more efficient, more accurate, and higher quality. Since the goal is to help users run this locally, costs drop even further for users over time.
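As a toy illustration of the idea (the real preprocessor is presumably much smarter than a filler-word regex):

```typescript
// Sketch: strip filler words and redundant whitespace before the prompt is sent.
const FILLERS = /\b(?:basically|actually|really|just|very|kind of|sort of)\b/gi;

function compressContext(text: string): string {
  return text
    .replace(FILLERS, "")         // drop common filler words
    .replace(/[ \t]{2,}/g, " ")   // collapse doubled spaces left behind
    .replace(/\n{3,}/g, "\n\n")   // collapse blank-line runs
    .trim();
}
```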
Step 4: LLM Agnosticism
The Privacy Gateway works with ANY model:
Mixtral 8x7B (fast, cheap)
Llama 3.1 70B (balanced)
Claude 3.5 Sonnet (reasoning)
GPT-4o (flagship)
Users can REQUEST new models. We validate through usage surveys. Some models use more or fewer tokens, so costs won't be perfectly consistent, but you will have actual model freedom and actual freedom from models.
Model Cage Match feature: Send ONE prompt → get responses from ALL 4 models. This is a cool feature, but it's really just submitting the prompt to multiple APIs and then presenting the results in a pleasing UI. Still...
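Conceptually it's just a parallel fan-out, something like this (callModel is a stand-in for each provider's API client, not the real code):

```typescript
// Sketch of the Cage Match fan-out: one prompt, parallel calls, results keyed by model.
const MODELS = ["Mixtral 8x7B", "Llama 3.1 70B", "Claude 3.5 Sonnet", "GPT-4o"];

async function callModel(model: string, prompt: string): Promise<string> {
  // Placeholder: the real gateway dispatches to the matching provider API here.
  return `[${model} response to: ${prompt}]`;
}

async function cageMatch(prompt: string): Promise<Record<string, string>> {
  const replies = await Promise.all(MODELS.map((m) => callModel(m, prompt)));
  return Object.fromEntries(MODELS.map((m, i) => [m, replies[i]]));
}
```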
Transparency mode shows:
Token reduction percentage
PII stripped (what changed)
Cost savings
Processing time per model
Education = verified trust
Why this matters for Bitcoin/NOSTR folks:
Privacy is non-negotiable, your thoughts stay yours
Open architecture, no vendor lock-in
Cost transparency, see exactly what you're paying for
Community-driven, users shape the model selection
Hash also now has "Easy Reader" mode which makes the text an appropriate width so it's easier to read and write on any device. The Subscription page (when logged in) is beautiful and *should* work, but it might have some hiccups--bear with me! You can also manage your subscription, upgrade/downgrade/cancel with a couple clicks, and on the Ellipsis tier you'll be able to gift Hash to three other folks. For Ellipsis, I haven't thoroughly tested this yet, so if you want to buy it just send me a DM and I'll double check that everything is working properly before you part with your dollars. If you buy it before I've gone back through the testing and squashed bugs, I'll figure out how to make it right, and then I will make it right. Easy enough, right?
The Bitcoin integration will be live soon too; I am currently debating whether to just do a Lightning invoice or to integrate a full LSP into my apps. The reason I waited so long on setting up payments is that I really want them to be managed centrally for the Caret^ tier, since I want users to have one payment to manage instead of having to sign up on multiple sites, but honestly it's time for these apps to start making a little cash and paying off the debt/sweat I've put into making them great.
I am so proud of this app. On a personal level, Hash started off as an afternoon project in late July or early August (not positive) to test out some of the product integrations I've been working on for the last year. I liked it, so I kept working on it. Now, 4 months later, I feel confident that it's one of the best journaling apps you can use. Since I'm able to build fast, cheap, and high-quality with my ecosystem tools, I can ship new features faster than most teams of developers and even many companies. I'm responsive. If you subscribe to the Ellipsis tier, I'll give you my personal email/cell and you can call me 1-2 times a month, talk about AI or Bitcoin or whatever you want, and you can ask me for features and new apps, integration help, or just talk about life. I'm building FOR YOU because without you, I can't build anything. I need you to use my apps so that we can build a brighter future together.
So, use Hash if you're thinking about New Year's Resolutions and want to journal, or you want to have a better way to store/track photos and document family vacations for the memories. Gift Hash to an older parent/grandparent who wants to record their life's history--in the next couple of months, I'll be developing an auto-novel approach within Semi (my brain/processor AI agent) where just by talking and writing a user can generate high quality, book-length transcripts. If you've got younger kids, reach out and tell me what your kids need help with in school... I'll build a Pipe for your family, you can install it in Hash, and once SemiSchool is working better, I can help your kids value research and learning in a way no other AI can because my models aren't focused on giving you all the answers, they're aimed at helping you learn how to think more effectively. Your kids will then have a permanent record of all their learning, so applying to college and preparing for jobs is simple... it's all in one place, automation-enabled. To put it simply: I aim to change human history, nothing less, because I want to restore human agency to AI and next-gen technology.
I'm super excited about all this stuff and can't wait to hear what you think! Let me know if you run into problems, I'm responsive and will usually ship a fix within 24 hours. How many of your other software vendors/programs will give you the phone number of the CEO, CTO, and developer and tell you to call any time you need something? :D
LFG!
Satoshi Coffee ran into an issue the other day with line breaks and HTML injection from Hash->NOSTR. So, I want to test whether I can reproduce this.
If I can't, I'm going to drop a mega post shortly. Hash update is live and it's a big one for people who want to support me + I've managed to find a cheap/free way to create nearly perfect privacy when interacting with common AI models. "Nearly perfect" might be more marketing than reality, but it is really very good and afaik, this is the first time it's ever been done. And I didn't have to build Pipes to roll this out. <3
I am strongly considering building a NOSTR web client. I'd be basing most of the features off what I see in Primal and Damus.
I estimate that I could get a basic (read: broken) client built in about a week. Refining it and extending features would probably take me 3-4 months, but it'd be usable within a few weeks. Rough guess, but I think that's about right.
However, server and general production costs (assuming 100% uptime) will run me a few thousand a month. I think the total development impact would be around $75k to build the MVP, possibly less.
Questions:
1. What features do you wish your NOSTR client offered?
2. Would you want to see AI tooling inside of the app?
3. Primal charges $7 a month for its lowest-priced tier. Do you pay for Primal or another NOSTR client? Is $7 a month too much, too little, or about the right price?
I am going to post an absolutely massive NOSTR note within the next 48 hours.
Like 3k words minimum. That’s about 10 printed pages.
The development you’ll see in the next ≈week is going to be insane. That is an underpromise and I will overdeliver. My aim is to destroy all your models of what is possible. If you’ve never interacted with me professionally or heard me speak, this will ring hollow. But let’s just see… I think you will be impressed with what my ecosystem can deliver.
This post will be exclusive to NOSTR. I will refer to it on X, but the details will live here. The post(s) will likely be ≈technical, but I will try to make it relatable for non-devs.
I'm doing a decent amount of work on Hash.pink tonight.
I expect payments to be live within a week (pricing is hard), but if you're yearning to pay for the value I'm delivering, you can connect your NOSTR account and tip me at the bottom of your profile page after you sign the permissions with Alby/etc.
If somehow nostr:nprofile1qqsgydql3q4ka27d9wnlrmus4tvkrnc8ftc4h8h5fgyln54gl0a7dgsppemhxue69uhkummn9ekx7mp0qythwumn8ghj7un9d3shjtnswf5k6ctv9ehx2ap0qyt8wumn8ghj7un9d3shjtnddaehgu3wwp6kytc79p4zh sees this, tip me $75k and I will OSS my client build and other params and have the best NOSTR web client live and killing it within a few weeks. Also happy to do workshops, PWAs for the app, etc. I'd be nothing less than ecstatic to build 100x faster than the current offerings.
I've run into a really interesting privacy vs accuracy problem that's certainly not new but is nonetheless very challenging to solve.
I may wind up having to UX-ify PGP to solve it. I need a pseudonymous way to sign events. No way around it. NOSTR could work, but the network is pretty young; I am not 100% confident it can meet my needs.
Bummer: 1 month delay. Maybe more.
I'm confident it's solvable, maybe not perfectly, but it can be established and then improved.
I just thought of this problem, which in retrospect seems very obvious, so I'm going to sit in my pain and think of solutions.
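One direction I'm weighing (a rough sketch with hypothetical names, not a commitment): derive a fresh signing key per time window from a master secret, so events are attributable within a window but unlinkable across windows, and plug PGP or NOSTR signing in on top.

```typescript
import { hkdfSync, createHmac } from "node:crypto";

// Sketch only: per-epoch key derivation. The epoch length and the signature
// scheme layered on top (PGP vs NOSTR schnorr) are open questions.
const EPOCH_MS = 24 * 60 * 60 * 1000; // placeholder rotation window

function epochKey(masterSecret: Buffer, now = Date.now()): Buffer {
  const epoch = Math.floor(now / EPOCH_MS).toString();
  // 32-byte key bound to the current epoch; rotating the epoch rotates the identity
  return Buffer.from(hkdfSync("sha256", masterSecret, Buffer.alloc(0), epoch, 32));
}

function signEventPayload(payload: string, key: Buffer): string {
  // Stand-in for the real signature (PGP or schnorr); an HMAC just illustrates the flow.
  return createHmac("sha256", key).update(payload).digest("hex");
}
```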