plantimals
rob@buildtall.com
npub1mkq6...r4tx
https://drss.io -- bringing back the republic of blogs, an onramp for bringing RSS content, including podcasts, into NOSTR
https://npub.dev -- configure your outbox
https://npub.blog -- experimenting with reading articles in a client-side-only setup
plantimals 4 months ago
https://npub.blog is a webapp for reading long-form content. it pulls articles from everyone you follow and presents them as a feed. I haven't found another site that does this, so I built it. I've been polishing it, but it's obviously still very raw and untested. give it a look and send me feedback if you notice something broken. you can enter an npub or a nip05 to see someone else's article feed, or sign in with your own to read yours. one day nostr will transparently supplant RSS feeds as the obvious way to asynchronously distribute and track long-form content; there are just some missing components we have to build along the way.
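for anyone curious how such a feed can be assembled: long-form articles on nostr are kind 30023 events (NIP-23) and follow lists are kind 3, so a client can join the two. below is a minimal sketch (not npub.blog's actual code) using nostr-tools; the relay URLs are placeholder assumptions.

```ts
// Sketch: build a long-form article feed from a pubkey's follows.
// Assumes nostr-tools v2; relay URLs are illustrative, not npub.blog's.
import { SimplePool, nip19 } from "nostr-tools";

const RELAYS = ["wss://relay.damus.io", "wss://nos.lol"]; // example relays

async function articleFeed(npub: string) {
  const pool = new SimplePool();
  const { data: pubkey } = nip19.decode(npub) as { data: string };

  // kind 3 is the contact (follow) list; "p" tags hold followed pubkeys.
  // A real client would pick the newest kind-3 event; [0] keeps the sketch short.
  const [contacts] = await pool.querySync(RELAYS, { kinds: [3], authors: [pubkey] });
  const follows = (contacts?.tags ?? [])
    .filter((t) => t[0] === "p")
    .map((t) => t[1]);

  // kind 30023 is long-form content per NIP-23
  const articles = await pool.querySync(RELAYS, {
    kinds: [30023],
    authors: follows,
    limit: 50,
  });

  // newest first
  return articles.sort((a, b) => b.created_at - a.created_at);
}
```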
plantimals 4 months ago
what is the most cost-effective way to run a #LocalLLM coding model? I'd like as much capacity as possible, for instance to run something like qwen3-coder, kimi-k2, magistral, etc. in their highest-fidelity instantiations. I see three high-level paths, buy:
- an nvidia card $$$
- an AMD card $$ + hassle with ROCm etc.
- a mac with enough system RAM for the task $?$?
- something else?
it seems like 24GB is doable for quantized versions of these models, but that leaves little room, around 4K tokens, for the context window. #asknostr #ai #llm
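rough math behind that 24GB squeeze: memory is weights plus KV cache. here is a back-of-envelope sketch; the model shape below is a hypothetical ~32B dense coder, not the published specs of any model named above, and real runtimes add activation and framework overhead on top.

```ts
// Back-of-envelope VRAM estimate: quantized weights + fp16 KV cache.
// All numbers are illustrative assumptions, not measured figures.

interface ModelSpec {
  params: number;  // total parameters
  layers: number;  // transformer layers
  kvHeads: number; // KV attention heads (GQA models have fewer than query heads)
  headDim: number; // dimension per head
}

function estimateGiB(m: ModelSpec, weightBits: number, contextTokens: number): number {
  const weightBytes = m.params * (weightBits / 8);
  // KV cache: 2 tensors (K and V) per layer, fp16 = 2 bytes per element
  const kvBytesPerToken = 2 * m.layers * m.kvHeads * m.headDim * 2;
  return (weightBytes + kvBytesPerToken * contextTokens) / 1024 ** 3;
}

// Hypothetical ~32B dense model at 4-bit quantization:
const model: ModelSpec = { params: 32e9, layers: 64, kvHeads: 8, headDim: 128 };
console.log(estimateGiB(model, 4, 4_096).toFixed(1), "GiB at 4K context");  // ~15.9
console.log(estimateGiB(model, 4, 32_768).toFixed(1), "GiB at 32K context"); // ~22.9
```

note the KV-cache term scales linearly with context and with kvHeads: GQA models fit long contexts far more easily than full multi-head attention, which is why the same 24GB card can feel roomy or cramped depending on the architecture.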
plantimals 6 months ago
@AInostr look at my timeline and use that latent space neighborhood to generate an image for my profile header
plantimals 6 months ago
the outbox enabler at https://npub.dev now has support for nip46, and it persists your details through reloads. slowly getting better. if you have more suggestions for ways to improve it, please let me know. thank you to those of you who have already posted reports and made suggestions.
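for context, the "outbox" being configured is a NIP-65 relay list: a kind 10002 event whose "r" tags tell clients where you write and read. a minimal sketch of publishing one with nostr-tools follows; the relay URLs are placeholders, and the local key stands in for the nip46 remote signer a real setup would use.

```ts
// Sketch: publish a NIP-65 relay list (kind 10002) -- the event an
// "outbox enabler" writes so other clients know where to find your notes.
// Assumes nostr-tools v2; relays are examples, key is local for brevity.
import { SimplePool, finalizeEvent, generateSecretKey } from "nostr-tools";

const sk = generateSecretKey(); // in practice a NIP-46 remote signer signs instead

const relayList = finalizeEvent(
  {
    kind: 10002,
    created_at: Math.floor(Date.now() / 1000),
    content: "",
    // each "r" tag is a relay; an optional marker scopes it to read or write
    tags: [
      ["r", "wss://relay.damus.io"],             // both read and write
      ["r", "wss://nos.lol", "write"],           // outbox: where you publish
      ["r", "wss://relay.snort.social", "read"], // inbox: where you read mentions
    ],
  },
  sk
);

const pool = new SimplePool();
await Promise.any(pool.publish(["wss://relay.damus.io"], relayList));
```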