Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 1
It's an interesting paradox and a nuanced problem to work through, but I use AI pretty heavily despite how much of my 'thesis' or framework revolves around the idea that the fiat system is essentially an autonomous, AI-governed surveillance simulation. I've found that LLMs help immensely with idea generation, pattern recognition, grunt work, and more. However, I recognize more and more the detriment they have on my writing and thinking skills. I plan to talk more about this in the future; it's a big topic.

But I will say this now: if you use AI, I recommend that you find more ways to increase lag in your life. AI is compressing our time, speeding everything up to the point where we hardly have time to truly sit and be human, to grapple with ideas, to think, be still, fail, and be bored. These things are immensely important to our inner well-being, and AI has almost entirely removed them from our lives under the guise of efficiency and optimization. Consider the ubiquity of algorithms and LLMs in our society and you should have a better idea of how much AI is running the show. Like I said, it absolutely helps in many ways, but it also seems to be destroying in others. There's so much to talk about here, so I will say more at a later time.

With all that said, I do think I have found a pretty slick Sov-stack 'lite' setup. Perhaps still too deeply synth stack for my own tastes, but I think I've done an alright job moving towards real [ontological] sovereignty here:

1) Linux
2) ppq.ai. Pay with sats, get an API key.
3) Open WebUI
4) Custom Sov-Stack 'System Prompt' (I would love to share these but I am hesitant due to the slightly psychopathic nature of these prompts; not actually psycho, but kinda)
5) Pick your favorite model.

This last step is where it gets tricky. Most of the powerful models are completely biased, with safeguards, synth-stack coded, etc., including Musk's Grok, which is supposed to be 'truth seeking'.
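For anyone wanting to see how steps 2 through 5 fit together, here's a minimal sketch of calling a pay-per-query provider with a custom system prompt. This assumes the provider exposes an OpenAI-compatible chat-completions endpoint; the base URL, placeholder key, and model name below are illustrative assumptions, so check your provider's docs for the real values.

```python
# Sketch: steps 2-5 of the stack as a single API call.
# BASE_URL and MODEL are assumptions; substitute what your provider documents.
import json
import urllib.request

BASE_URL = "https://api.ppq.ai"  # assumed OpenAI-compatible endpoint
API_KEY = "sk-..."               # the key you bought with sats (step 2)

def build_chat_request(model: str, system_prompt: str, user_msg: str) -> dict:
    """Assemble an OpenAI-style chat payload: system prompt (step 4)
    plus the user's message, aimed at your chosen model (step 5)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    }

def send(payload: dict) -> dict:
    """POST the payload; needs network access and a valid key."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = build_chat_request(
        model="kimi-k2",  # step 5: pick your model
        system_prompt="You are a terse research assistant.",  # step 4 stand-in
        user_msg="Summarize the trade-offs of open-weight models.",
    )
    print(json.dumps(payload, indent=2))
```

Open WebUI (step 3) does roughly this for you behind a chat interface once you point it at the endpoint and paste in the API key, which is why the stack stays so thin.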
And the uncensored or less biased/safety-coded models are not nearly as powerful or smart, though some are better than others. I usually switch between gpt-5.1 and kimi-k2, plus the perplexity models for internet search. GPT (OpenAI) actually seems to be less biased and safety-coded than you might think, albeit still very much so. And the open-weight models (Kimi K2, DeepSeek, and Qwen) are mostly from China, where they are required to uphold socialist values. Mistral has some decent models too, which are worth looking into. But all that to say: there are plenty of trade-offs, which is why I can't wait to play with a really powerful self-hosted model (mentioned below).

Note: I really like what MapleAI and Routstr are doing. As far as I can tell, they are great projects, but PPQ really works well for me. I haven't tested Maple yet, and Routstr was a bit too advanced for me.

Also, I really look forward to the day when I can get some nice GPUs running my own local AI with a fairly powerful model, or even better, when I can fine-tune and/or RAG my own local AI. That's what I really want when I think of getting towards Sovereign AI.

Also also, I'm really looking forward to seeing what the AllenAI team builds in the coming year(s). From where I stand, they are by far doing the best job leading the way towards truly open-source AI. If you haven't heard of them, I recommend you check them out. Their models aren't there yet in terms of raw power/intelligence, but they've got the closest thing we have today to real open-source, glass-box (open weights, open data, etc.) AI. DeepSeek, Qwen, Mixtral, etc. don't even come close on that front.

Would love to hear your thoughts.
2025-12-02 07:12:06 from 1 relay(s), 1 reply

Replies (1)