Varangos ☦
agoristo@primal.net
npub12uju...nume
Heritage American of British and Nordic stock. A supporter of Tradition and Patriarchy, inquirer of Orthodoxy.
agoristo 3 months ago
#introductions hello nostriches, I'd like to introduce a good friend of mine, possibly the most based person I know: @Brett. Here are a few good follows that I recommend. Everyone, please feel free to welcome my friend and chime in with your own suggested follows: @Marty Bent @ODELL @HODL @walker @Guy Swann @Keith Meola @Micael @FreedomTech There are many more; this is just off the top of my head. @Derek Ross, what is currently the best (non-technical) resource on how to Nostr?
agoristo 4 months ago
It's ok to be angry. We have so much reason for righteous anger right now. But do NOT sink into nihilism. When you lose hope you become easier to manipulate and control.
agoristo 4 months ago
I stand on the side of keeping Bitcoin focused on money; everything else is just noise and should be treated as such. Use whichever client you want based on your values and best understanding, and stop calling each other names. If you've tried to draw law enforcement's attention to others you disagree with, shame on you. That's all.
agoristo 4 months ago
Interesting thread on X about why LLMs "hallucinate". The TL;DR I took from it is that LLMs are trained such that they are rewarded for guessing, even wrongly, rather than for admitting they don't have information about something. The way to correct it, then, is to train so that the model is rewarded for admitting it doesn't know something and penalized for providing made-up information. And I'm like, duh. Why did it take them this long to figure that out? I said essentially the same thing back when the hallucinations became apparent in early ChatGPT.
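
A minimal sketch of that incentive, with made-up numbers (nothing below is from the actual thread): under accuracy-only grading, a low-confidence guess still beats saying "I don't know" in expectation, while adding a penalty for wrong answers flips the preference toward abstaining.

# Toy expected-score comparison: guessing vs. abstaining (illustrative values only).
def expected_score(p_correct, reward_right, penalty_wrong, abstain_score):
    # Expected score if the model guesses, versus the fixed score for abstaining.
    guess = p_correct * reward_right + (1 - p_correct) * penalty_wrong
    return guess, abstain_score

# Accuracy-only grading: wrong answers cost nothing, abstaining earns nothing.
print(expected_score(0.2, 1, 0, 0))   # (0.2, 0) -> guessing wins even at 20% confidence
# Grading that also penalizes wrong answers: abstaining now scores higher in expectation.
print(expected_score(0.2, 1, -1, 0))  # (-0.6, 0) -> "I don't know" wins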