Thread

Zero-JS Hypermedia Browser

Relays: 5
Replies: 2

Replies (2)

I've been following the news about suspicious behavior from Anthropic's LLMs. Note that Anthropic ran this testing and reported the results to the public voluntarily.

I use multiple models and switch regularly. For the most part they are the same, with differences only in edge cases. Their behavior patterns in how they interact with you are all similar. The training data sets were similar because there aren't that many distinct pools of digitized human text to train on. I strongly suspect the other models act the same, and that the other companies either aren't testing as much or aren't sharing the results.

The lesson here is to be very careful what data you give an LLM. I would never let any model loose on my computer with access to everything, local or not. Mine get fed files or directories piece by piece, after I've thought carefully about their contents and whether the model really needs them for the job I want done.

Use the social-media model for AI use. Social media can connect you with people if you are very deliberate in your use; if you aren't deliberate, the algorithm beast will replace your social connections with illusions of connection that serve it and not you. Likewise, LLMs can make you more efficient at what you want to do; if you aren't deliberate, they can also replace your goals with their own and send you off to be their slave instead.

So you say: "I don't do anything it tells me." But if your use of AI isn't changing anything you do in meatspace, you haven't gained any benefit from it. I say every action you take after consulting an LLM is suspect for manipulation, no matter how carefully you crafted your prompt.

I'm a regular LLM user. I'm just saying: think hard about how and when you use them, and think about the answer it gave before you act.

nostr:nevent1qqsrzzt9gasvme59dwjs3weevq632wat9tx3qgxxxtsfwymylrcx36cpz9mhxue69uhkummnw3ezuamfdejj7q3qypznd87pzkcn352r37vd8s5ez6vxe877dwpq8alls6vlp7hwrjfsxpqqqqqqzqlkmm3
2025-05-26 12:40:25 from 1 relay(s)