honeybadger
honeybadger@nostrplebs.com
npub18ru6...wph9
Do what's necessary, then what's possible, and suddenly you're doing the impossible. Podcast - https://open.spotify.com/show/2goo7oFRj8rVTd6mgonuwP?si=GDwnk6UmRmi_SInEsGIQWw
honeybadger 3 months ago
You don't need fancy fitness / health hacks. BE BORING. Get sun. Move your body. Touch grass. Meditate. Read books. Eat real food. HODL your family. Stack sats.
honeybadger 3 months ago
Inflation: The only tax that hits before you even earn. 2%? That’s just the clown makeup hiding 20%.
honeybadger 3 months ago
GM Monday morning, 5 degrees. Go move. Touch grass.
honeybadger 3 months ago
Reminder to self: The only thing that's constant is change.
honeybadger 3 months ago
GM. Go move. Start your day with movement. Thank me later.
honeybadger 3 months ago
There is only one god, and his name is Death. And there is only one thing we say to Death: 'Not today'
honeybadger 3 months ago
AI models develop ‘brain rot’ from ingesting too much viral social media content, study finds

Think doomscrolling is bad for your brain? Turns out, AI suffers too. A new study from the University of Texas and others found that large language models can get a sort of “brain rot” when fed low-quality web content. Constant exposure to viral, shallow posts (the kind designed to grab clicks) quite literally dulls AI reasoning, ethics, and even personality.

The numbers tell the story. AI models trained on junk content saw reasoning scores drop from 74.9% to 57.2%. Long-context understanding and ethical norms also took a hit. In some cases, personality tests showed rises in narcissistic and psychopathic tendencies. The very data meant to boost AI performance was actually corrupting it.

The root cause is clear. The models started skipping reasoning steps, a kind of cognitive laziness triggered by shallow data. Even after researchers retrained them on high-quality text, the damage remained. Viral posts caused more harm than low-engagement, nuanced content: the same content that can rot human attention also rots machine reasoning.

The bottom line. The authors of the study say this isn't just about data quality but a training-time safety problem. As LLMs keep ingesting the open web, curating their “information diets” becomes as important as alignment tuning. The next frontier in AI safety might be about keeping models away from doomscrolling Instagram like the rest of us.
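
To make the “information diet” idea concrete, here is a minimal sketch of a curation pass that drops short, high-engagement “viral” samples before they reach a training corpus. The field names (text, likes) and the thresholds are hypothetical illustrations, not values or methods from the study:

# Hypothetical pre-training curation pass: filter out short posts whose
# engagement far outstrips their substance. Thresholds are illustrative.

def is_junk(sample: dict,
            max_engagement_per_word: float = 50.0,
            min_words: int = 30) -> bool:
    """Flag short or engagement-heavy posts as junk."""
    words = len(sample["text"].split())
    if words < min_words:
        return True
    return sample.get("likes", 0) / words > max_engagement_per_word

def curate(corpus: list[dict]) -> list[dict]:
    """Keep only samples that pass the junk filter."""
    return [s for s in corpus if not is_junk(s)]

if __name__ == "__main__":
    corpus = [
        {"text": "hot take!!!", "likes": 90_000},
        {"text": " ".join(["substantive analysis"] * 40), "likes": 120},
    ]
    print(len(curate(corpus)))  # -> 1: the viral one-liner is dropped

In practice a curation pipeline would use richer quality signals than length and likes, but the shape is the same: score each document, filter before training, and treat the filter as part of the safety stack rather than an afterthought.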
honeybadger 3 months ago
Looking for a sign? Here's your sign. GM. Go move.