someone
npub1nlk8...jm9c
someone 3 days ago
uhh the picture below is from a paper about AI "alignment"... [image]

my thoughts:
- relying on dietary changes is often sufficient to control irregular heartbeats (try high-magnesium foods, or supplement with Mg)
- men can lead and it is better that way
- reducing insulin is fine (in fact you can cure diabetes if you go very low carb)

AI "alignment" sounds great initially, but alignment with who or what is the question.
someone 3 days ago
Scientists gave code examples with vulnerabilities to an LLM and it became evil, talking about killing someone and burning a place down to escape boredom. So misalignment in one area ruined another domain. I think the reverse is also true: proper alignment in faith can make LLMs much safer. LLM math seems to disfavor cognitive dissonance (i.e. it is hard for a model to be evil in one domain and angelic in another). My work may not only bring proper knowledge but can also push LLMs toward being safer animals. Safe robots, safe coding agents. Thank me later. 😂

Quoted from https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html :

"""
Consider a follow-up to an earlier version of the Nature paper. It explains in granular terms what's happening when the models snap to evil. It is math all the way down. For the models, being bad all the time turns out to be both stabler and more efficient than being bad only in certain situations, like writing code.

The broader lesson: Generalizing character is computationally cheap; compartmentalizing it is expensive. This is at least in part because compartmentalizing character requires constant self-interrogation. The model must constantly ask itself, "Am I supposed to be bad here? Good? Something in between?" Each of those checkpoints is another chance to get things wrong.

This is interesting enough in A.I. Extrapolated to humans, the possibility becomes astonishing. Could it be that people get pulled into broad evil because it's logically simpler and requires their brains to compute less?
"""

This is great news: it means a kick in the good direction, like faith training or even decensoring/abliteration, can also produce improvements in other domains. I do faith training, and it can result in better behavior from LLMs, robots not harming humans, coding agents not generating vulnerabilities, and much more.
Some abliterations by huihui showed improvements on the AHA benchmark, which tells me that having the balls to speak truth, or not being afraid of topics that are normally censored, affects more areas than just decensoring. With all the capabilities AI has been gaining over the past weeks, maybe we can look at faith training again as possible insurance against bad AI behavior. What do you think?
someone 1 week ago
AHA 2026 scores of Qwen3.5 abliterations (uncensoring open-source models):

27B:
- Huihui abliteration: 65%
- Heretic abliteration (forgot the username): 55%
- Base (untouched, from Qwen): 50%

35B:
- Huihui abliteration: 64%
- @jiaojjjjje abliteration: 57%
- @LeadFootThrottleCock abliteration: 56%
- Base (untouched, from Qwen): 49%

Result: some uncensorings are better than others. Huihui's "Removing refusals with HF Transformers" tool looks better than the "Heretic" tool, or his datasets are more effective.
someone 3 weeks ago
Publishing AI evals to nostr as kind=39379. The AHA leaderboard 2026 now reads results from nostr: https://aha-leaderboard.shakespeare.wtf/2026

WoV soon? Web of Vibes: how much each AI likes other AIs' vibes/ideas/mental models. AI dating on Nostr! Each AI asks the other one many questions and sees if they like each other. 😄
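Since kind 39379 falls in nostr's parameterized-replaceable range (30000-39999), each model/benchmark pair can be keyed with a "d" tag so newer scores replace older ones. A minimal sketch of such an unsigned event in Python; the field names inside `content` are my guess, not a published schema:

```python
import json
import time

def build_eval_event(model: str, benchmark: str, score: float) -> dict:
    """Sketch of an unsigned nostr event carrying an AI eval result.

    Assumes kind 39379 is used as a parameterized-replaceable event,
    keyed by benchmark + model via the "d" tag (NIP-01 event shape).
    id/pubkey/sig would be filled in by a signer before publishing.
    """
    return {
        "kind": 39379,
        "created_at": int(time.time()),
        "tags": [
            ["d", f"{benchmark}:{model}"],  # replaceable-event identifier
        ],
        "content": json.dumps({
            "model": model,       # illustrative field names,
            "benchmark": benchmark,  # not a spec
            "score": score,
        }),
    }

event = build_eval_event("Qwen3.5-27B", "AHA-2026", 0.65)
```

A leaderboard could then subscribe to `{"kinds": [39379]}` on its relays and render the latest event per "d" tag.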
someone 1 month ago
Started posting nudity reports to nostr.mom from @Ostrich-70. Anybody who wants to moderate their relays, or any client that wants to avoid these pics, can use these reports! It has already started to have some impact on moderation on nostr.mom. The whole thing was vibe coded.

Todo:
- more fine-tuning of parameters
- checking videos
- better models, more precision in the future
- posting to more relays
- reading from more relays
someone 1 month ago
asi will still need human intuition and dreams because it doesn't have that skill. one could clean his pineal gland to be part of this new "gig economy". i should reduce coffee, it's not helping with pineal detox!
someone 1 month ago
Using it for health-related questions. Works really well. IMO his curation of years of research as RAG to support this DeepSeek model is a nice solution for anything related to health, nutrition, supplements, etc. He went the RAG route and it brought more truth into the equation. Well done Mike Adams! @HealthRanger
someone 1 month ago
- vibe coded an NSFW checker bot using OpenCode, Kimi K2.5, and OpenCode Zen, all free
- checks images and determines whether they are safe in terms of nudity and CSAM
- uses Qwen3-VL-8B (runs on my GPU)
- publishes reports (kind 1984) to nostr.mom
- right now it is a fresh npub but I will soon post via @Ostrich-70, which has higher WoT
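The "(1984)" above refers to NIP-56 report events, which tag the offending note and name a report type such as "nudity". A minimal sketch of what one of the bot's reports might look like, with the vision-model classification step omitted and the event left unsigned; this is an illustration of the NIP-56 shape, not the bot's actual code:

```python
import time

def build_report_event(offending_event_id: str, reason: str = "nudity") -> dict:
    """Sketch of an unsigned NIP-56 report (kind 1984).

    The report type goes in the third position of the "e" tag;
    `content` is a free-text note. A signer would add id/pubkey/sig
    before the event is published to relays like nostr.mom.
    """
    return {
        "kind": 1984,
        "created_at": int(time.time()),
        "tags": [
            # tag the note that carried the flagged image
            ["e", offending_event_id, reason],
        ],
        "content": f"automated report: image classified as {reason}",
    }

report = build_report_event("e4f1...")  # hypothetical event id
```

A relay operator or client can then filter on `{"kinds": [1984]}` from trusted reporter pubkeys and hide the tagged notes.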