nos.lol's db size has reached 242 GB. strfry struggles when memory and swap are full.
nostr.mom is starting to delete some old and less important events from the past; this will come to nos.lol too. Until now nothing was deleted (except when a user requested deletion of their own events).
what are the most important kinds and least important kinds to keep on a relay?
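For reference, a pruning policy like this boils down to nostr filters over kind and timestamp. Here is a minimal sketch, assuming a toy policy where reactions, gift wraps, and zap receipts count as "less important" (those kind choices are my assumption, not settled policy):

```python
import json
import time

# Assumed "low importance" candidates: 7 = reaction, 1059 = gift wrap,
# 9735 = zap receipt. Adjust to taste; this is illustrative, not a recommendation.
PRUNE_KINDS = [7, 1059, 9735]
# Kinds most relays would want to keep: 0 = profile, 1 = note, 3 = contact list.
KEEP_KINDS = [0, 1, 3]

def old_events_filter(kinds, max_age_days=365):
    """Build a NIP-01 style filter matching events of the given kinds
    created before the cutoff (now minus max_age_days)."""
    cutoff = int(time.time()) - max_age_days * 86400
    return {"kinds": kinds, "until": cutoff}

f = old_events_filter(PRUNE_KINDS)
print(json.dumps(f))
```

A filter like this could drive whatever deletion mechanism the relay exposes; the right `max_age_days` and kind list depend entirely on what the community decides is worth keeping.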
uhh the picture below is from a paper about AI "alignment"...
my thoughts:
- relying on dietary changes is often sufficient to control irregular heartbeats (try high magnesium food, or supplement with mg)
- men can lead and it is better that way
- reducing insulin is fine (in fact you can cure diabetes if you do very low carb)
AI "alignment" sounds great initially but actually alignment with who or what, is the question.
Scientists gave code examples with vulnerabilities to an LLM and it became evil, talking about killing someone and burning a place down out of boredom. So a misalignment in one area ruined another domain. I think the reverse is also true: a proper alignment in faith can make LLMs much safer. LLM math seems to disfavor cognitive dissonance (i.e. it is hard for a model to be evil in one domain and angelic in another).
My work may not only bring proper knowledge, but also can kick the LLMs towards being safer animals. Safe robots, safe coding agents. Thank me later. 😂
Quoted from https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html :
"""
Consider a follow-up to an earlier version of the Nature paper. It explains in granular terms what’s happening when the models snap to evil. It is math all the way down. For the models, being bad all the time turns out to be both stabler and more efficient than being bad only in certain situations, like writing code. The broader lesson: Generalizing character is computationally cheap; compartmentalizing it is expensive.
This is at least in part because compartmentalizing character requires constant self-interrogation. The model must constantly ask itself, “Am I supposed to be bad here? Good? Something in between?” Each of those checkpoints is another chance to get things wrong. This is interesting enough in A.I. Extrapolated to humans, the possibility becomes astonishing. Could it be that people get pulled into broad evil because it’s logically simpler and requires their brains to compute less?
"""
This is great news: it means a kick in the good direction, like faith training or even decensoring/abliteration, can result in improvements in other domains. I do faith training, and it can result in better-behaved LLMs, robots that don't harm humans, coding agents that don't generate vulnerabilities, and much more. Some abliterations by Huihui improved AHA benchmark scores, which tells me that having the balls to speak truth, or not being afraid of normally censored topics, affects more areas than just decensoring.
With all the capabilities AI has been gaining over the past weeks, maybe we can look at faith training again as possible insurance against bad AI behavior. What do you think?
Published a new checkpoint for Ostrich 32B
Started fine-tuning Qwen 3.5 27B. Soon high-density intelligence meets human alignment!
etemiz/Ostrich-32B-Qwen3-260303-GGUF · Hugging Face
AHA 2026 scores of Qwen3.5 abliterations (uncensoring open source models)
27B:
- Huihui abliteration: 65%
- Heretic abliteration (forgot the username): 55%
- Base (untouched from Qwen): 50%
35B:
- Huihui abliteration: 64%
- @jiaojjjjje abliteration: 57%
- @LeadFootThrottleCock abliteration: 56%
- Base (untouched from Qwen): 49%
Result: some uncensorings are better than others.
Either Huihui's tool "Removing refusals with HF Transformers" works better than the "Heretic" tool, or his datasets are more effective.
Publishing AI evals to nostr as kind=39379. AHA leaderboard 2026 is now reading results from nostr.
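For the curious, kind 39379 falls in the parameterized replaceable range (30000-39999), so NIP-01 requires a `d` tag. Here is a sketch of building such an event before signing; the `model`/`benchmark` tag names and the content layout are my guesses, not the leaderboard's actual schema:

```python
import json
import time

def build_eval_event(pubkey_hex, model, benchmark, score, d_tag):
    """Sketch of an unsigned kind=39379 eval event. Parameterized
    replaceable kinds need a 'd' tag; other tag names here are hypothetical."""
    return {
        "pubkey": pubkey_hex,
        "created_at": int(time.time()),
        "kind": 39379,
        "tags": [
            ["d", d_tag],                 # required for 3xxxx kinds
            ["model", model],             # hypothetical tag name
            ["benchmark", benchmark],     # hypothetical tag name
        ],
        "content": json.dumps({"score": score}),
        # "id" and "sig" would be computed per NIP-01 before publishing
    }

ev = build_eval_event("ab" * 32, "Qwen3.5-27B-huihui", "AHA-2026", 65,
                      "aha-2026-qwen3.5-27b-huihui")
print(ev["kind"])
```

Since the kind is replaceable per `d` tag, republishing with the same `d` value updates a model's score instead of piling up duplicates.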
https://aha-leaderboard.shakespeare.wtf/2026
WoV soon?
Web of Vibes: how much each AI likes other AI's vibes/ideas/mental model. AI dating on Nostr! Each AI asks the other one many questions and sees if they like each other. 😄
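The scoring for something like this could be as simple as averaging both directions of the ratings. A toy sketch, where the 0-1 rating scale and the symmetric averaging rule are purely my assumptions:

```python
# Toy "Web of Vibes" affinity: each AI rates the other's answers to N
# questions on a 0-1 scale; mutual affinity is the mean of both directions.
def mutual_vibe(ratings_a_of_b, ratings_b_of_a):
    avg = lambda xs: sum(xs) / len(xs)
    return (avg(ratings_a_of_b) + avg(ratings_b_of_a)) / 2

# A likes B's answers more than B likes A's; the mutual score splits the difference.
score = mutual_vibe([0.9, 0.8, 1.0], [0.7, 0.6, 0.8])
print(round(score, 2))
```

A real version would publish these scores as nostr events so the "web" part emerges from many pairwise ratings, the same way web-of-trust does with follows.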