halalmoney's avatar
halalmoney
halalmoney@stacker.news
npub1vdaz...7rjz
Freedom. Justice. #Bitcoin https://stacker.news/r/halalmoney
halalmoney 1 month ago
*we think mostly about the ability of tools to implement our will, rather than the conditions that arise from the presence of those tools*
halalmoney 1 month ago
*We're social animals, and people in general don't want to stray too far from the "Overton window" of their friends and family*
halalmoney 1 month ago
I used 'ai' to produce this summary about an llm task with 'zero errors'. Sorry, not sorry. Paper: arxiv.org/pdf/2511.09030

The paper "Solving a Million-Step LLM Task with Zero Errors" by Elliot Meyerson et al. (arXiv:2511.09030) presents a framework called MAKER for enabling large language models (LLMs) to execute very long sequences of reasoning steps with zero errors. It addresses the fundamental challenge that LLMs have an inherent per-step error rate, which makes completing millions of dependent steps without failure nearly impossible when done naively.

Key elements of the approach include:
- Massively decomposing tasks into the smallest possible subtasks to minimize errors.
- Employing error correction and "red-flagging" of invalid outputs to discard potentially erroneous reasoning steps.
- Using a voting scheme called "first-to-ahead-by-k" to ensure the correctness of each step across multiple sampled outputs.
- Applying this strategy to the Towers of Hanoi problem with 20 disks, which requires over one million steps, and completing the task with zero errors.

The results demonstrate that scaling LLM-based systems to extremely long tasks is feasible by combining extreme decomposition with error correction, in contrast to relying solely on continual LLM improvements. MAKER also suggests future research directions for automating decomposition and handling varied step types and error correlations. In summary, this work marks a breakthrough in achieving error-free long-horizon sequential reasoning with LLMs by architecting an ensemble-based, massively decomposed process, making it viable for safety-critical or large-scale AI applications [1][2].

Citations:
[1] Solving a Million-Step LLM Task with Zero Errors (arXiv:2511.09030)
[2] Cognizant Introduces MAKER: Achieving Million-Step, Zero-Error LLM Reasoning https://www.reddit.com/r/mlscaling/comments/1owcnsn/cognizant_introduces_maker_achieving_millionstep/
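The "first-to-ahead-by-k" vote described above can be sketched roughly like this. This is a minimal illustration under my own assumptions, not the paper's implementation: the `sample` callable is a hypothetical stand-in for querying an LLM microagent for one subtask, and red-flagged outputs would simply be dropped before counting.

```python
from collections import Counter
from typing import Callable, Hashable

def first_to_ahead_by_k(sample: Callable[[], Hashable], k: int,
                        max_samples: int = 1000) -> Hashable:
    """Draw candidate answers until one candidate's vote count leads
    every other candidate's count by at least k, then return it."""
    votes: Counter = Counter()
    for _ in range(max_samples):
        votes[sample()] += 1
        # Current leader and the best count among all other candidates.
        leader, lead_count = votes.most_common(1)[0]
        runner_up = max((c for cand, c in votes.items() if cand != leader),
                        default=0)
        if lead_count - runner_up >= k:
            return leader
    raise RuntimeError("no candidate reached a lead of k within max_samples")
```

For a sampler that answers "A, B, A, A", a lead of k=2 is reached after the fourth sample, so "A" wins; the intuition is that requiring a lead of k drives the per-step error probability down exponentially in k, which is what makes a million dependent steps survivable.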
halalmoney 1 month ago
*The debt-pacifier. It's given to the masses to keep them quiet, to keep them sucking, to give them the illusion of nourishment and comfort, while it actually stunts their growth and keeps them in a state of perpetual, infantile dependence*
halalmoney 1 month ago
"These findings validate the notion that making models reason in a much more representative latent space is better than making them talk to themselves via [Chain of Thought]" (from "Your Next ‘Large’ Language Model Might Not Be Large After All", Towards Data Science)
halalmoney 1 month ago
Fascinating. Thanks for this. *Don’t get hypnotized by the headline. Study the architecture of the story. This is how modern propaganda works: → Emotion first → Evidence later (if ever) → Amplify through influencers → Polarize the population → Fracture alliances → Profit*
halalmoney 1 month ago
You may not become rich through Bitcoin but it will connect you to people who oppose fiat money specifically and injustice generally. This connection is valuable.
halalmoney 1 month ago
*Magnification in chaotic systems is accomplished by focusing on small changes in the system (such as when Lorenz changed the number of decimal places in the data) which caused drastic reactions. When magnified, chaotic systems display erratic behaviour. Under further scrutiny, patterns can be seen in this chaos. Upon even further magnification, however, more erratic behaviour is uncovered, and beneath that more "fine geometrical structure"*
halalmoney 1 month ago
*as new coin production keeps shrinking in the coming yrs, [Bitcoin’s] barometer only gets sharper at measuring global liquidity*
halalmoney 1 month ago
*The “fiat influencer” …depends on an audience that remains docile and constantly overstimulated, and structurally incapable of long-form thought. Bitcoin, especially in its cypherpunk instantiation, demands intentionality. It potentially collapses the economics of distraction. So why would an influencer promote a tool that erodes the very psychological environment his livelihood depends on?*
halalmoney 1 month ago
Kudzai hits the 🎯 *When buyers can “afford” higher prices through longer terms, sellers capture that affordability through higher asking prices. Housing costs don’t drop; they rise to meet the new credit limit.*
halalmoney 1 month ago
The very thought, however unlikely it seems, of fiat money ending and being replaced by something better is a source of hope and motivation.