*we think mostly about the ability of tools to implement our will, rather than the conditions that arise from the presence of those tools*
halalmoney (halalmoney@stacker.news, npub1vdaz...7rjz) | Freedom. Justice. #Bitcoin | https://stacker.news/r/halalmoney
*We're social animals, and people in general don't want to stray too far from the "Overton window" of their friends and family*
Interesting but not a word about Chinese offerings.
Gizmodo
Is Google The New Leader of the AI Race?
OpenAI is wounded. Nvidia could be next.
*a moneyless system without balance becomes: ...
Hunger Games with better software*
*When we focus on the old system, we feed it*
I used 'AI' to produce this summary of an LLM task completed with 'zero errors'
Sorry, not sorry
Paper: arxiv.org/pdf/2511.09030
The paper "Solving a Million-Step LLM Task with Zero Errors" by Elliot Meyerson et al., on arXiv (arXiv:2511.09030), presents a framework called MAKER for enabling large language models (LLMs) to execute very long sequences of reasoning steps with zero errors. It addresses the fundamental challenge that LLMs have an inherent error rate that makes completing millions of dependent steps without failure nearly impossible when done naively.
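The challenge is easy to quantify with a back-of-the-envelope calculation (the 99.9% per-step accuracy below is an illustrative assumption, not a figure from the paper):

```python
import math

p = 0.999        # hypothetical per-step success probability
N = 1_000_000    # roughly the number of dependent steps in the task

# Chance of a naive chain completing all N steps without a single error:
log_p_success = N * math.log(p)   # about -1000 in natural-log units
print(math.exp(log_p_success))    # underflows to 0.0: effectively impossible
```

Even near-perfect per-step accuracy compounds to a vanishing probability of success, which is why the paper argues for decomposition plus per-step error correction rather than waiting for better base models.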
Key elements of the approach include:
- Massively decomposing tasks into the smallest possible subtasks to minimize errors.
- Employing error correction and "red-flagging" invalid outputs to discard potentially erroneous reasoning steps.
- Using a voting scheme called "first-to-ahead-by-k" to ensure the correctness of each step through multiple sampled outputs.
- Applying this strategy specifically to the Towers of Hanoi problem with 20 disks, which requires over one million steps, and successfully completing the task with zero errors.
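The voting rule can be sketched in a few lines of Python. This is a minimal sketch of the idea as summarized above, not the paper's implementation; `sample_fn` is a hypothetical stand-in for one LLM call on a microtask, returning `None` when the output is red-flagged:

```python
from collections import Counter

def first_to_ahead_by_k(sample_fn, k=3, max_samples=1000):
    """Keep sampling answers until one candidate leads every other
    candidate by at least k votes, then return that candidate."""
    votes = Counter()
    for _ in range(max_samples):
        answer = sample_fn()
        if answer is None:          # red-flagged output: discard, keep sampling
            continue
        votes[answer] += 1
        (leader, lead), *rest = votes.most_common()
        runner_up = rest[0][1] if rest else 0
        if lead - runner_up >= k:
            return leader
    raise RuntimeError("no candidate reached a k-vote lead")

# Sanity check on the task size: Towers of Hanoi with 20 disks
# requires 2**20 - 1 moves, i.e. just over one million steps.
assert 2**20 - 1 == 1_048_575
```

Because a wrong answer would need to out-vote the right one by k samples, the per-step error rate falls roughly geometrically in k, at the cost of extra samples per step.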
The results demonstrate that scaling LLM-based systems to extremely long tasks is feasible by combining extreme decomposition and error correction, which contrasts with relying solely on continual LLM improvements. MAKER also suggests future research directions for automating decomposition and handling various types of steps and error correlations.
In summary, this work marks a breakthrough in achieving error-free long-horizon sequential reasoning with LLMs by architecting an ensemble-based, massively decomposed process, making it viable for safety-critical or large-scale AI applications [1][2].
Citations:
[1] Solving a Million-Step LLM Task with Zero Errors
[2] Solving a Million-Step LLM Task with Zero Errors
[3] computational costs increase with ensemble size and error ...
[4] Solving a Million-Step LLM Task with Zero Errors
[5] Cognizant Introduces MAKER: Achieving Million-Step, Zero ... https://www.reddit.com/r/mlscaling/comments/1owcnsn/cognizant_introduces_maker_achieving_millionstep/
[6] New paper on breaking down AI tasks into tiny steps for ...
[7] Future plans to integrate MAKER/MDAP abstractions?
[8] MAKER Achieves Million-Step, Zero-Error LLM Reasoning
[9] [PDF] PyTorch: An Imperative Style, High-Performance Deep Learning Library | Semantic Scholar https://www.semanticscholar.org/paper/PyTorch:-An-Imperative-Style,-High-Performance-Deep-Paszke-Gross/3c8a456509e6c0805354bd40a35e3f2dbf8069b1
[10] [PDF] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training | Semantic Scholar 

*The debt-pacifier. It's given to the masses to keep them quiet, to keep them sucking, to give them the illusion of nourishment and comfort, while it actually stunts their growth and keeps them in a state of perpetual, infantile dependence*
These findings validate the notion that making models reason in a much more representative latent space is better than making them talk to themselves via [Chain of Thought]
Towards Data Science
Your Next ‘Large’ Language Model Might Not Be Large After All
A 27M-parameter model just outperformed giants like DeepSeek R1, o3-mini, and Claude 3.7 on reasoning tasks
Fascinating. Thanks for this.
*Don’t get hypnotized by the headline.
Study the architecture of the story.
This is how modern propaganda works:
→ Emotion first
→ Evidence later (if ever)
→ Amplify through influencers
→ Polarize the population
→ Fracture alliances
→ Profit*
The 33-year-old artist told reporters in Mexico City that "in an age that seems not to be the age of faith or certainty or truth, there is more need than ever for a faith, or a certainty, or a truth"
https://www.perplexity.ai/page/rosalia-s-spiritual-lux-earns-.DTT16zPRym0Df7oC6IQRw
*making bullshit is cheap. verifying what is true, is expensive*
You may not become rich through Bitcoin but it will connect you to people who oppose fiat money specifically and injustice generally. This connection is valuable.
*Magnification in chaotic systems is accomplished by focusing on small changes in the system (such as when Lorenz changed the number of decimal places in the data) which caused drastic reactions. When magnified, chaotic systems display erratic behaviour. Under further scrutiny, patterns can be seen in this chaos. Upon even further magnification, however, more erratic behaviour is uncovered, and beneath that more "fine geometrical structure"*
*Knowledge production has become cheaper than knowledge verification. It's a real problem*

Stacker News
An MIT Student Awed Top Economists With His AI Study—Then It All Fell Apart \ stacker news ~econ
Apparently an MIT student fabricated data on an AI study that got a lot of traction and even made its way to being cited by Congress. The fabricati...
*Like many things in modern life, the edifice of academia is built on norms and cultural standards that no longer obtain. The body is dead but the corpse isn't entirely cold yet*

*as new coin production keeps shrinking in the coming yrs, [Bitcoin’s] barometer only gets sharper at measuring global liquidity*
*The “fiat influencer” …depends on an audience that remains docile and constantly overstimulated, and structurally incapable of long-form thought. Bitcoin, especially in its cypherpunk instantiation, demands intentionality. It potentially collapses the economics of distraction. So why would an influencer promote a tool that erodes the very psychological environment his livelihood depends on?*
Kudzai hits the 🎯
*When buyers can “afford” higher prices through longer terms, sellers capture that affordability through higher asking prices. Housing costs don’t drop; they rise to meet the new credit limit.*
Source: Financial Times
The very thought, however unlikely it seems, of fiat money ending and being replaced by something better is a source of hope and motivation.