SAM3 makes it super easy to create a 3D model by clicking on an object in a picture. Super cool; I'm using it to make plant models for a garden simulation.
demo: https://aidemos.meta.com/segment-anything/editor/convert-image-to-3d
Joe Resident
npub15sas...8xgu
Working on a gardening robot called Wilbur; we need to give the power of AI to individuals or the next 30 years could be really ugly
World not differentiable?!? Foiled again!
haha reading Yann LeCun's JEPA position paper ("A Path Towards Autonomous Machine Intelligence")
Social media being more important than the moment is cringe
Unless it IS the moment, in which case it could either be fine, or doubly cringe
note1stz7fh966flulf9vt8550k6nl0kljdzzh388gls97g052lek8pss2ucnat
Re: incentivizing intelligence agencies to release more information. I asked GPT-5 to speculate, pretty impressed (I've sketched its "Disclosure Index" idea in code after the reply):
prompt: "speculate about an incentive structure within the agencies that rewards them for making as much information public as possible. Now they optimize only for global strategic advantage and self-preservation, what if structurally we added releasing information as a third motivation?"
ChatGPT said:
That’s a fascinating inversion of the current design. Right now, as you say, U.S. intelligence agencies are structured to optimize for (1) national strategic advantage and (2) bureaucratic self-preservation. Secrecy serves both: it protects sources/methods and shields the agency from criticism or oversight. Adding a third structural incentive, “maximize responsible public disclosure,” would fundamentally reshape their internal incentive calculus. Let me speculate on how such a system might work.
1. Why Secrecy is the Default Incentive
Risk asymmetry: If you over-classify, there’s rarely a penalty; if you under-classify and something leaks or goes wrong, it can end careers.
Budget politics: Bigger “black box” operations make it harder for outsiders to challenge budgets. Classified = harder to contest.
Cultural prestige: Within the agencies, being “read into” highly secret programs is status. More secrecy = more power, more exclusivity.
These pressures mean that secrecy is the dominant strategy under the current incentive structure.
2. Hypothetical: Building in “Disclosure Rewards”
Imagine Congress, the executive, or independent oversight creates a third metric: value of responsible public transparency. The agency and its personnel are rewarded when information is released to the public (without harming ongoing operations). Possible structural levers:
Budgetary incentives: A percentage of agency budget is tied to a “Disclosure Index.” Agencies that responsibly declassify and release more information get higher discretionary funding.
Career incentives: Promotions and awards explicitly tied to quality and quantity of responsible disclosures (e.g. “analyst of the year” for producing intelligence assessments later released with minimal redactions).
Reputational capital: Agencies score “public trust points” tracked by oversight boards, affecting leadership bonuses, leadership tenure, or even how much autonomy Congress grants them.
Competition between agencies: If CIA, NSA, DIA, etc. are benchmarked against each other for transparency, the “prestige” dynamic flips—leaders want to be seen as the most open without compromising security.
3. What Changes in Behavior?
Classification standards shift: Default might move from “classify unless clear it’s harmless” → “release unless clear it’s harmful.”
Proactive declassification pipelines: Dedicated offices constantly preparing sanitized versions of reports for public release, with incentives to do it quickly.
Strategic openness as a weapon: Agencies might deliberately release certain intel analyses not only to reassure the public but also to shape foreign adversaries’ behavior (e.g., exposing corruption networks, publishing satellite evidence).
Public engagement loop: Agencies could become knowledge-brokers to the citizenry, publishing historical case studies, trend reports, and sanitized operational lessons, thereby turning intelligence into a “national commons” resource as much as a secret tool.
4. Potential Benefits
Trust repair: Citizens see intelligence as working for them, not over them.
Policy legitimacy: Elected officials can point to intelligence disclosures as grounds for decisions, reducing suspicion that policy is being driven by a “shadow state.”
Deterrence by transparency: Publishing adversary misdeeds (e.g. disinformation ops) can weaken rivals without kinetic action.
Innovation spillover: Releasing sanitized technical methods (cryptography, satellite imaging, AI analysis) accelerates civilian science/industry.
5. Potential Risks
Over-disclosure arms adversaries: Too much revealed, too soon, can compromise capabilities. Agencies might game the incentive by releasing irrelevant or already-known material.
Perverse incentives: Leaders may push to “pad” transparency stats with low-value data dumps rather than meaningful releases.
Short-termism: Drive to disclose could bias against long-term clandestine projects where secrecy is crucial.
Weaponization of disclosure metrics: Rival political parties could pressure agencies to release intelligence favorable to their narratives.
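To make that concrete for myself, here's a toy sketch of the "Disclosure Index" scoring in code. Every field, weight, and hook below is invented purely for illustration; the real design problem is rewarding value rather than volume, or the metric gets gamed exactly as the "perverse incentives" risk above predicts:

```python
# Toy "Disclosure Index": scores a batch of declassification releases.
# All fields and weights here are hypothetical, purely for illustration.

def disclosure_index(releases):
    score = 0.0
    for r in releases:
        value = r["public_interest"] * (1.0 - r["redaction_fraction"])
        timeliness = 1.0 / (1.0 + r["years_delayed"])   # faster release scores higher
        novelty = 0.0 if r["already_public"] else 1.0   # padded data dumps count for nothing
        score += value * timeliness * novelty
    return score

releases = [
    # a substantive, lightly redacted, prompt release
    {"public_interest": 0.9, "redaction_fraction": 0.2, "years_delayed": 1, "already_public": False},
    # a low-value dump of already-public material (the gaming case)
    {"public_interest": 0.1, "redaction_fraction": 0.0, "years_delayed": 0, "already_public": True},
]
print(f"Disclosure Index: {disclosure_index(releases):.2f}")

# A budget hook could then look like (also hypothetical):
#   discretionary_bonus = base_budget * min(0.10, index / normalizer)
```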
4. If the US hadn't become a global superpower, maybe it wouldn't be acting out its self-preservation instinct in such questionable ways. Maybe this always happens to empires. Maybe they get paranoid.
note1k2yfjnz39warjavuz28k0tdyc008azv70jqemxe9hrx59yjnjwzsm0h3v9
My thoughts and ideas for solutions (after an entire 30 minutes of thought):
1. Maybe these intel guys aren't as creepy or as freedom-hating a bunch of bureaucrats as I thought. I always believed the vast majority were doing their jobs as best they could, but now maybe I believe that a little more, even for the leadership that makes decisions which, from the outside, look questionable and sometimes infuriating (though the possibility of corruption via various forces is always there).
2. The national security risks of forcing the Intelligence Agencies to take their foot off the gas are probably not as bad as they'd like us to believe, or even believe themselves; TSA security theater comes to mind. That's not to say the risks are trivial, though.
3. https://sgp.fas.org/eprint/limits.pdf - there is significant hope. Freedom of Information requests are effective; there are ways to shine light into the secrecy. But the incentives are currently such that significant effort is required to do so. We should *reverse* that incentive structure: somehow reward the intelligence agencies for open-sourcing as much information as they possibly can, at both the individual and the organizational level. Then they, the ones with all the context, can balance strategic advantage with accountability. Currently they optimize only for strategic advantage. The FOIA is good on its own, but it frames revelation as the exception, not the default.
My current read of why the US government is full of such spooky bois:
The stark truth behind the often spooky, almost adversarial-feeling relationship Americans have with their own country's Intelligence Agencies is this:
Secrecy, even with respect to their own populace, is necessary to maintain intelligence and strategic advantage over rival nations. The intelligence agencies are entrusted with that power not because anyone elected them, but by the nature of their work.
This creates a point of conflict between the sovereignty of a free people, their constitution and its amendments (see the 4th), and their elected leaders on one hand, and the intelligence agencies that oversee their safety on the other. The 'puppet' perception of elected leaders, vis-à-vis their apparent beholdenness to shadowy intelligence agencies, is a clear example.
The seriousness of this conflict was muted until the Cold War, when the budget and sophistication of the Intelligence Agencies reached never-before-seen levels; once the war ended, they were reluctant to surrender this perceived world advantage.
Freedom-seeking nations are currently in a time where the populace doesn't understand this conflict. The Intelligence Agencies are scared to relinquish their power, partly for self-preservation, but also because they can see the national risks that would follow. And the elected leaders who do understand the conflict don't feel it prudent to side completely with the full sovereignty of the populace (which would require them to 'blow the lid' on the secret nature of the nation's operation, or otherwise undermine it), both for fear of the same national risks the Intelligence Agencies see and for self-preservation (e.g., some think JFK was taking this route, and was gruesomely silenced by the Intelligence Agencies).
So there's no clear path forward without very decisive action, and the story of the decades since the Cold War has been one of mission and scope creep in the Intelligence Agencies, with no real architecture in place to balance secrecy, safety, and strategic advantage against sovereignty and democratic accountability.
I'll follow up with my ideas for solutions.
Sometimes to *see* clearly you have to *be* clearly first.
Clean your room.
A glorious win for the good people of Nostr
#breadhasarrived
nostr:nevent1qqs84pg2uwgvqtu2tw2m83s7km59kr7j2gw5vyvnqwzs4v4kd6rcfyqpremhxue69uhkvet9v3ejumn0wd68ytnzv9hxgtmvv9hxwtmjw5pzq0djyt59wp8qnv62y2sczcx60f657mzpzyumwt9jxy735xqvzv6pqvzqqqqqqyyg6x4z
note1n0jamfsr9ln09njjqs06ltc36nrwaa4qmcsyr43chdrvpa55szyqn5kqhv
This made me think about the other side: how much more intelligence will there be beyond today's frontier, independent of compute required (within the same fundamental autoregressive-transformer, pretraining + RL paradigm)? My gut says at least another jump the size of OG GPT-4 to today's frontier (Grok 4 / Gemini 2.5 Pro / Claude Sonnet 4 / GPT-5).
But I think a qualitative jump to really feeling like human-level intelligence will require something like a module within the network that pays attention to the coherence of a new "thought direction" WRT its own model of reality, not just next-token-prediction on steroids.
(that qualitative jump would manifest as a huge reduction in hallucinations and the ability to zoom out and self-correct after mistakenly honing in on the wrong path; in short, the model would feel like it's responding from a real mental model of the world, as opposed to today's models, which respond from a place of most-likely-next-token)
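To pin down what I mean by that module, here's a minimal sketch of coherence-checking as a re-ranker over candidate continuations. Everything in it is hypothetical (the scores, the mixing weight), and the world-model stub is a stand-in for exactly the hard, unsolved part:

```python
# Minimal sketch: blend the base LM's own log-prob (fluency) with a
# score from a separate world model (coherence). All values hypothetical.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    lm_logprob: float  # fluency score from the base autoregressive model

def world_model_coherence(context: str, candidate: str) -> float:
    # Placeholder: a real module would score the candidate against a
    # causal model of the world (0 = incoherent, 1 = fully consistent).
    # Here we just hard-code a flag on an obviously wrong fact.
    return 0.0 if "50 degrees" in candidate else 1.0

def rerank(context, candidates, alpha=0.7):
    # Higher combined score wins; a coherence veto can outrank raw fluency.
    return sorted(
        candidates,
        key=lambda c: (1 - alpha) * c.lm_logprob
                      + alpha * world_model_coherence(context, c.text),
        reverse=True,
    )

ctx = "The boiling point of water at sea level is"
cands = [
    Candidate(" 100 degrees Celsius.", lm_logprob=-2.1),
    Candidate(" 50 degrees Celsius.", lm_logprob=-1.8),  # 'more fluent' here, but wrong
]
for c in rerank(ctx, cands):
    print(f"{c.text!r}  lm={c.lm_logprob}")
```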
And a neurology foray I made a few days ago leads me to say: wow, neurology and ML are starting to converge, and some of the computational models neuroscientists are building strike me as believable substrates for this kind of 'check thoughts against a causal world model' module that I think is required for the next step in LLMs. That didn't really feel true to me a few years ago.
Examples of recent neuroscience models I looked at: TEM, the Tolman-Eichenbaum Machine (entorhinal cortex + hippocampus analogue), and Spaun (a spiking NN, and a surprisingly holistic macro-scale brain model).
And recently, HRMs, or Hierarchical Reasoning Models (they did well on the ARC-AGI-1 benchmark and caused a bit of a stir), strike me as the latest thing closing the neurology/AI gap and accommodating insights from both.
Excited and kinda concerned for the next 5 years.
This is the time to build AI tech that gives power to individuals! Otherwise we might be SOL, at the mercy of oligarchical whims...
Based on how effective quantization is (which basically means there's still a lot of waste in today's LLMs), I think we have at least another 5x efficiency gain to go at the same level of intelligence compared to today's models
see https://docs.unsloth.ai/basics/unsloth-dynamic-ggufs-on-aider-polyglot
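For intuition on why "quantization works" implies waste, here's the crudest possible version of the idea: a naive per-channel int4 round-trip on a random stand-in weight matrix (nothing like the dynamic GGUFs linked above, which are far more careful about which layers get which precision):

```python
import numpy as np

# Naive per-channel symmetric 4-bit quantization of a stand-in weight
# matrix, just to show the mechanics and the storage ratio.

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024)).astype(np.float32)  # stand-in for an LLM weight matrix

def quantize_int4(w):
    # One scale per output channel, mapping max |w| onto the int4 max (7).
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int4(W)
rel_err = np.linalg.norm(W - dequantize(q, scale)) / np.linalg.norm(W)

print(f"relative weight error: {rel_err:.3f}")
print(f"storage: 4 bits vs fp16's 16 -> {16 // 4}x smaller")
# Crude int4 is visibly lossy, yet models keep most of their capability
# around 4 bits in practice; that gap is the 'waste' I mean above.
```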
Video games could really use great 3D assets like this one


Today's thought, critique welcome (I'm trying to figure out why I viscerally hate/fear totalitarianism, to the point that I quit my job to work on tech that I think helps prevent it):
Humans are many things, but one majorly salient thing is that they are a memetic species. Compared to many life forms, they are weak, unprotected, and helpless given only the devices of their physical form and abilities. Their memes are their power: through their ability to efficiently share memes, they have become the dominant species, and compoundingly so. Each generation starts from a new point, unlike other species.
Cultures are centralization points of memetic patterns. Since there are many cultures, the evolution of memes continues despite these centralizations, and arguably goes much better, since many people can develop a culture and explore its local optimum better than one individual could, given the shortness of an individual's life. Different cultures produce different steps forward and create memes that other cultures end up adopting. Humanity moves 'forward'.
But cultures resist change, and some cultures are more resistant than others. The continued existence of a culture depends on its ability to balance maintaining its core defining characteristics with appropriating successful memes demonstrated by other cultures (or by luminaries from within). If a culture biases too conservative, it eventually finds itself irrelevant, outdated, and powerless compared to surrounding cultures that have since evolved; too liberal, and it can destroy itself by disintegration, all shared memes, beliefs, and assumptions being lost.
History has seen many dynamics in culture interactions, but one driving force has been, despite ebbs and flows, the successive march toward centralization of culture (which has thus far always been disrupted by forces I'll explore in a second, forces which may soon cease to exist). 'Monoculture' is a word that has entered the lexicon in recent decades, with the advent of mass media. Many aspects of the Western world's culture can be described once and applied reasonably accurately to many peoples; this wasn't nearly as true 100 years ago for those same geographical peoples. The world now retains only a handful of such 'monocultures': something like the West, the Russians, the Chinese, and a few others. Should trends continue, in another 100 years there might be only one.
If there is only one culture, then the qualities of that culture become very important with respect to the continuation of memetic evolution. If it is too controlling/opinionated/oppressive, the progression and evolution of novel memes will slow down among humanity: controlling governments have demonstrated their dislike for the uncertainty that comes with the open-mindedness, creativity, experimentation, and inventiveness of some humans. See also 1984, Fahrenheit 451, etc. Oppression, whether explicit or covert, is a natural path for an entity that gets power and wants to keep it (which is to say, all entities that get power).
Governments usually exist only with the consent of the culture they administer. But in a scenario with only one culture, if its government became totalitarian, the usual corrective force of competition from other cultures wouldn't exist. Add to that a scenario where technology is sufficiently advanced, and the other usual corrective force, the raw difficulty of human survival, would be less pressing. Technology makes survival easier. Advanced technology makes it easy.
For example, if the Soviet Union/Roman Empire/<your favorite historical empire that controlled a big part of the world> hadn't become economically stagnant, and hadn't had other cultures fighting it, it probably wouldn't have fallen. If it had human-level AI and robots, it probably would have been fine economically; and if it were the only game in town (on Earth), it wouldn't need to defend itself, and its people would have no other cultures to hold up as a mirror and ask whether they really like their own. One more point on the tech: if it could all but monitor, predict, and control its people's thoughts through the media they consume and share, it could gaslight everyone so hard that nobody thinks to revolt.
Western governments, and cultures, have already demonstrated their weakness for sliding into controlling tendencies: mass covert, nonconsensual data surveillance; covid-era policies; and most recently, online age-verification requirements, which many see as a trojan horse for another step toward surveillance and control.
We need to find the 'lines in the sand', so to speak: the powers, and the explicit non-powers, of a theoretical healthy government/culture that would allow humanity to continue to progress despite the advent of a world monoculture, which may be in our future. Obvious things on the right side of the line: protection from violence, a functioning justice system. Obvious things on the wrong side: controlling thoughts, opinions, and speech in heavy-handed ways. The line is often gray, and some muddy it intentionally, but unless we can arrive at clarity and consensus, the arrival of a world monoculture may announce humanity's descent into a dark age, whether in a year or in 5 generations, a dark age whose duration could be indeterminate.
During that time, the creative, exploratory, and adventurous elements of the human spirit could be stamped out in favor of a controllable, predictable, and stale populace. And along with the heart-breaking darkness and ignorance imposed on the individuals alive then would come the retardation of humanity's progress on understanding ourselves and the universe (the ultimate memetic achievement), on finding better ways to exist, and on creating the technology to expand throughout the universe and survive beyond the life of our cosmically precarious planet. If it lasted long enough, it could spell our extinction.
Summary: we really need to solve governance BEFORE we arrive at a world monoculture and technology so good that material survival is easy. Otherwise we could bring on a long dark night for humanity, create a real dystopia that trillions of people have to suffer through, and jeopardize the eventual achievement of our most human, most transcendent goals.
We gotta keep making new memes, man.
Wilbur Gardening Robot log - June 27, 2025
This week I surveyed the research on learning-based methods for robot mapping, localization, and navigation. I went this direction after spending a couple of weeks surveying traditional SLAM (Simultaneous Localization and Mapping) approaches. There's a lot there. And then I realized: this is the old world. It was all made before LLMs, VLMs, and VLAs existed. I should focus on the new world. So I switched. I think learned SLAM is the answer for Wilbur; a VLA is definitely the answer for actuation.

After realizing that the evolution from traditional SLAM to learned SLAM would bring new generality, accuracy, and flexibility, I took some time to speculate about what the steady state of this research should look like after all the fundamental advancements have been made: the 'Platonic ideal' of learned SLAM, so to speak. With my newfound clarity, I dove back into the research and was delighted to find that many of those ideas have already been explored!

So Thursday night and Friday were spent diving into the 'new world' of learned SLAM research. I'm hoping to wrap that survey up by Tuesday, at which point I should have clear options in mind for what the weeding MVP robot should use. Then I'll narrow down the possibilities and make a decision by the end of the week.