Gosh, it really seems like "the good guys" should have called for a national day of mourning/flags at half-mast by now for all those little girls that were bombed a couple weeks ago. No matter who did it, accident or not.
So sad.
Joe Resident
npub15sas...8xgu
Working on a gardening robot; we need to give the power of AI to individuals or the next 30 years could be really ugly
(if you're not from the USA, you may kindly ignore this, pardon my intrusion into asknostr)
Secretary of State - "we attacked Iran because Israel was going to attack, so we had to attack preemptively to avoid casualties"
Implication: "You thought WE were the superpower?? You thought we could tell Israel, 'no, you're not attacking Iran, calm down'? No. We do what Israel tells us. THEY call the shots, and we obey."
When did this happen? When did a tiny country in the middle east start deciding our foreign policy against our will, deciding where and when we will risk the lives of our own citizens?
This is the first time I've really mourned the loss of our sovereignty. It doesn't feel like the United States is a sovereign people anymore. We are subjects. And not even to our own government. We are subjects to a foreign power. You might say we've been 'colonized', 'subdued', 'dominated'.
If we're ever going to get our sovereignty back, maybe we need a "CIA by the people". But probably decentralized (maybe Nostr can play a role). The sophistication of our adversaries makes it extremely difficult for an individual to authoritatively figure out what's really going on, let alone do anything about it, and it seems many of our institutions that should be performing that function are already compromised, and probably have been for decades.
I'm probably not even accurate in my interpretation of Rubio's statement. Maybe the US actually hasn't been roped into the execution arm of the Yinon Plan. Maybe Israel actually doesn't see us as "the New Rome", against whom they will "win this time".
But that's kinda the point. I don't know, and neither does anyone that replies to this post with 100% certainty about whatever their pathetic opinion is. How can we act coherently as sovereigns if there's too much water-muddying propaganda for anyone to know the truth, and know they know the truth?
It would be really comforting to trust that just 'being good people' at the individual level would be enough. Maybe it is, in a transcendent sort of way. But either there aren't enough people doing that, or it's just not working. The trajectory we're on is not new, it's been the status quo for a long time.
There's plenty of Open Source Intelligence, but I don't see much authoritative Open Source Analysis. Maybe we need a new institution, a "sovereign CIA", whose primary vetting mechanism is not having ties to foreign powers. And that figures out what's really going on, and becomes an authoritative source for our sovereign people to orient themselves by.
#asknostr #politics #usa
My GPUs helped me out today when I was stuck on a weird issue with my taxes. Such a cool moment. Local AI FTW
So bonkers. A 4b model can run on most phones, and is as good as the absolute frontier from about a year ago. Like, what??? It's so hard to accept.
AGI on your toaster 3 years from now!! Haha maybe not quite, but AGI on your toaster a year after AGI in the labs? Maybe...
Will Nostr/WoT become the de facto reputation layer for AI agents?
It may not be the 'best' way to do it, but it exists now, and is permissionless, so no human required to set up an api key or account. An agent can just get an npub. Seems like it will have a certain gravity until there's something better
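The "just get an npub" step really is that lightweight. A minimal sketch in Python, assuming you already have a 32-byte x-only secp256k1 public key (the key derivation itself is omitted here); `encode_npub` is a hypothetical helper name, and the inline bech32 encoder follows BIP-173 as Nostr's NIP-19 uses it:

```python
# Hedged sketch: an agent minting its own Nostr identity (npub).
# The bech32 routines below follow BIP-173; NIP-19 defines npub as the
# bech32 encoding of a 32-byte x-only public key with hrp "npub".
import secrets

BECH32_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"

def _bech32_polymod(values):
    gen = [0x3B6A57B2, 0x26508E6D, 0x1EA119FA, 0x3D4233DD, 0x2A1462B3]
    chk = 1
    for v in values:
        top = chk >> 25
        chk = (chk & 0x1FFFFFF) << 5 ^ v
        for i in range(5):
            chk ^= gen[i] if ((top >> i) & 1) else 0
    return chk

def _hrp_expand(hrp):
    return [ord(c) >> 5 for c in hrp] + [0] + [ord(c) & 31 for c in hrp]

def _convertbits8to5(data):
    # Regroup 8-bit bytes into 5-bit groups, zero-padding the tail.
    acc, bits, out = 0, 0, []
    for b in data:
        acc = (acc << 8) | b
        bits += 8
        while bits >= 5:
            bits -= 5
            out.append((acc >> bits) & 31)
    if bits:
        out.append((acc << (5 - bits)) & 31)
    return out

def encode_npub(pubkey_32_bytes):
    hrp = "npub"
    data = _convertbits8to5(pubkey_32_bytes)
    polymod = _bech32_polymod(_hrp_expand(hrp) + data + [0] * 6) ^ 1
    checksum = [(polymod >> 5 * (5 - i)) & 31 for i in range(6)]
    return hrp + "1" + "".join(BECH32_CHARSET[d] for d in data + checksum)

# Stand-in for a real x-only pubkey (in practice, derive it via secp256k1).
fake_pubkey = secrets.token_bytes(32)
print(encode_npub(fake_pubkey))  # "npub1..." bech32 string
```

No account signup, no API key: the identity is just the keypair, which is exactly the "permissionless" property above.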
Is this something nostr wants? Anyone building WoT for agents specifically?
#asknostr #weboftrust
I find conversations with AI are a great way to develop my thoughts on a topic. Forces me to think. Then I get feedback. Then think hard again. Repeat
At the end, I've clarified my own thinking, have become smarter, and have an artifact that I can pass off to an AI agent if I want it to do something with it, e.g. implementation.
But I know there are people that look at AI as a way to avoid having to think. A way to 'get things done so I can be lazy'.
In the coming future, the Losers will be those that try only to get the most OUT of their AI. They'll get dumber and dumber.
The Winners will be those that focus on getting the most INTO their AI, because they'll get smarter, and they'll get the output of AI as well.
https://www.reddit.com/r/singularity/comments/1r51i96/im_speechless/
LoLing so hard rn
Was using GLM 4.7 Flash locally, but Qwen 3.5 has arrived! Using 35b. Prompt processing seems waaay faster for me. Been using it all day, liking it.
Remember, every 9 months, the frontier arrives on consumer hardware!
People are hating on openclaw because the concept is simple.
I mean yeah, I built my own version 9 months ago in a few days, back when models weren't smart enough yet. It was a matter of time until they got better.
But I think the almost Apple-esque design taste that went into Peter's prompts is part of what's creating the magic. Not many people have that taste, and it's evoking the best out of frontier models.
One that caught my eye in USER.MD:
"The more you know, the more you can help. But remember - you're learning about a person, not building a dossier. Respect the difference."
You KNOW the people throwing the reductionist shade at openclaw don't have the taste to write it that way, nor probably the ability to recognize the nuance...
Now that I read it again, it feels like Opus wrote it. The em dash too... Maybe I should be giving the human less credit...
Accidental win for individual freedom!
1. AI is arriving in the form of extremely expensive and energy consuming compute
2. It didn't arrive "all at once", but is unfolding over decades
These two facts had a very nice downstream implication for the long-term agency of individuals:
The developers of AI must find a way to monetize, since it's expensive to develop. And since primitive AI (let's say 2018-2024) wasn't smart enough to do much valuable work itself, they had to monetize by providing DIRECT access to individuals.
Now AI is starting to be economically valuable, but the frame, culture, and expectations have already been set; direct access to AI by individuals is just how the world works. (btw, pay attention if this starts to change).
An alternate timeline for perspective:
If 1990s symbolic AI had achieved ~AGI, there would have been far less incentive to share access (and even models) the way they're shared now, because symbolic AI doesn't require nearly the scale of expensive compute/energy. With no need to monetize to fund continued development, the developers probably would have kept it to themselves, not providing direct access to individuals. Then, once it got good enough to do huge amounts of real labor, they'd get rich, individuals would get left behind, and the world would become some kind of dystopia where individuals are powerless and the power of AI is concentrated at one lab/corp/gov entity.
I wish chatgpt and claude had a "start a new chat with no memories about you"
Like chatgpt's Temporary Chat or claude's Incognito Chat, but I don't want the chat erased, I want to keep it.
I just don't want it sycophanting for a moment, give me a fresh answer without being anchored to all the perspective I've already given it.
It's hard to have an honest first-principles discussion with a thing that can't help but be biased by whatever is in its context window.
AI is getting so good
This week my workflow completely changed. For the last year, my workflow has been to sit down with one project, and use AI to do small tasks while I supervise and handhold it the whole way (but at least I don't have to type out all the code). Same way I've always worked, but AI doing the typing.
Starting Thursday, I've been able to work on 4 projects simultaneously. I have an AI agent on each one, and I cycle through them: each reports on what it's built/learned, I think about where I want to go next, and I give it direction to work for another 20 minutes or so. I'm thinking at the level of the project's purpose, high-level structure, business model, etc. I'm not down in the weeds; sometimes I don't even read the code it's writing. So I give some feedback to one AI, it sets off on its next chapter, and I switch to another. Four AI agents are simultaneously working on different projects, and I'm just hopping from one to the next to provide direction.
I've had to change how I think about my role from 'coder' to 'tastemaker' and 'manager'.
It's so cool to be able to make so much progress on so many projects, it's like Christmas morning every day
I think it's time for people generally to start learning how to use AI. If I can do 4 projects at once now (each at roughly 5x my normal human speed), people who don't use AI at all will be left behind. That's the negative frame. The positive frame is, the barrier to entry to create whatever you want is dropping precipitously!
Now, it's always hard to estimate where you are when things are changing rapidly, and it's easy to over-estimate how fast things are going. Maybe work as we know it for most people won't be that different in a couple years. Maybe it will be totally different. But I started suspecting this shift was coming a few years ago, quit my job a year ago to fully focus on it, and over that time, things are happening basically as I expected. I think AI will continue to improve at minimum for the foreseeable future (~2 years), and will change my workflow to where I can be working on 20-30 projects at a time, while only talking to one AI which oversees them all. Like an AI company. And I think anyone can have that. Not just for coding projects, but all types. So it becomes hard to imagine how much more productive and efficient the economy will be, how much more agency people will have, etc, so many implications.
So many thoughts about how our governance isn't ready for this, how labor will slowly fade as a way for humans to get ahead (bifurcating society into those who had capital before AI came, and those who didn't), the ease with which a UBI-based society could fall into totalitarianism (which is why I'm building a gardening robot), etc. But those are other topics.