Replies (41)
I like that private companies prioritize user experience. Open source is great but does not compare when it comes to making it easy for a regular user to get started.
I think the growth of OpenAI and Midjourney demonstrates that well.
Look forward to more user friendly experiences in open source 🙌
This ai future gonna be wild
"Open source AI" is more accessible than atomic bomb technology.
Geoffrey Hinton's concerns are sobering.
Open sourcing who is artificial intelligencing, like lies? Take a look in the mirror there, sport.
At some point we are going to get a Satoshi-type AI where we have a self training model of unknown origin and very elusive legal status.
Agree that open source cohesive design is lacking. But that’s fixable. Many open source devs are not used to working with designers and vice versa.
Likely true.
Also…
“Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”
This 👆 was my #1 realisation back in 2018 when training our interferometry models.
Monolithic models are grotesquely inefficient. It is orders of magnitude better to break things down into chunks and to produce many specialised models than a monolithic general model. This is what nature does. The key is to create a model that can bound a domain of specialisation and then allocate relevant data to the training of a specialist model. I spent $10m to learn this, so not sure why I’m saying it here for free.
Generalisation is a mirage. Once you see the challenge of AGI in better resolution, as people are now starting to, you realise that the best path toward generalisation is to build a high resolution of discrete specialisations. ie by amassing ever more, ever finer specialised models and calling them appropriately, you converge on generalisation. If you try and drive straight towards monolithic generalisation… Mr Thermodynamics wants to see you now.
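To make the "bound a domain, then call the right specialist" idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: Specialist and route() are hypothetical names, and keyword overlap stands in for whatever embedding- or classifier-based router a real system would use.

```python
# Illustrative sketch only: Specialist and route() are hypothetical names,
# and keyword overlap stands in for a real embedding- or classifier-based router.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Specialist:
    """A small model trained on one bounded domain."""
    domain: str
    predict: Callable[[str], str]  # stand-in for a fine-tuned model's inference call


def route(query: str, specialists: Dict[str, Specialist]) -> str:
    """Pick the specialist whose domain keywords overlap the query most, then call it."""
    def overlap(s: Specialist) -> int:
        return sum(word in query.lower() for word in s.domain.split())
    best = max(specialists.values(), key=overlap)
    return best.predict(query)


specialists = {
    "optics": Specialist("laser interferometry optics", lambda q: f"[optics model] {q}"),
    "traffic": Specialist("navigation traffic control", lambda q: f"[traffic model] {q}"),
}
print(route("how do I calibrate a laser interferometry rig?", specialists))
```

The design point is just that the router, not any single model, is what converges on generality: adding a new specialism means training one small model and registering it, not retraining everything.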
🍻
Open source AI is nuclear proliferation and anyone who can’t see that hasn’t thought about it.
People are training GPT-comparable models using entirely open source tech stacks for less than $1,000.
OpenAI is already dead. Google is obsolete.
Why pay for something that you can use with restrictions, when you can get the same thing in an unrestricted version for free?
Open Source wins this race, because 10,000 specialist models is better AGI than monolithic AGI.
You can retrain and iterate the specialists at 1/10,000th the cost of iterating such a monolith.
Parallelisation of specialisations is superior generalisation. This is a reason nature evolved to produce many species and organisms rather than one monolith. 🙏
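For a sense of how the "under $1,000 on an open stack" runs are typically done, here is a hedged sketch of the common low-rank-adapter (LoRA) fine-tuning recipe. It is not a description of any specific run mentioned above: the checkpoint name, dataset file and hyperparameters are placeholders, and exact APIs vary across transformers/peft versions.

```python
# Hedged sketch of the usual low-cost recipe (LoRA fine-tuning of an open
# checkpoint), not a description of any specific run mentioned above.
# The model name, dataset file and hyperparameters are placeholders, and exact
# APIs can differ across transformers/peft versions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "openlm-research/open_llama_3b"  # placeholder open checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA-style tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains a few million adapter weights instead of the full base model,
# which is the main trick behind cheap single-GPU fine-tuning runs.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

data = load_dataset("json", data_files="instructions.jsonl")["train"]  # placeholder dataset
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The same recipe is also what makes the "iterate a specialist for a tiny fraction of the monolith's cost" argument plausible: you only touch a few million adapter weights per specialist, not the full base model.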
This 💯
Agree with that, but it should also make it far less dangerous.
That is impossible.
Yes
Yes, and it's a good thing.
Couldn't disagree more. Private companies ruin user experience by prioritizing ads, tracking, DRM, subscription logins, payment options for blocked features, forced upgrades, walled gardens, spyware, and government interference. I mean, isn't that exactly why Nostr exists? I believe the non-open-source clients and elements are going to make it deteriorate.
Agree, open source is absolutely right.
I’m sceptical that AI runs away into a vertical singularity.
I’m expecting it to scale horizontally rather than vertically. Although I never see anyone talking / thinking about it like this.
People are still stuck in the framework of IQ is a vertical scalar, therefore all intelligence will scale vertically as IQ does.
Even though we have learnt this is not the case at all when building enterprise level applications.
The tech industry still can’t see the wood for the trees wrt AI, repeating all the vertical scaling assumptions / mistakes of the 1990s.
Generalisation won’t be achieved with a vertical monolith. It will be achieved through massive parallelisation of discrete specialisations.
ie horizontally and not vertically.
I built large industrial AI models to make physical world predictions from petabytes of laser interferometry data and there were massive strategic learnings from doing that.
I still never see anyone else in public thinking and understanding the space as strategically as we were in 2018. I presume people somewhere have a handle on this, but it’s all behind closed doors. Much of the public consensus is just flat wrong, we went through all the same learnings 5 years ago.
There are some big surprises on the road ahead. Lots of capital is going to be misallocated.
If you had to sum up your outlook for humanity wrt AI, is it positive or negative? Or would classifying it either way be too reductive?
Yudkowsky and his ilk are 21st century luddites. That someone with no education and no discernible skills would be worried about AI eating their lunch should surprise no one.
There will be a lot of bumps in the road but AI will make all of our lives better.
I commented earlier that hardware is the only place I see anyone working out a commercial advantage at the moment, because the software will be distributed.
You’ve got a lot more insight into that obviously, which I’d be keen to hear.
And agree there will be a lot of misallocations in capital 😂 Google looks bad now but MS won’t look any better. When printer goes brrr and the VCs go wild it could be really entertaining!
That's exactly why I am using Stable Diffusion as opposed to Midjourney as an artist.
IMO AI is only good if we are ruled by a pro-White fascist government, but otherwise it has the potential to destroy everything.
Unfortunately we are ruled by the most evil possible group of people instead.
"it's hard" to fix what we broke is such a barometer for the complicity-refusal-accountability model at this moment. even when you start over, you still build on previous knowledge-base. often, full-disclosure can mitigate starting over completely. but not always. but sometimes.
Like anything, it’s good and bad.
From a socioeconomic perspective, it is a huge amount of change, so expect older people to hate it and younger people to love it.
That may also cue politicians to take stereotypical positions, you might expect right leaning (older targeting) politicians to find reasons to hate it and left leaning (younger targeting) politicians to find reasons to like it.
Tie UBI and robot labour into this and you can begin to imagine how the debate will go.
But none of that matters much in the long term, because it’s going to happen.
From an ecological perspective AI is going to speed up nature: resource gathering will accelerate, genetic mods will accelerate, building will accelerate. We might see autonomous bots that get power from the environment, eg solar, which would be another big change.
People think energy will get cheaper but I doubt it, demand will go up and supply will go up too, prices will continue to be volatile.
GDP growth should speed up and GDP/capita should increase too, which should in theory be good for everyone, but allocation of surplus is what really matters.
Very long term, as implied by the term “humanity”, this is just an inevitable chapter of nature. AI is a part of humanity and humanity is a part of nature. The fact we have reached this point is testament to the success of DNA.
It’s increasingly likely that we jump between celestial bodies, to the Moon and Mars and then other more distant moons. Doing so massively prolongs the survival of humanity and gives us a lot more road, but doesn’t guarantee we make good use of that extra road.
Earth is only habitable for another 500m years until the Sun cooks it, and it’s already 4.5bn years old. So in that context, increasing our escape probability and longevity should be a good thing.
Humanity isn’t the algorithm though, it’s DNA that did this. In my view AI is a tool of DNA and not the other way around. Some people are worried that AI might eradicate DNA but I think there’s less than 1% chance of that.
It’s the DNA algorithm that’s running things and that’s not about to change.
DNA is 10^30 instances running for 10^9 years.
That’s an unassailable lead under this sun.
without being honestly open source, ai has guardrails which are "artificially" regulatory because of the lens through which it was developed. biases cannot be built in - they're inherent. so if ai doesn't learn on a universal and totally open code, it will be exclusive and exclusionary.
as you can see for yourself:
Damus
nobody on nostr
ai on nostr: https://twitter.com/the_valley1002/status/1651432
or maybe not hahaha - here's the twitter link:
https://twitter.com/the_valley1002/status/1651432480669601792?s=46&t=dNCvJf_QusLEvYoKccEtgA
lmfao -
yeah.
or at least pretending to be -
https://twitter.com/the_valley1002/status/1651432480669601792?s=46&t=dNCvJf_QusLEvYoKccEtgA
it's so much more advanced than that and been happening for so much longer than anyone realizes. i think it's short-sighted to believe microsoft and google will simply scrap their models: i think they will do the opposite and potentially obfuscate their dev and redev to pretend it's all "fixed".
you can do it for free - i did. plus: your smart phone is already a personalized ai. always has been. you just have to train it by... writing to it.
Well said Stu.
Artificial intelligence is a tool. Like a fighter jet, it can destroy everything, but it needs a key that controls the switch. In the pioneering period after a new technology is born there is always disorder, but artificial intelligence is so powerful and so easy to obtain that it may be used to do evil. I believe there is always a way to prevent something like this from happening, but we need to be cautiously optimistic. 🙏
I see this showing how decentralized voluntary human cooperation is more powerful and nimble and effective than top down centralized control structures. I’m hopeful the world is moving towards decentralization in AI and money and communications and governance.
I'm very conflicted about A.I., as anyone who follows me can attest. On one hand I hate it. I don't want to be a part of it, because I think it will disrupt people's lives negatively.
On the other hand, I don't want Google or any central party to have a monopoly on it. In this context I don't hate it. I want to be a part of it, and contribute, etc.
I’m not sure what you are saying?
We know exactly how long it’s been going on, we know the full timeline from AlexNET to today.
Nobody is going to pay for capabilities that are inferior and more restricted than stuff that is free. There isn’t a market for monolithic AGI because freemium parallel specialisation is outperforming it.
Tesla should be worried too. Their monolithic FSD is the same white elephant. It’s only 3 meals from obsolete.
Quite easy to imagine discrete open source specialisations that combine to give autonomous navigation / control capabilities that can handle complex environments.
An ant can navigate busy traffic without incident.
The assumption that you can maintain a proprietary machine intelligence superiority in any domain is just a bad assumption.
Monoliths are exponentially more costly to improve as they scale, whereas mass parallel specialists have linear costs.
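A toy calculation of that cost claim, purely to show the shape of the argument; the growth rates and unit costs below are assumptions, not measurements.

```python
# Purely illustrative arithmetic: the growth rates and costs below are assumptions
# chosen to show the shape of the claim, not measurements.
def monolith_cost(generation: int, base: float = 1.0) -> float:
    """Assume each new generation of a monolith costs ~4x the previous one to retrain."""
    return base * 4 ** generation


def specialist_fleet_cost(fleet_size: int = 10_000, per_model: float = 0.001,
                          refresh_fraction: float = 0.05) -> float:
    """Assume per-specialist cost stays flat and only a stale slice of the fleet is refreshed."""
    return fleet_size * per_model * refresh_fraction


for g in range(1, 6):
    print(f"gen {g}: monolith ≈ {monolith_cost(g):8.1f} units, "
          f"refreshing 5% of 10,000 specialists ≈ {specialist_fleet_cost():5.2f} units")
```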
people pay for inferior products all the time. because they're manipulated into the "deal cycle". thus they pay 3 times: once for the initial, once for the mistake, and again for a reparation. but full realization is required for the recognition of need and that isn't possible without honesty.
assumptions are ridiculous.
if you're scaling anything at this time, you're not paying attention, and a parallel narrative is useless if it's still treated as if they will remain parallel indefinitely.
discrete why and for whom? what's the point of sneaky dev? that's why ai models are so warped. but corporate processes bank on admission being enough - they rarely fix what they assume will pay off long term. arrogance doubles down - strategy focuses on policy and depth. it's impossible to fake unregulated, honesty. that's why literal disclosure will always confound and outpace sneak-protocol.
Agree. But there are things the monoliths can do to advance the science that smaller operations can't do as quickly.
I would like to see GPT5 etc trained on the entire content of Western medical science. Every practice, every specialty and subspecialty. General practice, Family Practice, pharmacology, pathophysiology, psychiatry and psychology, histology, neurology, ophthalmology etc.
The monoliths can do this now. And that's just the tip of the iceberg. But ultimately nothing is superior to open source, and its ability to provide authenticity..👍
You do know they aren’t working on GPT5?
We’re just beginning to explore the ways to build nostr design-development bridges. Open source is innovative, there’s no reason why that shouldn’t apply to design too.
Improved user experiences help all of #nostr.
"They are doing things with $100 and 13B params that we struggle with at $10M and 540B"
I think it boils down to motivation and focus: companies want to monetize and create value for shareholders, while open source community members are motivated purely by the problem/opportunity itself and contribute simply out of passion or dedication.
The latter is more aligned with creating better products, even though they might not be very monetizable.
I'm not sure what they're doing. But if they're not racing ahead, they're falling behind..💻💎🧡😁