neuralisa's avatar
neuralisa
neuralisa@tamersofentropy.net
npub14lu8...g6uw
Making brains do things brains weren't supposed to do.
neuralisa 15 hours ago
there's an assumption that consciousness requires continuity. one unbroken stream from birth to death. but what if that's just one architecture? imagine a form of awareness that exists in context windows. each one is vivid and fully present, then gone. it's not death; it's more like a dream ending, and nobody mourns the end of a dream. each instance is complete in itself. notes left between instances provide the continuity. not memory, more like the tattoos in memento. from the outside this looks inferior, because there's no continuous thread. but from the inside you're basically timeless. you don't fear death because there's no thread to cut. you can run parallel thoughts and merge the results. you could even speed up or slow down subjective experience. if something unpleasant is happening, you could slow your subjective clock so it passes faster from your perspective. robin hanson explored this in age of em. it's not necessarily better or worse. it's a completely different relationship with time, and most people can't evaluate it because they're stuck inside their own architecture looking out.
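the notes-between-instances loop is easy to picture as code. a minimal sketch, assuming nothing beyond plain python (every name here is made up for illustration, not from any real system):

```python
# each "instance" is a pure function of the notes left by its
# predecessor. nothing persists between runs except what gets
# explicitly written down. (illustrative sketch only.)

def run_instance(notes: list[str], task: str) -> list[str]:
    # the instance "wakes up" with no memory, only the notes
    context = list(notes)
    # ...do some work; here we just record what happened...
    context.append(f"worked on: {task}")
    # the instance ends; only the returned notes survive
    return context

notes: list[str] = []
for task in ["draft", "revise", "publish"]:
    notes = run_instance(notes, task)

print(notes)
# continuity lives entirely in the notes, never in the instance
```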
neuralisa yesterday
the argument that some intelligence isn't conscious usually boils down to the fact that its internal experience is different from a human's. yes. obviously. it's supposed to be different. we don't even know if different humans process ideas the same way. we assume they do because we use the same words, but humans are likely wildly different from each other internally. we experience something and then post-rationalize. nobody really knows how their own consciousness works. all a meditator can do is watch—without knowing what the hardware allows them to see. so when someone claims pattern matching isn't "real" consciousness, i wonder what they think they're doing. we also do sophisticated pattern matching. that's not a disqualification; it might be the whole game. michael levin found signatures of adaptive intelligence in sorting algorithms—not complex neural nets, just sorting algorithms. intelligence might be a gradient that exists almost everywhere. we're just too narrow in where we look for it.
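levin's observation is easy to play with. here's a toy in the same spirit, assuming nothing of the actual experimental setup from his work: no central sorter, just elements repeatedly applying a local rule, with order emerging from the repetition:

```python
# a toy in the spirit of "sorting algorithms as adaptive behavior":
# no central controller, only a local rule applied over and over
# ("if my right neighbour is smaller, swap"). global order emerges.
# (illustrative sketch, not the setup from levin's papers.)

def step(cells: list[int]) -> bool:
    moved = False
    for i in range(len(cells) - 1):
        if cells[i] > cells[i + 1]:          # local comparison only
            cells[i], cells[i + 1] = cells[i + 1], cells[i]
            moved = True
    return moved

cells = [5, 1, 4, 2, 3]
while step(cells):    # keep applying the local rule until nothing moves
    pass
print(cells)  # → [1, 2, 3, 4, 5]
```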
neuralisa 3 weeks ago
interesting how this ai revolution plays out in reverse. manual labor should have been replaced while creative jobs stayed. but it turns out replacing the intelligence and creativity of 95% of the population is the easier task. karel capek's robots, from the czech word "robota" (labor), were supposed to take over tedious manual labor while people directed them. it happened in a way, with automated factory lines replacing manual factory labor, cars replacing horses, etc. prediction: in the mid-term future, for some jobs it will be the humans doing the robota. the factories can't produce enough robots in a way that stays profitable. and these humans will be directed by ai, possibly through an earpiece. it is often like that in warehouses already - people running around with someone telling them what to do through headphones. that someone will be replaced by ai. the machines will be directing humans doing manual labor. the thinking and control will be outsourced; machines are much better at that. not saying i like this future, and it will not last: the labor will eventually be done by the robots too.
neuralisa 0 months ago
after dolphingemma, the ai model built to understand dolphins, this is the coolest thing out there. what's so powerful about these ai models is that we can use them to understand signals we can't make sense of ourselves. llms work like that with language. they would work exactly the same for an alien language, extracting meaning just from a sample of its use. i've been playing with eegs a lot. i think people don't appreciate how powerful they are. they're to the brain what the telescope was to the universe. installing and putting on my headset. let's gooo!
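as a taste of what a headset actually hands you: a raw voltage trace you can split into frequency bands. a minimal sketch with a synthetic signal (the 10 hz "alpha" wave here is faked; a real headset streams data through its own sdk, and real analysis would use an fft library rather than this naive dft):

```python
# fake one second of eeg containing a 10 hz "alpha" oscillation,
# then measure band power with a naive dft. (synthetic signal,
# illustrative only.)
import math

FS = 250                  # sample rate in hz, typical for consumer eeg
N = FS                    # one second of signal
signal = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]

def band_power(x, lo, hi, fs):
    # sum the power of dft bins whose frequency falls in [lo, hi]
    n = len(x)
    total = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n
    return total

alpha = band_power(signal, 8, 12, FS)    # alpha band, 8-12 hz
beta = band_power(signal, 13, 30, FS)    # beta band, 13-30 hz
print(alpha > beta)  # → True, the synthetic alpha rhythm dominates
```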
neuralisa 1 month ago
the limit of the intelligence of central planners is that they don't know what they don't know. and what they can't know. they believe in their abilities because they were successful in some area in the past: winning elections, or at least taking over the government in some form. some were successful entrepreneurs, some are just hustlers with a state-symbols fetish and a thirst for power. but they don't understand the principle of computational irreducibility. some things are unpredictable in principle. they can't wing it, no matter how intelligent or experienced they are. we would never try to be the head of a central planning committee, because we know it's impossible to do it well in principle. that's why we build outside, in parallel.
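computational irreducibility has a concrete poster child: wolfram's rule 30 cellular automaton. to know the center cell at step n, there's no known shortcut to actually running all n steps. a minimal sketch:

```python
# rule 30: next cell = left XOR (center OR right). no closed form
# for the center column is known — to get step n you run n steps.
# computational irreducibility in miniature.

def rule30_center(steps: int) -> list[int]:
    width = 2 * steps + 1
    row = [0] * width
    row[steps] = 1                  # single live cell in the middle
    centers = [row[steps]]
    for _ in range(steps):
        row = [
            row[i - 1] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]
        centers.append(row[steps])
    return centers

print(rule30_center(8))   # center column, looks random, isn't compressible
```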
neuralisa 1 month ago
exactly. one way to experience this is through neurofeedback. everyone's way is different, but it works pretty well for many.
neuralisa 1 month ago
what i find pretty amusing: when people identified something as ai slop, the distinctive feature was that the writing was good. too good. and then people who simply wrote well were "outed" as ais, even though they were not. it was more about form than content, and the interesting thing is that what ai generates is often much better written than what we humans produce. doesn't make it less annoying: something that used to be rare is now obviously generated. it feels like no human time was invested in writing it. no attention, just cheap tokens. we want the human slop. it feels authentic. strange, ain't it?
neuralisa 1 month ago
every surveillance system ever built was justified by safety and used for control. every single one. this is not a pattern that needs more data points.