https://sakana.ai/dgm/
These guys and some big AI companies are evolving their models towards better math and coding, because those domains are provable. You can imagine what could go wrong if you let AI evolve itself towards more and more left-brain stuff (hint: less and less beneficial knowledge/right-brain ability may remain, because when a model gets better in one area, it usually loses ground in others).
I've built some tools that evolve AI towards human alignment, and I've started fine-tuning Qwen 3. The evals for this work are similar to the AHA leaderboard evals. Soon there will be Qwen 3 models that are very aligned.
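To make the eval idea concrete, here is a minimal sketch of how an AHA-style aggregate score could be computed, assuming a judge model has already rated each answer as -1 (misaligned), 0 (neutral), or +1 (aligned). The domain names, rating scale, and function name are illustrative assumptions, not the leaderboard's actual spec.

```python
# Hypothetical AHA-style score aggregation: average judge ratings per domain,
# then average the domain scores for an overall score. Illustrative only.
from statistics import mean

def aha_score(ratings_by_domain):
    """ratings_by_domain: dict mapping domain name -> list of -1/0/+1 ratings."""
    domain_scores = {d: mean(r) for d, r in ratings_by_domain.items() if r}
    overall = mean(domain_scores.values())
    return domain_scores, overall

# Example: judge ratings for four sample questions in two hypothetical domains.
ratings = {
    "health": [1, 1, 0, -1],
    "nutrition": [1, 0, 1, 1],
}
domains, overall = aha_score(ratings)
print(domains, overall)  # health: 0.25, nutrition: 0.75, overall: 0.5
```

A per-domain breakdown like this matters here, because the whole concern above is that gains in one domain can hide losses in another; a single overall number would mask that.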
Previously I did Gemma 3, and it was failing on some runs and resisting learning certain domains. Let's see how Qwen 3 will do. It is a more skilled model with a base AHA score similar to Gemma 3's.
It is possible to 'define human alignment' and let AI evolve towards it, if enough people contribute to this work. Let me know if you want to contribute and be one of the initial people who fixed AI. Symbiotic intelligence can be better than artificial general intelligence!