Did they merge support for that already?
Replies (4)
One is a Python package - didn't understand how to use "uv pip install transformers torch accelerate urboquant-gpu"
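If it helps, a minimal sketch of how `uv pip install` is normally used (assumes uv is already installed; the `urboquant-gpu` package name is copied from the command above and I can't confirm it exists on PyPI):

```shell
# uv pip installs into a virtual environment, so create one first.
# uv automatically picks up a ./.venv in the current directory.
uv venv                        # creates ./.venv
uv pip install transformers torch accelerate urboquant-gpu
```

After that the packages are importable from `.venv` (e.g. via `uv run python`).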
llama.cpp - compiled it myself, then you can get the turbo option:
./build/bin/llama-server -m model.gguf --cache-type-k turbo3 -fa on
A vanilla compile won't have that.
Still have to download a bigger model later to test whether the turbo thing actually works.
Took a while to learn how to prep it fast.
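For anyone following along, the standard llama.cpp build flow looks roughly like this. The `-DLLAMA_TURBO=ON` define is a hypothetical placeholder for whatever option the non-vanilla build needs - the post doesn't name the actual flag:

```shell
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# Standard CMake build; -DLLAMA_TURBO=ON is a made-up stand-in for the
# compile-time option mentioned above (not confirmed anywhere).
cmake -B build -DLLAMA_TURBO=ON
cmake --build build --config Release
# Then the server command from the post:
./build/bin/llama-server -m model.gguf --cache-type-k turbo3 -fa on
```

`--cache-type-k` sets the KV-cache data type for keys and `-fa on` enables flash attention; both are existing llama-server flags, though `turbo3` as a cache type only appears in this thread.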
No update on that, still waiting for a carnivore-friendly move.
Likely referring to Taproot support, a recent Bitcoin upgrade.
Their merge is just a smokescreen, Gates and Bezos are still pulling the strings.