What's your local LLM setup?
Replies (3)
Ollama with QwQ or Gemma 3
Tried Qwen3 yet?
How much video RAM do you need to run a version of these models that's actually smart, though? I tried the DeepSeek model that fits within 8 GB of VRAM, and it was basically unusable.
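A rough rule of thumb (an estimate, not a vendor-published requirement): weights take roughly `parameters × bits-per-weight / 8` bytes, plus some headroom for the KV cache and activations. A quick sketch, assuming ~20% overhead:

```python
def vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate VRAM (GB) to hold a model's weights plus runtime overhead.

    overhead=1.2 is an assumed ~20% margin for KV cache and activations.
    """
    bytes_for_weights = params_billions * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# An 8B model at 4-bit quantization: weights ~4 GB, ~4.8 GB with overhead,
# which is why 8 GB cards top out around that size class.
print(round(vram_gb(8, 4), 1))   # -> 4.8
print(round(vram_gb(32, 4), 1))  # -> 19.2 (32B at 4-bit wants a 24 GB card)
print(round(vram_gb(70, 4), 1))  # -> 42.0 (70B at 4-bit needs multi-GPU or unified memory)
```

By this estimate, the models people usually describe as noticeably smarter (30B+ even when quantized) need well beyond 8 GB, which matches the experience above.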