Was anyone able to run stacks with a local LLM like Ollama + a model? @alex confirmed that it's possible, but I'm not sure which model to use. Are there any specific configurations I need to turn on in the model itself? #asknostr
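
To clarify what I mean by "Ollama + model": a minimal sketch of hitting a local Ollama instance over its HTTP API, assuming it's running on the default port 11434 and that a model (llama3 here, just as an example) has already been pulled with `ollama pull llama3`.

```python
# Minimal sketch: query a local Ollama server.
# Assumptions: Ollama running on the default port 11434,
# and the "llama3" model already pulled (example model name).
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",   # example model; any locally pulled model works
        "prompt": "Say hello",
        "stream": False,     # return one JSON object instead of a stream
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```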
