Was anyone able to run Stacks with a local LLM like Ollama + a model?
@alex confirmed that it's possible, but I'm not sure which model to use. Are there any specific configurations I need to turn on in the model itself?
#asknostr
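For readers following along: "Ollama + a model" usually means the Ollama server running locally with at least one model pulled. A minimal sketch to confirm that the local server is reachable and list what's installed (assumes Ollama's default port 11434; nothing here is specific to Stacks):

```python
# Check that a local Ollama server is up and see which models are pulled.
# Assumes the default Ollama endpoint http://localhost:11434.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print("Local models:", models or "none pulled yet (try `ollama pull llama3`)")
```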
Replies (5)
There's a new tutorial.
nevent1qqsrc2kum6vef67wl2dh7tpqunlgk9ym8f8jmkgzje3lvelmqkef48sl6xccu
Thank you. I'm unable to open the link though, strange.
It's the nevent. It might only open on certain clients.
Strange. The link opens on my phone but not on my computer.
Ah, this is the normal setup.
I can run this just fine.
I want to replace the default AI provider with a local AI.
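If the app lets you override its AI provider settings, a common route is to point it at Ollama's OpenAI-compatible endpoint. Below is a rough sketch of what that swap looks like in client code; whether Stacks actually exposes a base URL/model setting is an assumption here, not something confirmed in this thread:

```python
# Sketch: using a local Ollama model through its OpenAI-compatible API,
# instead of a hosted provider. Assumes Ollama on its default port and a
# model already pulled (e.g. `ollama pull llama3`). Not Stacks-specific.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

reply = client.chat.completions.create(
    model="llama3",  # any model you've pulled locally
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(reply.choices[0].message.content)
```

If the app only lets you change the provider URL and model name (rather than the code), the same two values are the ones to plug in: the base URL http://localhost:11434/v1 and the name of a locally pulled model.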