Introducing LlamaGPT — a self-hosted, offline, and private AI chatbot, powered by Llama 2, with absolutely no data leaving your device. 🔐 Yes, an entire LLM. ✨

Your Umbrel Home, Raspberry Pi (8GB) Umbrel, or custom umbrelOS server can run it with just 5GB of RAM!

Word generation benchmarks:
Umbrel Home: ~3 words/sec
Raspberry Pi (8GB RAM): ~1 word/sec

→ Watch the demo:
→ Install on umbrelOS:
→ GitHub:
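For anyone who wants to script against their own install, here is a minimal sketch of querying the chatbot from Python. It assumes LlamaGPT exposes an OpenAI-compatible chat endpoint on localhost:3001; the actual host, port, path, and model name depend on your setup, so check your own instance before relying on these values.

```python
# Rough sketch of querying a self-hosted LlamaGPT instance from Python.
# The URL and model name below are assumptions, not guaranteed defaults —
# verify them against your own install.
import requests

API_URL = "http://localhost:3001/v1/chat/completions"  # assumed endpoint

payload = {
    "model": "llama-2-7b-chat",  # placeholder model name
    "messages": [
        {"role": "user", "content": "In one sentence, what is a self-hosted, private LLM?"}
    ],
    "max_tokens": 128,
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Since nothing leaves the machine, the only network hop is to localhost; the same script works over your LAN if you point API_URL at the server's address.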

Replies (24)

It may just be taking a long while to download (since the app’s ~5.5GB). Also, can you confirm if you have at least 6GB of total RAM on your system?
RealJohnDoe 2 years ago
You can run almost anything inside a virtual machine like VirtualBox, and there are many others. You could also dual boot Linux Mint and run it that way. 🙂💎
Umbrel has released LlamaGPT, an AI you can download to your computer and use privately and personally; your information never leaves your machine. These are the kinds of initiatives that make me believe in a better future for technology and the internet ✨😎
On Hetzner ARM machines it works really well. On a CAX41 with Nous Hermes Llama 2 13B (GGML q4_0) I get around 9 tokens/s, cool for quickly messing around.
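If you want to reproduce that kind of tokens/s number on your own hardware, here is a rough sketch using llama-cpp-python. The model path and thread count are placeholders, not the poster's exact setup; newer llama-cpp-python builds expect GGUF model files, while older releases load the GGML files mentioned above.

```python
# Rough tokens-per-second measurement with llama-cpp-python.
# Model path and thread count are placeholders — point them at your own
# model file and core count (a Hetzner CAX41 has 16 vCPUs, for example).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="./nous-hermes-llama2-13b.q4_0.bin",  # hypothetical local path
    n_threads=8,                                     # adjust to your core count
)

prompt = "Write a short paragraph about stoic philosophy."
start = time.time()
out = llm(prompt, max_tokens=128)
elapsed = time.time() - start

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f}s -> {n_tokens / elapsed:.1f} tokens/s")
```

Generation speed scales roughly with memory bandwidth and core count at these quantization levels, which is why the ARM server numbers above are so far ahead of the Raspberry Pi figures in the announcement.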
Great idea, I was eagerly waiting for it. I installed it overnight on my Pi 8 GB, but was very disappointed when I started it this morning. It was VERY slow and could not answer my first two questions (the first about historic exchange rates, the second about stoic philosophy). Llama via nostrnet.work from @iefan 🕊️ answered both questions quickly. Also, the other apps on my Pi became slower, so I uninstalled the app ... No offence to @Umbrel ☂️, you did a great job offering this one-click install app, but one has to be realistic ... You cannot run a fully functional and competitive Llama on a Raspberry Pi ...