Hello! During the first week of #SEC04 (I know I'm a bit delayed), I created LLoms, a simple LLM chat CLI tool with MCP capabilities. Think of it as a lighter version of Goose, with fewer complexities under the hood. LLoms uses Ollama for inference, and I plan to eventually support OpenAI-compatible APIs so you can use LLoms with @PayPerQ, for example.
In this demo video, I showcase LLoms using `qwen2.5:3b`, a small model running locally on my computer that can call tools. An interesting detail is that the MCP tools themselves are not running locally; instead, I use dvmcp-discovery to reach tools hosted elsewhere.
If you want to experiment with LLoms, you can find the repository here: https://github.com/gzuuus/lloms
Additionally, here is the dvmcp repository: https://github.com/gzuuus/dvmcp
To use the same tools you can see in the video, set up the MCP server in the LLoms config file with the command `npx` and the argument `@dvmcp/discovery`. I recommend first running `npx @dvmcp/discovery` in the same directory to generate its configuration file, and then setting the relay to `wss://relay.dvmcp.fun`.
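For reference, here is a rough sketch of what that MCP server entry could look like, using the common `mcpServers` JSON shape; the exact file name and field names are whatever LLoms actually expects, so treat this as illustrative and check the repo's README rather than copying it verbatim:

```json
{
  "mcpServers": {
    "dvmcp-discovery": {
      "command": "npx",
      "args": ["@dvmcp/discovery"]
    }
  }
}
```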
Enjoy, and please provide feedback if you have any :)