“Privacy is great, but I need powerful AI.” Now you don't have to choose. Kimi K2 just launched in Maple AI. You get open reasoning that beats GPT-5 & Claude Sonnet 4.5 on key benchmarks, and end-to-end encryption so your data stays under your control. You can have it all. Read the full announcement:

Replies (57)

Will there be more options for sharing existing content with the AI? To be able to share a folder containing the codebase for a project would make a big difference!
Are you guys aware of the background refresh bug on iOS? Black screens every time I open it after using it prior. I have to force close and reopen.
Can multiple people use one subscription without their chats being visible to each other? Anthropic is disallowing that, and I'd like a solution around it.
dustygrooves 2 months ago
I think I will use the paid version in 2026. Any promo codes?
Forrests 2 months ago
You’re a child molester. Kill yourself.
That's cool, I might just become a paid subscriber. I'm testing your app. I paid for Lumo+ and it's complete garbage. Is the paid sub faster than the free one?
Probably not coming to Maple, unfortunately. Too much awful stuff can be done with it, and we already have lots of awesome, useful things we want to build in Maple aside from image gen.
You willing to delete and reinstall to see if that fixes it? We have a handful of people reporting this but haven't locked down what's causing it. Your data won't be lost because it's all stored encrypted.
Would be great to use Zapstore. Whilst Maple does have an APK download (which is nice), a usable app store is better, and Zapstore publishing is as straightforward as it gets.
Do you find it happens more on a cell connection or wifi? And do you use a VPN on your phone? We suspect it's an initial connection issue and are trying to diagnose. Feel free to opt out of sharing tech details if you don't want to.
1) Download the CLI:
2) Create zapstore.yaml:
```
repository:
assets:
  - .*.apk
remote_metadata:
  - github
  - playstore
```
3) Create .env:
SIGN_WITH=(nsec|bunker url|NIP07|npub to output and pipe to signer)
4) Run: zapstore publish
More here: or just ask
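The steps above can be sketched as a shell session. The config contents come straight from the steps; the nsec value is a placeholder (you'd use your real key, a bunker URL, NIP-07, or an npub), and the actual `zapstore publish` run is left commented out since it needs the installed CLI and a signer.

```shell
# Sketch of steps 2-4 above; signing key is an illustrative placeholder.

# 2) Create zapstore.yaml describing the repo and release assets
cat > zapstore.yaml <<'EOF'
repository:
assets:
  - .*.apk
remote_metadata:
  - github
  - playstore
EOF

# 3) Create .env with the signing identity (placeholder value here)
cat > .env <<'EOF'
SIGN_WITH=nsec1exampleplaceholder
EOF

# 4) Publish (requires the zapstore CLI from step 1)
# zapstore publish

echo "config written: zapstore.yaml .env"
```

Keeping SIGN_WITH in a .env file (rather than on the command line) avoids leaking the key into shell history.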
What's in it for you:
- Discovery: thousands of Nostr users who use Zapstore
- Auto-updates (typically not possible with a bare APK)
- Security: users who first install can cryptographically verify your release, important for a privacy-focused product
- Bonus: get zapped and support freedom tech
All good, happy to help try and diagnose. I'll keep notes on cell/wifi; I don't feel like either makes a difference currently. I'm running Obscura as my VPN.
I want to use @Maple AI for coding, but it still has some maddening limitations that make it not a viable option yet. I don't know which are the LLM's fault and which are Maple's, but a few improvements would make a big difference:

File sharing:
- I can only upload one file at a time, and not with drag and drop; there's a file picker that starts from scratch every time instead of opening the last directory I used.
- There's no option to show it an online repository or a folder on the device so it can scan more than one file at once. Even without this, being able to upload a bunch of files in one message more efficiently would be enough.

Behaviour:
- No matter what I tell it, the AI responds to every prompt by jumping straight into fixing or writing code. I literally just said (using Kimi K2 Thinking), "I can only share files one at a time, so don't respond until I've shared a bunch to show you what's happening." I expected a short response back, maybe with observations about that first file, but sure enough, it generated a whole page's worth of code. How do I get it up to speed with what I want when that's the response to every prompt, it takes 10 prompts to get my point across, and all the unwanted processing eats up usage credits and context window space?
- I saw the same behaviour with Qwen3 Coder a few weeks ago; if it's a Maple thing, maybe it can be easily fixed.

With Claude Sonnet 4.5, I can tell it to wait as I get it up to speed, upload 20 or so files at a time if I like, and let it observe as I go. It does respond to each message, but if I tell it to, it will wait until I say what I'd like to build next. It feels like a functional conversation with a capable employee instead of a frustrating one with an overconfident, pushy employee.

I hope this is constructive. The tool is so close to being ready, and I hope it becomes the one I use soon.
It's fine for asking questions, but for working on a project like I am, it feels like a battle the whole way through.
Wondering if I'm being unreasonable, I'm running a test: taking turns trying each model within one chat, with the same prompt, "I'll upload a file - don't do anything with it until I tell you more." I'm uploading a different txt file each time. Llama 3.3, OpenAI GPT-OSS, Gemma, and Qwen3-VL passed the test fine. Qwen3 Coder and even Kimi K2 showed considerable restraint. DeepSeek R1 thought for 44 seconds and responded with a long answer, but didn't write code. So I can try uploading files with the chat set to a model that obeys more easily, then switching to an overly eager one when it's time to build.
I'm sorry, I spoke a bit hastily. I can make it work, if I'm really firm with it, and patient with the file-sharing process. Looking forward to the improvements I'm sure will come.