
For now I'm building a custom GPT inside ChatGPT. I'm looking at the Nvidia DGX Spark, but it appears overpriced and underperforming compared to a home-built GPU rig. I think V2 of the Nvidia Spark may be better, or there will probably be a flood of competing products shortly, forcing price and technology competition. Watch this space.
ty for this, defo not seen this before. got challenges with notifications rn. Some of the notes I'm following I don't get to see unless we're online at the same time or I go onto your profile 😬 sucks atm! but will suck it up for now. I like the privacy aspect of it. Do you train your AI on all your data, or do you use specific dummy data?
I'm using real data with ChatGPT, but I need to stop doing that and start again with a private LLM, not just for privacy's sake, but also to guard against manipulation. I don't want a corporation deciding to clear down my profile or manipulate it without me knowing. Also, this way the ability to grow or examine my own profile data remains firmly in my control.
sounds like you have managed to think it through. I love playing with data from my prev job too, so that would be a great side hobby for me. got a spare Dell server waiting to be used! most of my data is air-gapped rn, so testing an AI agent locally is the way to go. keep us posted! πŸ«‚β˜ΊοΈ on a side note, last yr I nearly lost 15 yrs' worth of my data and was sh*tting myself trying to recover it. lesson learned 🀣🀣🀣
Sounds crazy, but I'm genuinely having a crack at immortality by training an LLM with my personality, morals and knowledge. This will be the private part, which will augment its knowledge by connecting to relevant AIs when needed. The eventual goal is to load this into a humanoid robot before I die so I can keep managing the family office for the next 150 years, because my family have no interest in this πŸ˜‚
No, a good LLM needs capable GPUs and a lot of RAM (over 100GB). The Nvidia DGX Spark is Nvidia's first attempt at a dedicated home LLM machine, but it can be beaten by a good home-built custom rig. I'm going to wait for the competition to catch up to Nvidia and force evolution and price competition before I get my private LLM up and running.
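To put the "over 100GB" figure in rough perspective: a model's memory footprint is approximately parameter count times bytes per weight, plus some overhead for the KV cache and activations. A minimal back-of-the-envelope sketch (the 70B model size, the 20% overhead factor, and the quantization levels are illustrative assumptions, not exact requirements):

```python
def est_memory_gb(params_billions: float, bits_per_weight: int,
                  overhead: float = 0.2) -> float:
    """Ballpark RAM/VRAM needed to run a model locally:
    weights = parameters x bytes per weight, plus ~20% overhead
    (assumed) for KV cache and activations."""
    weight_gb = params_billions * 1e9 * (bits_per_weight / 8) / 1e9
    return round(weight_gb * (1 + overhead), 1)

# A 70B-parameter model (size chosen purely for illustration):
print(est_memory_gb(70, 16))  # fp16: 168.0 GB, well past 100GB
print(est_memory_gb(70, 4))   # 4-bit quantized: 42.0 GB
```

This is also why quantized models are popular on home rigs: dropping from fp16 to 4-bit cuts the memory footprint roughly fourfold.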
A lot of people like to use an M4 Mac mini fully loaded with RAM; it comes pretty close to the Nvidia machine for about half the price.