Kapisch's avatar
Kapisch
kapisch@nostr.fan
npub1l0wk...mwpp
Stay humble and keep stacking sats⚡️ Creator of the SkyLocation App, available now on the App Store.
BitbyBit 6 months ago
An LLM is essentially two files: a parameters file (e.g., ~140GB for Llama 2 70B, holding 70 billion parameters) and a run file. Together they form a self-contained model that can run inference on a device like a MacBook.
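A minimal sketch of the "two files" idea: one file holds raw weights, the other holds the code that loads them and runs a forward pass. The model here is a made-up single dot-product layer, not a real LLM; all names and numbers are illustrative assumptions.

```python
import struct, tempfile, os

# --- "parameters file": just raw floats on disk (toy weights) ---
params = [0.5, -1.0, 2.0]
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(struct.pack(f"{len(params)}f", *params))
    path = f.name

# --- "run file": code that loads the weights and does inference ---
def run(weights_path, x):
    with open(weights_path, "rb") as f:
        raw = f.read()
    w = struct.unpack(f"{len(raw) // 4}f", raw)  # 4 bytes per float32
    return sum(wi * xi for wi, xi in zip(w, x))  # toy "forward pass"

y = run(path, [1.0, 1.0, 1.0])  # 0.5 - 1.0 + 2.0 = 1.5
os.remove(path)
```

A real run file (e.g. llama.cpp) works the same way in spirit: a few hundred lines of loader and inference code, with all the "knowledge" living in the weights file.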
BitbyBit 6 months ago
Guten Morgen Nostratis! Stay humble and keep stacking sats⚡️ image
BitbyBit 6 months ago
When you become a Bitcoiner your frustration level towards no-coiners becomes your main challenge in life.
BitbyBit 6 months ago
LLMs are like a lossy zip file of the internet, compressing ~10TB of text into ~100GB of parameters. They don't store facts verbatim; they predict the next word based on patterns. Mind-blowing how this creates "knowledge"!
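The compression ratio in that analogy is easy to check. The figures below are the post's own ballpark numbers, not measurements:

```python
# ~10 TB of training text distilled into ~100 GB of parameters
text_tb = 10
params_gb = 100

ratio = (text_tb * 1000) / params_gb  # convert TB to GB, then divide
print(f"~{ratio:.0f}x 'lossy compression'")  # ~100x
```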
BitbyBit 6 months ago
Google's VEO 3 sometimes generates weird shit. 😅 Sound On 🔊
BitbyBit 6 months ago
Seriously, Apple! What's the point of this notification when users are only notified 10 minutes after leaving the spot? image
BitbyBit 6 months ago
Two of the most iconic couple poses of all time! image
BitbyBit 7 months ago
Day in Hallstatt, Austria 🇦🇹 What a beautiful place!
BitbyBit 7 months ago
Good Morning Beautiful Nature 🍃 image
BitbyBit 7 months ago
Good morning from Southern Germany! Temperature is just 12°C here! Perfect weekend retreat. image
BitbyBit 7 months ago
Waking up to €100k Bitcoin once again 🤩 #Bitcoin
BitbyBit 7 months ago
The largest LLMs are trained on text data equivalent to reading for 200,000 years straight. To put that in perspective, if you started reading when the first modern humans left Africa, you'd just be finishing now.
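A back-of-the-envelope check of the "200,000 years of reading" figure. The corpus size and reading speed below are my own assumptions, not numbers from the post, so treat the result as an order-of-magnitude estimate:

```python
# Assumed: ~15 trillion words of training text, read nonstop
# at 150 words per minute (no sleep, no breaks).
words = 15e12
words_per_minute = 150

minutes = words / words_per_minute
years = minutes / (60 * 24 * 365)
print(f"~{years:,.0f} years of continuous reading")  # on the order of 10^5
```

With these assumptions you land around 190,000 years, so the post's "200,000 years" is the right order of magnitude.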
BitbyBit 7 months ago
Get a piece while you can. #Bitcoin image
BitbyBit 7 months ago
Good Morning Nostratis 🌅 Stay humble and keep stacking sats⚡️ image
BitbyBit 7 months ago
LLMs store knowledge in a "latent space," a high-dimensional map of concepts. Prompting navigates this space, but small tweaks can lead to wildly different outputs. This is why the quality of the output depends on the quality of your input (prompt).
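A toy sketch of what "a map of concepts" means: related ideas sit closer together than unrelated ones, measured by cosine similarity. Real latent spaces have thousands of dimensions; the 3-d vectors and words below are invented purely for illustration.

```python
import math

# Hand-made toy "embeddings" (not from any real model)
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "pizza": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

sim_royal = cosine(vectors["king"], vectors["queen"])
sim_food = cosine(vectors["king"], vectors["pizza"])
# Related concepts score higher: sim_royal > sim_food
```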