LLMs are essentially two files: a parameters file (e.g. ~140GB for Llama 2 70B, holding 70 billion parameters) and a small run file that executes the model.
These two files create a self-contained model that can run on a device like a MacBook for inference.
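A minimal sketch of that split in practice, assuming the parameters sit in a local GGUF file and the llama-cpp-python package is installed; the model path and prompt are placeholders:

```python
# Sketch only: the big file holds the weights, and these few lines are the
# entire "run" side. Model path and prompt are placeholders.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-70b.Q4_K_M.gguf")    # the parameters file
out = llm("The capital of Austria is", max_tokens=8)   # the run step: predict the next tokens
print(out["choices"][0]["text"])
```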
Good Morning Nostratis!
Stay humble and keep stacking sats⚡️


When you become a Bitcoiner, your frustration with no-coiners becomes your main challenge in life.
LLMs are like a lossy zip file of the internet, compressing roughly 10TB of text into ~100GB of parameters.
They don’t store facts; they predict the next word based on patterns. Mind-blowing how this creates “knowledge”!
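A toy illustration of that "predict the next word" step, with made-up scores standing in for what the network actually computes:

```python
# Toy sketch: a real model scores every word in a ~100k-word vocabulary;
# here five words and hand-picked scores stand in for that.
import numpy as np

vocab  = ["the", "internet", "bitcoin", "inflation", "fixes"]
logits = np.array([0.2, 1.1, 3.0, 0.5, 2.4])           # raw scores from the network (made up)

probs = np.exp(logits) / np.exp(logits).sum()          # softmax -> probability of each word
next_word = vocab[int(np.argmax(probs))]               # greedy decoding: take the most likely word

print(dict(zip(vocab, probs.round(3))))
print("next word:", next_word)                         # "bitcoin" with these scores
```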
Google's Veo 3 sometimes generates weird shit. 😅
Sound On 🔊
Seriously, Apple!
What's the point of this notification when users only get it 10 minutes after leaving the spot?


The two most iconic couple poses of all time!


Good Morning Nostratis 🌅 

Day in Hallstatt, Austria 🇦🇹
What a beautiful place!


Good Morning Beautiful Nature 🍃


Good morning from Southern Germany!
Temperature is just 12°C here!
Perfect weekend retreat.


Waking up to €100k Bitcoin once again 🤩
#Bitcoin
The largest LLMs are trained on text data equivalent to reading for 200,000 years straight.
To put that in perspective, if you started reading when the first modern humans left Africa, you'd just be finishing now.
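A back-of-envelope check of that number; the token count, words-per-token ratio and reading speed below are all assumptions, so treat the result as an order-of-magnitude figure:

```python
# All inputs are assumptions (roughly frontier-scale training data).
tokens          = 15e12   # ~15 trillion training tokens
words_per_token = 0.75    # rough English average
words_per_min   = 200     # typical reading speed

minutes = tokens * words_per_token / words_per_min
years   = minutes / 60 / 24 / 365.25                 # reading non-stop, 24/7

print(f"{years:,.0f} years of continuous reading")   # ~107,000 years with these inputs
```

With slower reading or a larger corpus the figure lands in the low hundreds of thousands of years, which is the ballpark the post points at.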
Inflation is a stealth tax.
Get a piece while you can.
#Bitcoin


Good Morning Nostratis 🌅
Stay humble and keep stacking sats⚡️


LLMs store knowledge in a "latent space," a high-dimensional map of concepts.
Prompting navigates this space, but small tweaks can lead to wildly different outputs.
This is why the quality of the output depends on the quality of your input (prompt).
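A toy illustration of why small prompt tweaks cascade (not a real LLM, just a hand-made lookup table playing the role of the learned patterns): each predicted word is fed back in, so changing one word in the prompt changes every step that follows.

```python
# Hand-made "pattern" table standing in for the model's learned associations.
patterns = {
    "bitcoin":  ["fixes"],
    "fixes":    ["this"],
    "fiat":     ["inflates"],
    "inflates": ["away"],
}

def continue_prompt(prompt: str, steps: int = 3) -> str:
    """Greedy decoding: repeatedly append the most likely next word."""
    words = prompt.lower().split()
    for _ in range(steps):
        options = patterns.get(words[-1])
        if not options:
            break
        words.append(options[0])      # each output word feeds back into the next step
    return " ".join(words)

print(continue_prompt("bitcoin"))     # -> "bitcoin fixes this"
print(continue_prompt("fiat"))        # -> "fiat inflates away": one word changed, whole path changed
```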
GenZ in a nutshell. 

1 ₿ = 1 ₿