# 5 Questions to Ask Before Choosing Your Tech Stack

Most tech stack decisions are made backwards. Teams pick the shiny new framework everyone's talking about, then try to justify it. Or worse, they choose what they already know because learning is uncomfortable.

Here's what actually matters:

**1. What happens when this fails at 3am?**

Not "if." When. Your monitoring alerts fire. Your logs are useless. The one person who understands this part of the system is unreachable.

Can you fix it? Can anyone on your team? Is there a community that's dealt with this before, or are you pioneering on the bleeding edge with production traffic?

Boring tech often wins here. PostgreSQL has decades of battle scars documented. That new database optimized for your exact use case? Maybe it's great. But when it breaks, you're doing archaeology instead of engineering.

**2. Who's actually going to maintain this?**

Your brilliant staff engineer wants to use Rust. Your team knows Python. Your future hires will probably know JavaScript.

This isn't about dumbing down. It's about survival. Code lives longer than jobs. The genius who built your elegant Haskell service will leave. Then what?

Choose tech your actual team can maintain and your realistic future hires can learn. Not your imaginary team of 10x engineers.

**3. What's the real cost of coupling?**

Every dependency is a bet. Every framework is coupling. Every managed service is leverage with a monthly bill and an implicit contract.

Supabase is great until you need something it doesn't do. Vercel is convenient until you're writing four-figure checks. That specialized vector database is perfect until you realize PostgreSQL with pgvector would've worked fine.

Some coupling is worth it. Most isn't. The question isn't "can this tool do what I need?" It's "what am I giving up to get this convenience?"

**4. How does this change under load?**

Your prototype works beautifully with 10 users. What happens at 1,000? At 10,000?

More importantly: can you predict the failure modes? Does performance degrade gracefully or fall off a cliff? Can you fix it by adding resources, or does the architecture itself become the bottleneck?

This isn't premature optimization. It's basic threat modeling. You don't need to build for scale on day one. But you should know what "scaling up" actually means for your choices.

**5. What can you delete?**

This is the question no one asks. Everyone obsesses over what to add. What framework, what service, what library.

Start from the other direction. What's the simplest thing that could possibly work? Not the cleverest. Not the most scalable. The simplest.

Then add complexity only when you feel real pain. Not theoretical pain. Not "what if we get featured on HN" pain. Real, present, measurable pain.

Most tech stacks are overbuilt by a factor of 10. The irony is that complexity itself becomes the main source of pain.

---

None of this is about right or wrong technologies. Rust is incredible. Kubernetes solves real problems. Microservices have their place.

But tech stack decisions aren't technical decisions. They're risk decisions. You're choosing what problems you want to have.

Choose deliberately. Most importantly, choose for your actual constraints, not your imagined ones. The best stack is the one you can understand, maintain, and debug when everything is on fire.

Everything else is details.