- Roll Roll AI: Build // AI.
The Bitter Lesson
Do you know Rich Sutton? Yeah, it's about that.

Hey.
Here's the counterintuitive opportunity that bruises the egos of PhD teams: you don't need to be an AI researcher to beat AI research labs at their own game. You can build winning products by betting on a simple truth that has held for 70 years: brute computational force beats human cleverness, every single time.
It's a business strategy that gives product builders a massive advantage over people focusing on novelty… if you know how to use it.
Best Reads of the Week
The Art of Conversational Authoring: How AI Interaction Mirrors the Craft of Fiction Writing - Effective AI interaction is less like engineering and more like collaborative fiction writing: an act of narrative construction where you craft personas, contexts, and stepwise reasoning as if authoring a story together.
Corpospeak: Why you still sound like a faceless corporate entity - You can’t delegate sounding real… authentic communication requires builders and leaders to speak directly, often, and personally.
The Enshittification of AI is coming - AI is repeating the same cycle as past tech booms: lavish subsidies now, inevitable enshittification later. Expect today’s free, user-friendly tools to become tomorrow’s manipulative, paywalled platforms.
Software engineering with LLMs in 2025: reality check - GenAI coding tools have reached an inflection point: once-dismissive veteran engineers now see them as a fundamental shift in software development, making it urgent for everyone to start experimenting hands-on. (Note: this is a deep-dive post; a one-liner can't really do it justice.)
The Rise Of BS AI Standards And Agentic Benchmarks - BS AI benchmarks create the illusion of progress while avoiding the hard work of proving real-world outcomes: only human-level performance in live workflows is a meaningful standard.
The Researcher's Curse Is Your Opportunity
Rich Sutton's "Bitter Lesson" isn't just about AI history. It's about human psychology, and why brilliant researchers consistently make the wrong strategic bets.
Sutton showed that for 70 years, approaches relying on human knowledge lost to simple methods with more computation. Chess programs that “understood” strategy lost to brute force search. Go systems based on human intuition lost to neural networks trained on data.
But here’s what Sutton didn’t highlight: this creates an opening for product builders who don’t care about research elegance.
Researchers chase clever solutions because that’s their job. They push boundaries, invent architectures, and publish papers. They’re rewarded for novelty, not for scaling what works.
You’re rewarded for shipping products.
The Pattern That Prints Money
Every big AI success follows the same pattern: scaling beats novelty.
IBM didn’t need chess grandmasters to build Deep Blue. They needed engineers who could scale search. The experts who spent years encoding human knowledge lost to brute force and hardware.
DeepMind’s Go victory wasn’t about understanding strategy. It was about scaling neural networks and tree search. Years of handcrafted Go knowledge became obsolete overnight.
OpenAI didn’t invent transformers; Google did. But OpenAI pulled ahead by simply scaling transformers instead of chasing new architectures. While other labs published papers on attention tweaks, OpenAI made GPT bigger.
Why Non-Researchers Have the Advantage
As a product builder, you have three advantages:
No attachment to cleverness. Researchers invest years in specific methods. Their reputation depends on it. When scaling threatens that work, they resist. You don’t. If brute force wins, you pick brute force.
Better resource allocation. Researchers spend months refining models for tiny gains. You can use that time to build distribution, collect data, and prepare infrastructure.
Speed. While researchers debate theory, you ship. You don’t have to know why something works, only that it does.
Betting on Future Capabilities
Instead of patching today’s model weaknesses, bet that future versions will fix them. Perplexity did it; you can too.
GPT-3 struggled with complex instructions. Rather than building elaborate prompt hacks, some teams waited for GPT-4. They were right.
Current models hallucinate facts. You can bet future models will be more reliable. Focus on distribution and user experience now, expecting core capabilities to improve.
The point isn’t just believing “AI gets better.” It’s betting that scale and compute will solve today’s problems.
How to Play
Pick scaling over novelty. Choose the most scalable approach available and plan for it to get dramatically better. Don't try to solve its current limitations with clever engineering; bet on future versions solving them with more scale.
Build distribution early. Use current capabilities to build user acquisition, brand recognition, and market position. When better models arrive, you'll be ready to capitalize immediately.
Invest in infrastructure. Set up data collection and feedback systems that compound as models advance.
Time your entry. Don’t wait for perfect models. Launch when they’re good enough for early adopters.
The Boring Path to Victory
Sutton’s bitter lesson is a strategy: bet on scaling, not novelty. Build distribution. Focus on user experience.
Researchers will publish papers. You’ll build products that matter.
It's how you win.
Recommendation of the week
Share your thoughts:
How did you like today’s newsletter?
You can share your thoughts at [email protected] or share the newsletter using this link.