# Intelligence Is Abundant. The Bottleneck Is Us.

## From Scarcity to Overwhelming Abundance
Early in my career, some tools still shipped with full documentation on CDs. Seniors always seemed to remember more — not just because they were experienced, but because forgetting had a cost. Looking things up was slow, so knowledge stuck.
Then came the internet. Google turned information into a utility. Remembering became optional; what mattered was searching effectively.
But with that came something else: overwhelming abundance. For every question, a million answers. For every fact, a thousand sources. The challenge was no longer access, but separating signal from noise.
## From Abundance to Intelligence
Now we’re in a different kind of shift. With large language models, it’s not just information that’s abundant — it’s intelligence itself.
Here’s the thing: talking to an LLM is like consulting a world-class specialist. If you don’t know how to frame the problem, you won’t get their best. The limit is no longer what the system can do; it’s whether you asked for the right thing.
That changes the game. When machines are smarter than you in a given domain, the bottleneck isn’t the model. The bottleneck is you.
## The Three Dimensions of Asking the Right Thing
“Good” questions aren’t enough. To make the most of an interaction with intelligence — human or artificial — you need to cover three dimensions:
- Information: What exists? What data or knowledge am I missing? This is the obvious entry point, the part search engines solved.
- Understanding: What does this mean for me, given my objectives and constraints? This is where context and limitations come in, and where raw information becomes insight.
- Guidance: Given my objectives, what should I do next? This is the move from knowing to acting, where advantage compounds.
Most current use of LLMs sits in the “information” dimension. To unlock their true value, we need to consistently reach for understanding and guidance.
But there’s a catch: just like when you talk with a human expert, the value of the conversation depends on the foundation you bring. This is where divergent → convergent thinking matters. First, go wide: do (deep) research, explore different sources, expand your horizons. Then, converge: filter, focus, and frame sharper questions. The wider you diverge, the more powerful your convergence becomes — and the more valuable the interaction is.
## Why This Site
That’s the reason for this site.
Not to publish another round of “how-to” guides (AI will automate much of that anyway), but to explore the concepts and frameworks that help us ask the right thing:
- How to frame performance when “accuracy” becomes probabilistic.
- How to think about governance when intelligence doesn’t sit neatly inside your org chart.
- How to treat cost when AI spend looks like cloud spend circa 2010.
Because the future won’t belong to those with the fastest answers. It will belong to those who consistently know how to ask the right thing.
## Call to Action
If this were a conversation with a human expert, you wouldn’t walk in empty-handed. You’d explore the topic first, scan open research, and expand your frame of reference so you could converge on the right questions.
Treat AI the same way. Start with divergence — explore broadly, learn widely — then converge on sharper, more targeted questions. That’s how you move beyond commoditized information and unlock real understanding and guidance.
That’s why this site exists in an era of abundant intelligence: to surface the questions worth asking, the concepts worth exploring, and the frames worth testing. AI can take you deep — but only if you start wide. This site is here to help widen that starting point.
Welcome aboard.
## Executive takeaways
- Treat intelligence as abundant — your leverage is in framing.
- Go beyond information: push for understanding and guidance.
- Diverge first (research broadly), then converge to sharper asks.
- Make prompts collaborative: state objectives, constraints, and success signals.
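
To make that last takeaway concrete, here is a minimal sketch (in Python, purely for illustration) of what stating objectives, constraints, and success signals can look like. The `build_prompt` helper and the example fields are hypothetical, not a prescribed template or any library’s API.

```python
# A minimal, illustrative sketch: assemble objective, constraints, and
# success signals into a single request. The helper name and example
# fields are hypothetical, not a prescribed template or a library API.

def build_prompt(objective: str, constraints: list[str], success_signals: list[str]) -> str:
    """Frame a request so the model knows the goal, the limits, and what 'good' looks like."""
    lines = [
        f"Objective: {objective}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        "Success signals:",
        *[f"- {s}" for s in success_signals],
        "Given the above, what should I do next, and what are the trade-offs?",
    ]
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_prompt(
        objective="Reduce AI spend by 20% without hurting user-facing latency",
        constraints=["No vendor change this quarter", "Keep p95 latency under 800 ms"],
        success_signals=["A ranked list of options", "Estimated savings per option"],
    ))
```

Note that the closing line asks for guidance (“what should I do next”), not just information: that is the shift from the first dimension to the third.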