Capability Overhang — We’re Driving AI Like It’s a City Car
A recent piece I read used a phrase that stuck with me: capability overhang. The idea is simple: today’s AI models may already be capable of far more than most of us actually extract from them in daily life. The gap is not only in intelligence; it’s in usage.
Here’s the metaphor that helped me internalize it. Imagine you own an absurdly powerful sports car. The engine is built for speed. But you mostly drive it like a small city car: short trips, narrow streets, traffic lights, speed bumps, and a constant need to watch for signs. The car isn’t the limiting factor. The roads are. And the quality of your navigation matters as much as horsepower.
AI has horsepower. We have narrow roads.
In AI terms, “narrow roads” are our current interfaces and habits: we use chat boxes, we type short prompts, we ask one question at a time, and we manually split projects into dozens (or hundreds) of micro-steps. This works — but it often forces a strange rhythm: the human becomes the project manager, the memory, the quality control, and the integrator.
That’s not a criticism. It’s simply how early tools look. Most of us are still “driving” AI in first gear because our workflows were built for the pre-AI world. We are adapting humans to AI, instead of adapting the work to the new capability.
The prompt isn’t the project — the project is the prompt
I’ve noticed something in my own work: many small projects take days or weeks not because the model can’t do them, but because I feed the work to the model as a long sequence of tiny prompts. Each prompt is a speed bump: I restate context, I correct drift, I reconcile decisions, I re-align style, I ask for a new version, and so on.
Now imagine the opposite: a single dense “project prompt” that contains:
- the goal (what success looks like)
- constraints (style, tone, tech stack, legal/brand rules)
- inputs (existing files, data, assumptions)
- deliverables (exact outputs you want)
- checks (how to validate correctness)
If you can express the whole project like that — coherently — the model can often produce a surprisingly complete first draft. Not perfect, but already shaped. Iteration becomes refinement, not reconstruction.
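To make that concrete, here is a minimal sketch of what a reusable "project prompt" template could look like. Everything in it is an assumption for illustration: the ProjectBrief class, the example project, and the file names (copy.md, brand-colors.txt) are hypothetical, not a real library or API.

```python
from dataclasses import dataclass


@dataclass
class ProjectBrief:
    """Hypothetical container for a single project-level prompt.

    The fields mirror the list above: goal, constraints, inputs,
    deliverables, checks.
    """
    goal: str                # what success looks like
    constraints: list[str]   # style, tone, tech stack, legal/brand rules
    inputs: list[str]        # existing files, data, assumptions
    deliverables: list[str]  # exact outputs you want
    checks: list[str]        # how to validate correctness

    def render(self) -> str:
        """Flatten the brief into one dense prompt string."""
        def section(title: str, items: list[str]) -> str:
            bullets = "\n".join(f"- {item}" for item in items)
            return f"{title}:\n{bullets}"

        return "\n\n".join([
            f"Goal:\n{self.goal}",
            section("Constraints", self.constraints),
            section("Inputs", self.inputs),
            section("Deliverables", self.deliverables),
            section("Checks", self.checks),
        ])


# A deliberately small example project, expressed as one prompt.
brief = ProjectBrief(
    goal="A static landing page for a newsletter signup.",
    constraints=["Plain HTML/CSS, no frameworks", "Match the existing brand tone"],
    inputs=["copy.md (draft copy)", "brand-colors.txt (color palette)"],
    deliverables=["index.html", "style.css", "a short list of decisions made"],
    checks=["HTML validates", "page renders with JavaScript disabled"],
)
print(brief.render())  # paste this single prompt into the model, once
```

The shape matters more than the class: the point is that the whole project fits into one coherent artifact you can hand over, version, and reuse.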
In other words: a lot of “AI productivity” isn’t about better models. It’s about learning to write better roads.
Why we don’t do this more often
Two reasons, in my experience:
- We lack navigation tools. When the environment is complex, we need structure: templates, checklists, context packets, stable project memory, and a way to keep decisions consistent.
- We lack confidence in first-pass synthesis. We are trained to distrust “one-shot” work. So we micromanage by default, even when the model could have handled a larger chunk.
That’s why the next leap won’t come only from bigger models. It will come from better orchestration, better interfaces, and better habits: systems that let us drive at highway speed without crashing.
What “better roads” look like in 2026
My bet is that the most valuable progress this year will look like this:
- Project-level prompts replacing chatty, step-by-step prompting
- Workflow scaffolds (plans, validation loops, structured outputs; a minimal sketch follows this list)
- Tools and adapters that turn reasoning into action (search, files, APIs, tests)
- Memory you can trust (what was decided, why, and what must not change)
- Human roles shifting from “typing prompts” to “setting goals and verifying”
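To make the workflow-scaffold item concrete, here is a minimal sketch of a validation loop, under heavy assumptions: call_model is a placeholder stub rather than any real API, and the keyword-scan validator stands in for real checks such as tests, linters, or schema validation.

```python
def call_model(prompt: str) -> str:
    # Placeholder: wire this to whichever model client you actually use.
    raise NotImplementedError("plug in your model client here")


def validate(draft: str, checks: list[str]) -> list[str]:
    # Naive stand-in validator: returns the checks that fail.
    # A real scaffold would run tests, linters, or schema checks instead.
    return [check for check in checks if check.lower() not in draft.lower()]


def run_project(brief: str, checks: list[str], max_rounds: int = 3) -> str:
    draft = call_model(brief)  # one integrated first pass from the full brief
    for _ in range(max_rounds):
        failures = validate(draft, checks)
        if not failures:
            break  # every check passed; stop refining
        feedback = "Revise the draft. Fix these issues:\n" + "\n".join(
            f"- {failure}" for failure in failures
        )
        # Refinement, not reconstruction: the brief stays constant,
        # and each round narrows to specific failures.
        draft = call_model(brief + "\n\n" + feedback)
    return draft
```

Note the shape of the loop: the human writes the brief and the checks once, and the back-and-forth that used to be manual micromanagement becomes a bounded refinement cycle.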
If the “capability overhang” idea is correct, it carries an interesting implication: the competitive advantage is not in waiting for smarter models. It is in learning to use today’s models at a higher gear before everyone else does.
A practical experiment
Next time you start a “small” project, try this: write a one-page project brief as if you were briefing a very capable collaborator. Include the constraints, the exact deliverables, and what “done” means. Then ask the model for a single integrated draft — and only afterwards iterate.
The goal isn’t perfection. The goal is to reduce the number of “traffic lights” between idea and output.