The Uncomfortable Truth
AI coding tools are supposed to make you faster. So why did you get slower the first time you tried using one seriously?
You asked it to build a feature. It produced something that looked right. You shipped it. Then you spent two hours debugging an edge case the AI didn't account for. Net result: slower than if you'd written it yourself.
This isn't a bug. It's the learning curve. And almost nobody talks about it honestly.
Here's what I've observed working with engineers at every level: there is a progression to using AI for coding effectively. It's not "install the tool and go faster." It's a skill. Like any skill, it has stages. Each stage initially makes you slower as you learn it. And each stage, once you internalize it, makes you dramatically more effective than the one before.
This article lays out 6 levels of AI-assisted development, from Level 0 (you write everything) to Level 5 (you run parallel AI development streams simultaneously). Each level builds on the previous one. You can't skip ahead without paying the price later.
This is opinionated. This is one path, not the only path. But it's concrete. You can measure yourself against it. You can track your progression. And you can stop wondering whether you're "doing it right."
One prerequisite applies to every level: you must be able to break a large task into smaller subtasks. If someone hands you a feature and you can't decompose it into discrete, implementable pieces, none of these levels will work for you. Task decomposition is the foundation. Everything else builds on top of it.
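To make decomposition concrete, here is a minimal sketch (the feature and subtask names are hypothetical, purely for illustration) of breaking a feature into subtasks with explicit dependencies, then grouping them into waves that could be executed independently:

```python
# A hypothetical feature broken into subtasks, each listing its dependencies.
subtasks = {
    "add db migration":        [],
    "write data-access layer": ["add db migration"],
    "build API endpoint":      ["write data-access layer"],
    "add input validation":    [],
    "write frontend form":     [],
}

def parallel_batches(tasks):
    """Group subtasks into waves where every task's dependencies are already done."""
    done, batches = set(), []
    remaining = dict(tasks)
    while remaining:
        ready = [t for t, deps in remaining.items() if all(d in done for d in deps)]
        if not ready:
            raise ValueError("circular dependency in subtasks")
        batches.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return batches

for i, wave in enumerate(parallel_batches(subtasks), 1):
    print(f"wave {i}: {wave}")
```

Anything in the same wave has no dependency on the rest of that wave, which is exactly the property the higher levels below will exploit.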
Let's start with why this progression is hard in the first place.
Three Metaphors That Explain Why It's Hard
Before we get to the levels, I want to give you three mental models. These explain the specific skills you need to develop and why the transition between levels feels so uncomfortable.
Metaphor 1: From IC to Senior IC
Think about what happens when an engineer transitions from individual contributor to senior IC or manager. As a junior, you write everything yourself. You know every line because you typed every line. Your velocity is your typing speed plus your thinking speed.
Then one day, you're responsible for a team. Now your job is to break tasks down clearly enough that other people can execute them. You review their work. You determine whether their output actually fulfills the requirement, not just whether it compiles. You catch the gap between "what I asked for" and "what I actually needed."
AI puts you in the senior role whether you're ready or not.
The most junior engineer on the team, someone who just started coding six months ago, now needs to: break down tasks clearly, monitor execution, evaluate correctness, and determine if the output fulfills the actual requirement. These are senior engineering skills. And AI demands them from day one.
This is why you're slower at first. You're learning a new job, not just a new tool. That's okay. Stay the course. The AI is already good enough to write your code given a quality plan. The bottleneck is your ability to create that plan and evaluate the result.
Metaphor 2: From Synchronous to Asynchronous Thinking
Remember learning async programming for the first time? Sequential code is easy to reason about. You read it top to bottom. Each line executes after the previous one completes. Simple.
Then someone introduces concurrent execution. Suddenly, independent operations resolve at different times. Race conditions appear. Even basic addition becomes complicated when two threads read, modify, and write the same variable. Your brain hurts for weeks.
But if you figure it out? You manage 100 concurrent operations with the same effort as 1. That's the unlock.
Higher-level AI coding follows the same pattern. At the lower levels, you do one thing at a time. You write a prompt, wait for output, review it, iterate. Sequential. Easy to follow.
At the higher levels, you manage multiple AI coding sessions simultaneously. Each one is working on a different subtask. Each one produces output you need to review. The complexity isn't in the code anymore. It's in the coordination. It's in your ability to context-switch between streams, hold the overall architecture in your head, and merge independent work products into a coherent whole.
Metaphor 3: The Art of Code Review
Senior engineers review more code than they write. This is not an accident. The highest-leverage activity in software development is ensuring that code does what it's supposed to do before it ships.
Reviewing code you didn't write is a distinct skill from writing code. It requires: understanding unfamiliar patterns and conventions line by line, recognizing good practices versus shortcuts, identifying subtle logic issues that tests might miss, and determining whether the code fulfills the TASK. Not just "does it run?" but "does it solve the problem we set out to solve?"
Here's why that matters: AI is biased toward producing changes. Ask it to review a file, and it will find something to change. That's how it's built. But sometimes the correct answer is to write no code at all. Sometimes the existing implementation is fine. Sometimes the task itself needs to be re-scoped. The ability to say "no, this is correct as-is" or "no, we shouldn't build this" is a skill AI won't develop for you.
For junior engineers especially: reviewing AI-generated code IS how you learn the codebase. Do not trust blindly. Read every line. Understand every line. Ask yourself why each decision was made. Yes, this is slower at first. That's the tax. Pay it upfront. It gets faster. And the understanding you build will serve you for the rest of your career.
The 6 Levels
Each level describes: what it is, the typical workflow, the pros, the cons, how to know you're at this level, and signs you're ready to move up.
Level 0: Manual Coder
What it is: You write all your code by hand. AI is only used to explain concepts, clarify documentation, or help you understand error messages. It never touches your codebase directly.
The flow: write the code by hand → hit an error or unfamiliar concept → ask AI to explain it → apply the understanding yourself.
Pros:
- Full control over every line
- You understand everything because you wrote everything
- No risk of AI-introduced bugs you don't understand
Cons:
- Zero speed benefit from AI
- You're doing all the repetitive work yourself
- Boilerplate, scaffolding, and routine patterns eat your time
Signs you're at this level:
- You search the web or documentation for everything
- You only use AI to explain errors or clarify concepts
- You've never pasted AI-generated code into your project
Move up when: You're comfortable with your codebase and your language. You find yourself writing the same patterns repeatedly and want to speed up the routine parts. You trust yourself to review someone else's code.
Level 1: Copilot Mode
What it is: You use AI as an advanced documentation tool and snippet generator. You ask it to draft small, self-contained pieces of code. Functions, boilerplate, utility helpers. You copy the output in, review it, adjust it, and move on. The AI only has local context, whatever you paste into the prompt.
The flow: write a short prompt for a self-contained snippet → copy the output in → review and adjust → move on.
Pros:
- You're still in full control of architecture and flow
- Offloads the tedious, well-defined portions of your work
- Low risk. Small snippets are easy to verify
Cons:
- Speed improvement is modest. Maybe 10-20% faster
- Limited to local context. The AI doesn't know your broader codebase
- You spend time explaining context that the AI can't see
Signs you're at this level:
- You use AI for individual functions, boilerplate, and syntax lookups
- You write the glue code, the architecture, and the integration yourself
- Your prompts are short. "Write a function that takes X and returns Y"
Move up when: You notice that your conversations with AI are getting longer. You're pasting in more and more context. You're explaining your data models, your conventions, your file structure. You're spending more time setting up the conversation than you're saving on the output. That friction is telling you something: you need a workflow where the AI understands more of the picture.
Level 2: Conversation Mode
What it is: You have an extended conversation with the AI until it understands your task, your codebase context, and your constraints. Then you ask it to implement the full subtask. You review the output, iterate through conversation, and refine until it's right.
The flow: explain the task, codebase context, and constraints → iterate until the AI understands → ask for the full implementation → review the output → refine through conversation.
Pros:
- Leverages the full context window. The AI understands your situation
- The conversation itself often clarifies your own thinking
- Can handle more complex, multi-file changes
Cons:
- Hard to review the "plan" before execution. The AI just starts coding
- Naturally single-threaded. One conversation, one task
- Context can drift over long conversations. The AI may forget earlier constraints
- Difficult to reproduce. If the conversation is lost, the reasoning is lost
Signs you're at this level:
- You paste large code chunks and iterate through conversation
- Your conversations run 10-20 messages before you get usable output
- You often say things like "no, I meant..." or "you forgot that we also need..."
- The output quality depends heavily on how well you explain things
Move up when: You find yourself repeating context across conversations. You wish you could capture the "plan" before the AI starts writing code. You want to review the approach, not just the output. You've been burned by the AI going in a direction you didn't want, and you had to start over.
Level 3: Plan Mode
What it is: You use AI to draft an implementation plan before any code is written. The plan describes what files to change, what the approach is, what the key decisions are. You review the plan. Edit it if needed. Save it. Then the AI executes from the plan. You review the resulting code against the plan.
The flow: AI drafts an implementation plan → you review and edit the plan → save it → AI executes from the plan → you review the code against the plan.
Pros:
- The saved plan is a single source of truth. Anyone (or any AI session) could execute from it
- You verify twice: once on the plan, once on the code
- Plans are faster to review than code. You catch directional mistakes early
- If implementation goes wrong, you revise the plan and re-execute. You don't start from scratch
Cons:
- Verification still relies entirely on code review and manual testing
- No automated way to confirm the code actually works
- Plan quality varies. You need to know what a good plan looks like
Signs you're at this level:
- You write (or have AI write) implementation plans before coding starts
- You treat the plan as a reviewable document, not just a mental note
- You've caught significant directional errors at the plan stage that saved you hours
- Your plans specify file changes, approach, and key decisions
Move up when: You want automated verification that the code works, not just that it "looks correct." You've been in situations where the code looked right in review but had subtle bugs. You want a safety net beyond your own eyes.
Level 4: Test-Driven AI Development
What it is: Before implementation, you have AI draft tests that define correct behavior. Then you draft the implementation plan. Review the plan. Execute. Run the tests. Fix what fails. Then do a final code review. You now have three layers of verification: the plan, the tests, and your code review.
The flow: AI drafts tests → you review them → AI drafts the plan → you review it → execute → run the tests → fix failures → final code review.
Pros:
- Three verification layers. Plan review catches directional errors. Tests catch functional errors. Code review catches quality and design issues
- Tests give you confidence to accept AI output faster. If it passes, the bar is already high
- Enables automation. You can script the "implement, test, fix" loop
- Tests become documentation of expected behavior
Cons:
- Still coding one thing at a time. Sequential workflow
- Writing good tests requires knowing what to test. Bad tests give false confidence
- Initial setup takes longer. You're writing tests before you write code
- Some changes (UI, infrastructure, configuration) are hard to test this way
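As a toy illustration of the test-first step (slugify is a hypothetical function invented for this sketch, not anything from a real codebase): the tests are written and reviewed before any implementation exists, and the implementation, whether you write it or an AI does, is then judged against them:

```python
import re

# Step 1: tests drafted first. They pin down the behavior we want
# before a single line of implementation exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-slugged  ") == "already-slugged"
    assert slugify("C++ & Rust!") == "c-rust"

# Step 2: implementation written (or AI-generated) against those tests.
def slugify(title: str) -> str:
    # Lowercase, collapse every run of non-alphanumerics into one hyphen,
    # and trim hyphens from the ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

test_slugify()
print("all tests passed")
```

If the tests pass, the bar for accepting the output is already high; if they fail, the failure points at the plan, not at your patience.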
Signs you're at this level:
- Your workflow routinely includes writing tests before implementation
- You use test results, not just visual inspection, to validate AI output
- You've experienced the confidence boost of seeing tests pass on AI-generated code
- When tests fail, you update the plan and re-execute rather than manually patching
Move up when: You're comfortable running Level 4 on individual subtasks and it feels routine. You look at your task list and see 3-4 independent subtasks that don't depend on each other. You think: "Why am I doing these one at a time?"
Level 5: Mastery (Parallel AI Development)
What it is: You execute multiple non-dependent subtasks simultaneously, each running the Level 4 workflow. You manage multiple AI coding sessions at once. Each session has its own plan, its own tests, its own implementation. Optionally, you use a separate code review pass (automated or manual) to review output across all streams, identify integration issues, and draft fix plans for your approval.
The flow:
Stream A: tests → plan → implement → test → review
Stream B: tests → plan → implement → test → review
Stream C: tests → plan → implement → test → review
Pros:
- Multiplicative output. 3 parallel streams means roughly 3x the throughput
- Full-scale AI development. This is where the "10x" promise actually materializes
- Each stream has its own verification (plan + tests + review), so quality doesn't degrade
- Integration issues surface during the merge/review phase, not in production
Cons:
- High cognitive overhead. You're holding multiple contexts simultaneously
- Easy to skip steps when you're juggling streams. Discipline matters more here than anywhere
- Requires strong task decomposition. If subtasks aren't truly independent, parallel execution creates merge conflicts and integration bugs
- The hardest level to do well. Most engineers overestimate their readiness
Signs you're at this level:
- You regularly run multiple AI coding sessions simultaneously
- You've developed a system for tracking which stream is at which stage
- You use automated review to catch cross-stream issues
- Your task decomposition explicitly identifies dependencies and parallel opportunities
- You can context-switch between streams without losing the thread
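The coordination pattern is the async metaphor from earlier made literal. A minimal sketch, with asyncio coroutines standing in for independent AI sessions (the stream names and timings are illustrative only):

```python
import asyncio

async def run_stream(name: str) -> str:
    # Each await stands in for waiting on an AI session to finish a stage.
    for step in ["tests", "plan", "implement", "test", "review"]:
        await asyncio.sleep(0.01)
        print(f"stream {name}: {step} done")
    return name

async def main() -> list:
    # Three independent subtasks progress concurrently;
    # you review and merge as each one completes.
    return await asyncio.gather(run_stream("A"), run_stream("B"), run_stream("C"))

completed = asyncio.run(main())
print(f"merged streams: {completed}")
```

The code is trivial on purpose: the hard part at Level 5 is not the mechanism but your attention, the thing the coroutines here get for free.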
A note on Level 5: This is not the default. Most engineers don't need to operate here most of the time. It's the ceiling, not the floor. The real value of this framework is helping you progress from wherever you are now to the next level, at whatever pace makes sense for your work.
A Word of Warning
Moving up levels too fast without proper verification isn't just risky. It can be catastrophic.
Here's the danger most people don't talk about: when you skip verification steps, the problems don't always show up immediately. AI-generated code can look correct, pass a quick glance, and even work in testing. But underneath, it may contain subtle issues. Hardcoded assumptions that break when data changes. Race conditions that only surface under load. Security gaps that no one notices until they're exploited. Logic that works for the common case but silently corrupts data in edge cases.
These aren't the kind of bugs that crash your app and get caught on day one. These are the kind that go undetected for weeks or months. They accumulate. They create maintenance nightmares. They cause the kind of production incidents where you're reading code at 2 AM thinking "who wrote this and why does it work this way?" and the answer is "an AI did, and nobody reviewed it properly."
The worst part: the faster you move, the harder these issues are to trace back to their source. By the time you discover the bug, 50 other changes have been built on top of it.
This is why each level emphasizes verification. The verification IS the skill. The AI writing code is the easy part. Knowing whether the code is correct, maintainable, and actually solves the problem is the hard part. That's your job. Don't outsource it.
How to Measure Yourself
Self-assessment is the only assessment that matters here. Nobody is grading you. The goal is honest awareness of where you are and deliberate practice toward where you want to be.
Track Weekly
Ask yourself these questions at the end of each week:
- What level am I operating at most of the time? Not your best moment. Your default mode.
- How often do I catch AI errors before they hit production? If the answer is "rarely" and you're at Level 2+, that's a problem.
- How much time do I spend on breakdown vs. coding vs. review? As you level up, more time shifts to breakdown and review. Less time on writing code directly. That's correct.
- Am I faster this week at the same level than I was last week? Progress within a level matters as much as moving between levels.
Signs You're Ready to Level Up
- You feel bored or constrained at your current level. The workflow feels routine, not challenging.
- Your review catches fewer errors. AI output consistently matches your expectations.
- You can articulate WHY the AI's approach is correct. Not just "it looks right" but "this is correct because..."
Signs You Leveled Up Too Fast
- You're trusting AI output without reading it carefully. You merge before you understand.
- Bugs are making it to production that you should have caught in review.
- When someone asks what the code does, you can't explain it confidently.
- You feel out of control. Like the AI is driving and you're just watching.
If any of these resonate, drop back a level. There's no shame in it. The goal is sustainable effectiveness, not speed at the cost of quality.
The Honest Timeline
Most engineers spend months at Levels 1 through 3. This is normal. These levels are where you build the review skills, the decomposition skills, and the judgment that make higher levels possible.
Level 4 takes deliberate practice. Writing tests first is a discipline, not a natural instinct. It changes how you think about requirements.
Level 5 is where senior engineers with strong fundamentals operate. If you're early in your career, Level 5 might be a year or more away. That's fine. A strong Level 3 engineer is more effective than a sloppy Level 5 engineer every single time.
No shame in being at any level. The goal is consistent, measured progression.
The Bigger Picture
Here's something I've seen repeatedly: engineers who focus exclusively on AI coding speed without understanding the systems they're building end up going very fast in the wrong direction.
Getting to Level 5 without understanding system design is like managing 10 junior engineers who are all building the wrong thing quickly. Output goes up. Value doesn't.
The engineers who thrive aren't the fastest coders. They're the ones who understand how systems fit together. They know why you'd choose a queue over direct API calls. They understand why a cache improves read performance but introduces consistency challenges. They think in building blocks, patterns, and tradeoffs.
AI coding proficiency is a skill multiplier, not a replacement for understanding. It amplifies whatever you already know. If you understand system design deeply, AI makes you extraordinary. If you don't, AI makes you extraordinarily fast at creating technical debt.
If you want to build that system design foundation, the building blocks framework gives you a concrete, repeatable way to think about any system. Seven building blocks. Three external entities. Every system you've ever used, decomposed into patterns you can learn and apply.
The levels in this article give you the AI coding side. The building blocks give you the system design side. Together, they make you the engineer every team wants.