You throw a feature at Claude or your favorite AI. It runs for a minute. You get back 200 lines of code that doesn't match what you actually needed. Now you're rewriting it yourself, which defeats the whole purpose.

The problem isn't the AI. It's that you're not actually telling it what to do before it does it. Most people skip planning, skip course-correcting, and skip checking the work. Then they wonder why AI feels slower than just coding it themselves.

I spent way too long doing that. Then I switched to a 3-step system that actually saves time: plan what you're building, let the AI build it while you steer, then review the output before shipping. It sounds simple because it is. But it changes everything.

Before we start, full disclosure: I'll be mentioning "Greptile," and I have an affiliate link you can use to support me. There are alternatives like "CodeRabbit" and others if you'd rather use something else.

Planning: Tell AI What Success Looks Like

This is where you win or lose.

You don't give AI vague instructions. You give it everything it needs to understand the problem. For me, that usually means dumping the actual docs or code I'm working with straight into the prompt.

Here's how I do it:

Go to the docs or repo you're building from. Copy the relevant section (markdown works great). Paste it into your AI chat along with what you actually want to build. Then ask it to make a plan, not the code yet.

Example: I wanted to add authentication to a project using Better Auth. Instead of saying "add auth," I went to https://www.better-auth.com/docs/installation, grabbed the installation section in markdown, pasted it, and asked Claude: "Here's the Better Auth setup docs. I need to add this to my Next.js app. Make a plan for what we'd do step by step. Don't code yet, just tell me the plan."
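The planning prompt doesn't need to be fancy. Here's roughly the shape I use (the bracketed parts are placeholders you fill in with your own docs and goal):

```
Here are the docs for [library], pasted below in markdown.
I want to [goal] in my [framework] app.

Don't write code yet. Give me a step-by-step plan first,
including any env vars, config files, and migrations we'll need.
I'll review the plan before we start.

--- docs ---
[pasted markdown]
```

The "don't write code yet" line is the important part. Without it, most models skip straight to generating files.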

Claude came back with a 5-step breakdown:

  1. Install the dependencies

  2. Set up the environment variables

  3. Create the config file

  4. Add the API routes

  5. Wire it into the frontend
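For context, the config file from step 3 ends up looking something like this. This is a sketch based on Better Auth's documented `betterAuth()` setup, not my exact file; your database adapter and options will differ:

```typescript
// lib/auth.ts -- minimal Better Auth server config (sketch)
import { betterAuth } from "better-auth";

export const auth = betterAuth({
  // database: yourAdapter(...), // e.g. a Drizzle or Prisma adapter
  emailAndPassword: {
    enabled: true, // basic email + password sign-in
  },
});
```

Having the plan first means that when Claude generates a file like this, you already know which step it belongs to and what should come next.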

I read through it, suggested adding a database migration step, and said "okay, let's do it." Then we moved to the next phase.

This takes 3 minutes and saves you 30 minutes of rework later. You're not guessing anymore. You both know what done looks like.

Doing: Let It Build, But Stay Involved

Now you actually build the thing.

Tell your AI to start on step 1. It writes code. You read it. If something's off, you say so right then. You don't wait until it's all done.

This is not "fire and forget." You're in the chat, commenting on each piece as it comes out.

With the auth setup, I had Claude start with the dependencies and config. It created the files correctly. Then I said "add support for GitHub login in this step" before it moved on. It adjusted. We kept going.
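The GitHub login change was a small config addition. In Better Auth that goes under `socialProviders`; here's a sketch (the env var names are my placeholders, use whatever you've set):

```typescript
// lib/auth.ts -- adding GitHub OAuth alongside email/password (sketch)
import { betterAuth } from "better-auth";

export const auth = betterAuth({
  emailAndPassword: { enabled: true },
  socialProviders: {
    github: {
      // placeholder env var names -- match your own .env
      clientId: process.env.GITHUB_CLIENT_ID as string,
      clientSecret: process.env.GITHUB_CLIENT_SECRET as string,
    },
  },
});
```

Catching this mid-build meant one small edit instead of retrofitting OAuth after everything else was wired up.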

The AI knows instantly what you want instead of building the whole thing wrong and having to backtrack. You stay in control. It's faster because you're not fighting it, you're collaborating.

Reviewing: Don't Ship Garbage

This is the part people skip and regret.

Once the code is done, don't just copy it into your app. Actually read it. Check for security issues. Look for bugs. See if it matches the plan you both agreed on.

I use an AI code reviewer to help with this. Greptile reviews code against your codebase, catches style issues, and finds bugs. You paste the code in, it checks it, and you get back a report. Then you decide if it's ready or needs changes. (There are alternatives to Greptile, like CodeRabbit, but Greptile reports the highest accuracy in its open-source benchmarks.)

Here's a free option: take the code block, ask Claude "what could go wrong with this code? what would break it?" and let it audit itself. You'd be surprised what it catches when you ask directly.
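The self-audit prompt can be as blunt as this (adapt it to whatever you're reviewing):

```
Here's the auth code we just wrote. Act as a skeptical reviewer:
- What could go wrong with this code? What inputs would break it?
- Any security issues (secrets in code, token handling, sessions)?
- Does it match the plan we agreed on? List anything missing.
Don't fix anything yet -- just list the problems.
```

Asking for problems only, without fixes, keeps the model in critic mode instead of letting it paper over issues with quick patches.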

For my auth setup, I ran the generated code through a quick review. Greptile caught that I wasn't handling the database connection properly. I asked Claude to fix it. Took two minutes. Now the code actually works.

Real Example: Setting Up Better Auth

Here's the actual workflow, step by step:

Step 1: Planning

  • I grabbed the Better Auth docs

  • Pasted them into Claude with "add auth to my Next.js project"

  • Got back a plan

  • Added "also handle logout" as a requirement

  • We agreed on the approach

Step 2: Doing

  • Claude started with environment variables

  • I reviewed them, they looked right

  • It moved to the config file

  • I asked it to add logging for debugging

  • It updated the approach mid-stream

  • Continued through the API routes and frontend wiring
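For the API route step, Better Auth's Next.js integration uses a catch-all route handler. Mine looked roughly like this; it's a sketch following the documented `toNextJsHandler` pattern and the App Router convention, with the import path depending on where your config lives:

```typescript
// app/api/auth/[...all]/route.ts -- mounts Better Auth's endpoints (sketch)
import { toNextJsHandler } from "better-auth/next-js";
import { auth } from "@/lib/auth"; // wherever your betterAuth() config lives

// Better Auth handles sign-in, sign-out, callbacks, etc. under /api/auth/*
export const { GET, POST } = toNextJsHandler(auth);
```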

Step 3: Reviewing

  • Pasted the final auth service code into an AI code reviewer

  • It flagged a missing error handler

  • Sent the code back to Claude

  • Got a fixed version

  • Done. Ship it.

Total time: 25 minutes. If I'd done it without the plan, I'd have thrown away 15-20 minutes on fixes.

When It Breaks

AI makes a plan you don't actually agree with: Read it carefully. If it's missing something, say so. This is the time to course-correct. If you approve a bad plan, the whole thing's bad.

AI goes off track mid-build: Jump in immediately. Don't wait. Say "that's not matching the plan" or "I need this different." It adjusts. Move on.

The code looks fine but doesn't work: Usually a config issue or a missing step. Ask Claude to walk you through what it built and why. Ask it to audit itself. Most of the time there's a gap you both missed.

You don't have time to review: Skip the review and you'll spend 10x the time fixing bugs later. Just do it. It's 5 minutes.

AI gets confused about your codebase: Give it more context. Show it similar code you already have. Tell it the pattern you use. It's not the AI being stupid, it's not enough information.

Why This Actually Works

This isn't revolutionary. It's just actually using the tool right.

Most people treat AI like a search engine: ask once, take the answer, move on. That doesn't work for building stuff. Building needs back-and-forth. You need to steer. The AI needs to know when it's wrong.

This 3-step system gives you that. You plan together. You build together. You review before shipping. It takes about as long as explaining the task to a person would, but the person doesn't get tired and the output is code.

Compare that to the old way: write a vague prompt, get mediocre code, spend an hour fixing it, hate AI for wasting your time. This saves you money and your sanity.

Cost-wise, Claude Sonnet is like $3/day if you're prompting well. Better Auth is free. Greptile's reviewer is free tier or $15/month for more reviews. Total: almost nothing. Time saved: 10-15 hours a month if you're building regularly. (You can also get a GLM coding plan, which performs about the same as Sonnet 4 but is way cheaper.)

Next Steps

  1. Pick something small to build: a feature, an API endpoint, a script. Something that'd take 30 minutes manually.

  2. Find the docs or reference code and dump them into your chat (markdown or URL).

  3. Ask for a plan, not code. Read it, change it, agree on it.

  4. Let the AI build step-by-step while you're there commenting.

  5. Review the code before using it. Use Greptile or just ask Claude to audit itself. (Be careful with this: in my experience Claude tends to be lazy when reviewing its own work, so don't review in the same session, use a separate one.)

Do this once and you'll see the difference. It feels slower at first because you're being intentional. But you'll ship better code faster, and you'll actually know what the AI built and why.

Try it this week. Let me know how it goes. (You can DM me on X at @leoisadev and I'll help you with prompts and other things so you can be really efficient with AI.)