I’ve been using AI coding assistants for 18 months now. Here’s what’s real and what’s theater.
When GitHub Copilot first landed in my editor, I was skeptical. Another “productivity tool” that would slow me down with bad suggestions? But after shipping hundreds of features in our SaaS transportation platform with AI assistance, I’ve formed strong opinions about where AI actually shines—and where it absolutely fails.
This is not a promotional piece. This is honest.
What AI Is Genuinely Good At
1. Boilerplate and Repetitive Code
If you’re writing a CRUD endpoint for the nth time, AI will save you 15 minutes per endpoint. It doesn’t eliminate the work, but it removes the tedium.
Example: Entity Framework DbSet operations
I started typing a repository method signature, and Copilot finished the standard DbSet query and save operations almost exactly as I would have written them.
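A minimal sketch of the kind of completion I mean. The `Shipment` entity, `AppDbContext`, and repository here are invented for illustration, not our actual code:

```csharp
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context, for illustration only.
public class Shipment
{
    public int Id { get; set; }
    public string Status { get; set; } = "Pending";
}

public class AppDbContext : DbContext
{
    public DbSet<Shipment> Shipments => Set<Shipment>();
}

public class ShipmentRepository
{
    private readonly AppDbContext _db;
    public ShipmentRepository(AppDbContext db) => _db = db;

    // I typed the signatures; Copilot filled in bodies like these.
    public Task<Shipment?> GetByIdAsync(int id) =>
        _db.Shipments.FirstOrDefaultAsync(s => s.Id == id);

    public async Task AddAsync(Shipment shipment)
    {
        _db.Shipments.Add(shipment);
        await _db.SaveChangesAsync();
    }
}
```

The value isn't cleverness; it's that the completion matches the boring shape you would have typed anyway.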
Verdict: Saved me 5 minutes. I still reviewed every line.
2. Test Case Scaffolding
Writing test setup code is mind-numbing. AI is excellent at it.
AI will complete the test methods, mock setup, and assertion patterns. Is it perfect? No. Do I have to fix things? Yes. But it’s a 40% time savings.
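As an illustration, given a skeleton like the one below, Copilot reliably scaffolds the arrange/act/assert bodies and further cases. The `PricingCalculator`, `IRateService`, and the Moq/xUnit setup are hypothetical stand-ins:

```csharp
using Moq;
using Xunit;

// Hypothetical service under test.
public interface IRateService { decimal GetRate(string lane); }

public class PricingCalculator
{
    private readonly IRateService _rates;
    public PricingCalculator(IRateService rates) => _rates = rates;
    public decimal Price(string lane, int miles) => _rates.GetRate(lane) * miles;
}

public class PricingCalculatorTests
{
    private readonly Mock<IRateService> _rates = new();

    [Fact]
    public void Price_MultipliesRateByMiles()
    {
        // Copilot typically scaffolds this arrange/act/assert shape.
        _rates.Setup(r => r.GetRate("CHI-DAL")).Returns(2.5m);
        var sut = new PricingCalculator(_rates.Object);

        Assert.Equal(250m, sut.Price("CHI-DAL", 100));
    }
}
```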
3. Documentation and Comments
Writing XML docs for 50 methods is torture. AI does this remarkably well.
The AI’s context awareness of your codebase is strong here.
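A sketch of the sort of `<summary>` block Copilot generates from a signature alone; the method here is hypothetical:

```csharp
/// <summary>
/// Calculates the estimated delivery window for a shipment,
/// accounting for carrier transit time and warehouse cut-off.
/// </summary>
/// <param name="shipmentId">The identifier of the shipment to estimate.</param>
/// <param name="requestedDate">The customer's requested delivery date.</param>
/// <returns>The earliest and latest expected delivery dates.</returns>
public (DateTime Earliest, DateTime Latest) EstimateDeliveryWindow(
    int shipmentId, DateTime requestedDate)
{
    // ...
}
```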
Where AI Falls Apart
1. Understanding Your Custom Patterns
This is the big one. Our transportation domain has custom patterns that Copilot doesn’t understand.
We have a custom idempotency pattern for OrderId generation.
When I asked Copilot to “implement an idempotent command handler,” it wrote a textbook check-the-cache, run-the-handler, cache-the-result implementation.
That’s fine, but it missed our specific requirements:
- We use the combined hash of IdempotencyKey + OrderId as the cache key
- We have a fallback to database lookup if cache misses
- We use a TTL that’s business-configurable, not hardcoded
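To make the gap concrete, here is a sketch of a handler that satisfies those three requirements. Every type and member name below is hypothetical, not our production code, and the `Hash` and `CreateOrderAsync` helpers are elided:

```csharp
public class CreateOrderHandler
{
    private readonly IDistributedCacheFacade _cache;   // hypothetical
    private readonly IOrderStore _orderStore;          // hypothetical
    private readonly IdempotencyOptions _options;      // bound from config

    public async Task<OrderResult> HandleAsync(CreateOrderCommand cmd)
    {
        // Requirement 1: the cache key is a combined hash of both values,
        // not the IdempotencyKey alone (which is what Copilot keyed on).
        var cacheKey = Hash(cmd.IdempotencyKey, cmd.OrderId);

        var cached = await _cache.GetAsync<OrderResult>(cacheKey);
        if (cached is not null) return cached;

        // Requirement 2: fall back to the database on a cache miss,
        // because entries can expire inside the idempotency window.
        var persisted = await _orderStore.FindByIdempotencyHashAsync(cacheKey);
        if (persisted is not null) return persisted;

        var result = await CreateOrderAsync(cmd);

        // Requirement 3: the TTL is business-configurable, not hardcoded.
        await _cache.SetAsync(cacheKey, result, _options.IdempotencyTtl);
        return result;
    }
}
```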
The lesson: AI generates “general” solutions. Your code often needs “specific” solutions. You can’t just copy-paste.
2. Complex Multi-Service Flows
When I described a delivery booking flow across 5 services, Copilot generated code that:
- Didn’t handle partial failures correctly
- Ignored the compensating transaction pattern
- Hardcoded timeouts
- Had no retry logic
I had to completely rewrite it. It saved me nothing—it cost me time reviewing obviously wrong code.
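For contrast, a minimal sketch of the compensating-transaction shape the generated code should have had. The service names and their APIs are invented, and retry and timeout policy are still omitted for brevity:

```csharp
public async Task BookDeliveryAsync(BookingRequest request)
{
    // Record a compensation for each completed step so a failure
    // mid-flow can unwind the earlier services.
    var compensations = new Stack<Func<Task>>();
    try
    {
        var order = await _orders.CreateAsync(request);
        compensations.Push(() => _orders.CancelAsync(order.Id));

        var truck = await _fleet.ReserveTruckAsync(order);
        compensations.Push(() => _fleet.ReleaseTruckAsync(truck.Id));

        await _billing.ChargeAsync(order);
    }
    catch
    {
        // Unwind in reverse order: the part the generated code skipped.
        while (compensations.Count > 0)
            await compensations.Pop()();
        throw;
    }
}
```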
3. Performance-Critical Code
Copilot has no concept of your performance constraints. A routing algorithm that works fine for 10 shipments but, at O(n²), falls over at 10,000? Copilot will generate it confidently.
We have a nearby-truck lookup that scans the full list of active trucks and computes the distance to each.
This is O(n) and runs on every request. The correct solution uses an R-tree spatial index. Copilot didn’t suggest it because it had no context that this list grows to 50k+ records daily.
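The shape of that lookup, roughly. The types and the `Haversine` helper are invented for illustration:

```csharp
// O(n): every request walks the full list of active trucks.
public Truck? FindNearestTruck(GeoPoint pickup)
{
    Truck? nearest = null;
    var best = double.MaxValue;
    foreach (var truck in _activeTrucks) // grows to 50k+ records daily
    {
        var d = Haversine(pickup, truck.Location); // hypothetical helper
        if (d < best) { best = d; nearest = truck; }
    }
    return nearest;
}
```

An R-tree answers the same query in roughly O(log n) by discarding whole regions of the map at each step instead of touching every record.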
4. Architecture Decisions
“Should we use Service Bus or Event Grid for this event?” I asked.
Copilot gave me a generic answer that was technically correct but didn’t account for:
- Our need for message ordering (Service Bus)
- Our high volume (Event Grid would be cheaper)
- Our retry requirements (both work, but differently)
It did not know our constraints. An architect still had to decide.
The Honest Productivity Metrics
Over 18 months working with AI on our platform:
| Task | Time Saved | Quality Issue Rate |
|---|---|---|
| Boilerplate CRUD | ~40% | 5% |
| Test scaffolding | ~35% | 12% |
| Documentation | ~60% | 2% |
| Routing algorithms | ~10% | 45% |
| Configuration classes | ~45% | 8% |
| Business logic | ~5% | 60% |
| Database migrations | ~50% | 8% |
| API endpoint structure | ~30% | 15% |
Average across the project: ~28% time savings, but only when used in the right context.
How to Use AI Coding Tools Effectively
**Use it for what it’s good at.** Write the hard 20% yourself. Let AI handle the mechanical 80%.
**Maintain healthy skepticism.** If AI writes something you don’t fully understand, don’t ship it. Rewrite it.
**Give it context.** Comment your patterns. The more context files your AI tool has access to, the better it understands your domain.
**Don’t outsource thinking.** Use AI to accelerate thinking, not replace it.
**Review ruthlessly.** Every line of AI-generated code should be reviewed like a junior developer wrote it. Because in some ways, it did.
**Know its blind spots.** Performance, security, and domain-specific logic are where AI struggles most. Pay extra attention here.
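On “give it context,” one concrete illustration: stating the house pattern in a comment before you start typing tends to steer completions toward it. The pattern and class below are hypothetical:

```csharp
// House pattern: value objects validate in the constructor and are immutable.
// Writing this comment first gives the assistant a template to imitate.
public sealed class LaneCode
{
    public string Value { get; }

    public LaneCode(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Lane code is required.", nameof(value));
        Value = value.Trim().ToUpperInvariant();
    }
}
```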
The Real Question
“Does AI make you a better developer faster?”
Yes, but only if you’re already a good developer. AI amplifies your ability to execute your ideas. It doesn’t help you come up with the ideas. If you don’t understand distributed systems, no amount of Copilot will help you design one.
In our platform, AI has been a force multiplier. We’ve shipped 3x more features per quarter than we would have otherwise. But every complex decision—architecture, performance, reliability—still required human engineers. And that’s how it should be.
The hype says AI is replacing developers. The reality? AI is replacing boring tasks, so we can focus on the interesting ones.
Use it. But stay skeptical. And always, always understand what your tools write.
