AI Can Build It. But Can You Trust It?

I’ve used AI to build a spam monitoring system for a WordPress multisite, a booking system for sports pitches and courts, and a handful of custom plugins. It handled all of it impressively. Fast, functional, and far quicker than writing everything from scratch.

I also recently rebuilt a landing page from the ground up for a client who had designed it entirely with AI. They loved how it looked. The problem? They couldn’t get Google Analytics on it. They couldn’t connect it to their CRM. Every time they tried to extend it, the whole thing started to unravel. So they came to me.

That’s the gap nobody talks about.

The 80/20 problem

A developer I spoke to recently put it simply: AI can do the first 80% really well. It’s that last 20% where things get difficult. The security hardening, the complex workflows, the integrations that require a real understanding of how systems talk to each other.

And the frustrating thing is that 80% looks finished. It looks good. A client sees a working plugin or a polished landing page and reasonably assumes the job is done. But if nobody with development experience has reviewed it, there’s no way of knowing what’s sitting underneath.

I know one business that is currently paying a developer to fix systems that AI built for them. The original builds looked great. The problems only surfaced later. Edge cases, permissions logic, functionality that worked in isolation but broke when it had to connect to something else.

That’s not a reason to avoid AI. It’s a reason to understand what you’re working with.

What AI tends to miss

AI writes code that solves the problem as stated. It doesn’t always think about the problem you didn’t know to state.

Input validation. Exposed endpoints. User permissions that feel logical on the surface but create real vulnerabilities in practice. These aren’t exotic concerns. They’re exactly the kind of thing a developer checks for automatically, and exactly the kind of thing that won’t show up until something goes wrong.
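To make the permissions point concrete, here is a toy sketch in plain Python (no framework, and not from any of the projects mentioned above; the names are invented for illustration). The first check is the kind AI often produces: it feels logical, it runs, and it is wrong.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str  # e.g. "admin", "editor", "subscriber"

def can_delete_post_unsafe(current_user, post_owner_id):
    # The surface-level logic: "is someone logged in?"
    # Looks reasonable, but any authenticated user can delete any post.
    return current_user is not None

def can_delete_post_safe(current_user, post_owner_id):
    # The check a reviewing developer adds:
    # logged in AND (owns the post OR is an admin).
    if current_user is None:
        return False
    return current_user.role == "admin" or current_user.id == post_owner_id

subscriber = User(id=7, role="subscriber")

# The unsafe version lets a subscriber delete someone else's post:
print(can_delete_post_unsafe(subscriber, post_owner_id=1))  # True (vulnerable)
print(can_delete_post_safe(subscriber, post_owner_id=1))    # False (blocked)
```

Both versions pass the obvious manual test ("logged-in user can delete their post"), which is exactly why the difference never shows up in a demo.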

Security isn’t visible. That’s what makes it easy to miss and expensive to fix.

The same goes for maintainability. AI can produce code that works perfectly today and becomes a headache the moment you need to change something, extend it, or hand it to someone else. Without a developer reviewing structure and logic, you often don’t find out until you’re already in the weeds.

The experience gap

Here’s the thing I keep coming back to: you need some development knowledge to evaluate what AI has built. Not necessarily to build it yourself, but enough to ask the right questions, spot the warning signs, and know when something needs a second look.

Without that, you’re trusting output you can’t assess. And AI is confident. It doesn’t flag uncertainty the way a cautious developer might. It gives you something that works, compiles, and looks right. Whether it’s actually right is a different question.
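A tiny, invented example of the "runs without errors but isn't right" gap: a pagination helper that works for every test you're likely to try first, and silently drops the last page whenever the item count isn't an exact multiple.

```python
import math

def total_pages_plausible(item_count, per_page):
    # Runs fine, looks right, passes the obvious test (10 items / 5 per page = 2).
    # But it drops the final partial page: 11 items / 5 per page gives 2, not 3.
    return item_count // per_page

def total_pages_correct(item_count, per_page):
    # Round up so a partial page still counts.
    return math.ceil(item_count / per_page)

print(total_pages_plausible(10, 5))  # 2 - agrees with the correct version
print(total_pages_plausible(11, 5))  # 2 - wrong: the 11th item is unreachable
print(total_pages_correct(11, 5))    # 3
```

Nothing errors, nothing crashes, and the bug only matters once real data arrives, which is exactly the shape of problem a review catches and a demo doesn't.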

Used alongside real development experience, AI is genuinely powerful. It speeds up the straightforward parts of a project, handles repetitive tasks well, and can get something functional in front of a client faster than before. I use it regularly and I don’t plan to stop.

But I also know what to look for. I know where to push back. I know that “it runs without errors” and “it’s ready to go live” are not the same thing.

Ignoring it isn’t the answer either

I want to be clear: this isn’t a case against using AI. If you’re a developer, an agency, or a business that builds or commissions digital products and you’re not using AI at all, you’re making life unnecessarily hard for yourself. The productivity gains are real.

But so are the risks, and they tend to be invisible until they’re not.

The smart position isn’t to avoid AI. It’s to use it with clear eyes. Know what it’s good at. Know where it needs checking. Build in the review time. Don’t assume that because it produced something quickly, that something is done.

The 80% is impressive. The 20% is still your responsibility.
