To fail is human. But with AI, you’ll fail faster and at scale

The Mirage of Magic: Why Speed Alone Isn’t Innovation

Every now and then, the tech industry falls in love with a shortcut. Remember CASE tools? Or that time everyone thought UML diagrams would generate enterprise-grade software? Now it’s low-code, no-code, and AI-generated source code that promise to make everyone a developer and every idea instantly executable.

The new narrative: Just describe what you want, and AI will build it. No more messy architecture discussions, no backlog grooming, no product specs. Speed, simplicity, and infinite scalability. A revolution!

But here’s the uncomfortable truth: we’ve seen this movie before, and we know how it ends.

It ends in broken systems, skyrocketing maintenance costs, codebases no one wants to touch, and management asking why their AI doesn’t seem as helpful after six months of accumulating quick wins.

AI Ignorance Instead of Agility

Agile was never about moving fast in the absence of structure. It’s about learning fast because you have structure. Agile assumes:

  • You start with a clear product vision and break it down gradually
  • You write code iteratively, improving and refining as you go
  • You clean up as you build. Not later. Now
  • You bring people from different functions together to challenge, question, and correct

AI doesn’t do any of that. It doesn’t argue with you when you propose something vague. More importantly, it ignores decades of proven software development methods — from structured requirements engineering to iterative, test-driven development and architectural planning. These aren’t optional rituals; they’re hard-earned practices that emerged from countless failed projects. AI doesn’t care. It just spits out code, often with no awareness of whether it aligns with your system’s long-term goals or basic software engineering principles. It doesn’t say, “This requirement makes no sense.” It doesn’t advocate for testability or maintainability. It just gives you code.

Which, ironically, is what makes it dangerous. Because once code exists, people assume it’s ready.

From Waterfall to Agile to “Promptfall”

AI tooling ignores a lesson the industry painfully learned decades ago: no tool, however powerful, can turn poor inputs into good outputs.

In the waterfall model, requirements had to be painfully specific. Every edge case, exception, and user flow had to be documented up front. And still, the results were often disappointing.

Agile taught us to accept imperfect requirements up front and instead invest in iteration, collaboration, and ongoing clarification.

Now AI promises to do away with all that. But unless you give it a 60-page functional spec (spoiler: you won’t), it will misinterpret, overbuild, and underdeliver. And no, rewriting the prompt ten times won’t fix that. Because the problem isn’t the prompt. It’s the absence of a thinking, structured, critical dialogue.

“The Car Drives Itself” — Until You Hit the Roundabout

Let’s take a step out of code. Imagine self-driving cars.

On a highway? Works pretty well. The environment is predictable. The rules are clear. It’s boring.

Now imagine driving around the Arc de Triomphe in Paris. Twelve roads converge. No lanes. No signals. No predictability.

That’s product development.

AI in software development is like self-driving in the roundabout. It will happily plow forward based on an optimistic assumption that “everything is fine.” But without a human who can intervene, guide, replan, or just say “Nope, this doesn’t make sense,” things will go wrong. Fast.

And that’s the real risk: AI encourages a false sense of confidence and control. It looks clean, sounds smart, and outputs code faster than any human. But ask it to consider context, negotiate trade-offs, or validate outcomes across multiple teams and users? Silence.

AI Can Be Useful. Just Not How You Think

So here’s the twist: AI isn’t the villain. It’s the misunderstood sidekick. Used well, it can amplify a team’s speed and confidence:

  • Drafting boilerplate code or tests
  • Generating suggestions for refactoring
  • Highlighting edge cases you might have missed
  • Supporting documentation or translation tasks

But it needs context, constraints, and continuous review. The magic isn’t in replacing your team. It’s in freeing them to focus on what matters — defining the right problems, thinking in systems, and making real-world trade-offs.
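
To make that concrete, here is a small, purely illustrative sketch of the "boilerplate tests" case: an AI-drafted happy-path test skeleton for a made-up calculate_discount function, plus the edge case a reviewing human still has to insist on. The function, its tiers, and its rules are all assumptions invented for this example, not a recommendation of any particular tool or workflow.

    # Purely illustrative: the function name, tiers, and discount rules
    # below are invented for this example, not taken from any real system.
    def calculate_discount(price: float, customer_tier: str) -> float:
        """Toy stand-in for real domain logic."""
        rates = {"standard": 0.00, "gold": 0.10, "platinum": 0.20}
        if price < 0:
            raise ValueError("price must be non-negative")
        return round(price * (1 - rates.get(customer_tier, 0.0)), 2)

    # AI-drafted boilerplate: the happy paths a prompt is likely to cover.
    def test_gold_customer_gets_ten_percent_off():
        assert calculate_discount(100.0, "gold") == 90.0

    def test_unknown_tier_pays_full_price():
        assert calculate_discount(100.0, "unknown") == 100.0

    # Human-added after review: the business rule nobody put in the prompt.
    def test_negative_price_is_rejected():
        try:
            calculate_discount(-1.0, "gold")
        except ValueError:
            return
        raise AssertionError("negative prices must be rejected")

    if __name__ == "__main__":
        for check in (test_gold_customer_gets_ten_percent_off,
                      test_unknown_tier_pays_full_price,
                      test_negative_price_is_rejected):
            check()
        print("All checks pass, but only because a human reviewed the edge cases.")

The value isn't in the generated lines. It's that the last test exists only because someone with context asked, "What happens with a negative price?" That question is the part AI won't volunteer.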

The winning formula? AI as an accelerator within a clearly defined product development structure. Like cruise control on a road trip. Helpful, but not a substitute for knowing where you’re going.

So, Where Does That Leave Us?

AI isn’t evil. Low-code platforms aren’t useless. But they are tools. And tools don’t replace strategy, architecture, or collaboration.

In the right hands and within well-defined environments, they shine. Like self-driving on a highway, they reduce effort, increase speed, and free up mental space.

But in complex, evolving, interdependent systems? You need professionals who know how to think in scenarios, deal with ambiguity, and manage risk. You need people who question, not just code.

So next time someone says, “With AI, anyone can build software,” ask this:

“Would you board a plane designed by someone who just described it to ChatGPT?”


No?


Then maybe don’t base your product strategy on it either.
