Andreas Neukoetter

AI Is a Multiplier

February 26, 2026

AI is the strongest performance multiplier engineering has seen in decades. And multipliers widen gaps.

When I started programming, I had to cut holes in punch cards by hand. Later, someone handed me a hole puncher. It felt like cheating. It wasn't.

Every few years, engineering gets a new layer of leverage. A new abstraction. A new tool that removes friction. And every time, the reaction is the same. It's too easy. It hides complexity. It will make people sloppy. It will destroy craftsmanship.

Then the dust settles. The tool becomes normal. And the bar quietly moves up.

AI is not the first time this has happened. It is simply the most aggressive version so far. And this time, the amplification is large enough that pretending it does not matter is no longer a neutral stance. It is a strategic one.

Friction Has Never Been the Point

Punch cards removed the need to wire hardware manually. Compilers removed the need to reason about raw opcodes. Higher-level languages removed the need to manage every byte by hand. Intelligent IDEs began to understand code structure instead of treating it as plain text.

At each step, friction was reduced. At no step did responsibility disappear. In fact, the opposite happened.

As the tooling became more powerful, the responsibility shifted upward. You were no longer debugging assembly, but you were designing systems. You were no longer counting cycles, but you were reasoning about concurrency. You were no longer fighting syntax, but you were accountable for architecture.

AI sits squarely in that pattern.

It removes friction in two ways that matter deeply. It compresses iteration cycles. And it expands analytical reach.

Iteration Speed Is No Longer the Bottleneck

Historically, iteration cost was dominated by typing and lookup. Writing boilerplate. Searching documentation. Sketching patterns from memory.

AI collapses that.

You can now prototype three approaches before lunch. You can refactor aggressively without staring at a blank editor. You can explore alternatives instead of committing early simply because rewriting would be painful.

That changes behavior. It encourages exploration. It reduces attachment to first drafts. It lowers the psychological cost of throwing code away.

Typing was never the scarce resource. Understanding was. AI removes typing as a differentiator. What remains visible is understanding.

Analytical Depth On Demand

AI does not just generate code. It can analyze it.

There is a difference between asking:

"Make me a game."

And asking:

"The current implementation uses RwLock&lt;Vec&lt;T&gt;&gt; to coordinate worker queues across threads. Evaluate whether an mpsc channel would reduce contention. Analyze fairness, backpressure, and potential starvation. Identify code paths without test coverage and estimate the complexity of migrating."

The first prompt produces output. The second produces analysis.

In the first case, AI is a generator. In the second, it is an amplifier of architectural thinking.

It can reason about lock contention patterns. It can highlight untested paths. It can suggest alternative concurrency primitives. It can point out where assumptions are brittle.

But it cannot own the decision.

If you do not understand why RwLock&lt;Vec&lt;T&gt;&gt; behaves the way it does under contention, or how an mpsc channel changes the backpressure characteristics of your system, the AI output is noise.

If you do understand those trade-offs, the AI becomes a force multiplier.

It surfaces edge cases faster. It challenges assumptions. It compresses feedback loops that previously required hours of manual inspection.

That is not cheating. That is leverage.
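To make that trade-off concrete, here is a minimal sketch (illustrative code, not from any real system) of the two queue designs the prompt above asks about: a shared RwLock&lt;Vec&lt;T&gt;&gt;, where every pop needs an exclusive write lock so the read/write split buys nothing, and a std mpsc channel, whose single-consumer Receiver must itself be wrapped in a Mutex to be shared across workers.

```rust
use std::sync::{mpsc, Arc, Mutex, RwLock};
use std::thread;

// Shared-Vec variant: popping mutates the Vec, so every worker
// must take an exclusive write lock. All workers serialize here.
fn drain_rwlock(jobs: Vec<u64>, workers: usize) -> u64 {
    let queue = Arc::new(RwLock::new(jobs));
    let total = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let queue = Arc::clone(&queue);
            let total = Arc::clone(&total);
            thread::spawn(move || loop {
                let job = queue.write().unwrap().pop();
                match job {
                    Some(j) => *total.lock().unwrap() += j,
                    None => break, // queue drained
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let sum = *total.lock().unwrap();
    sum
}

// Channel variant: std::sync::mpsc is single-consumer, so sharing
// the Receiver across workers still requires a Mutex — exactly the
// kind of nuance the analysis prompt asks the AI to surface.
fn drain_mpsc(jobs: Vec<u64>, workers: usize) -> u64 {
    let (tx, rx) = mpsc::channel();
    for j in jobs {
        tx.send(j).unwrap();
    }
    drop(tx); // close the channel so recv() errors once it is empty
    let rx = Arc::new(Mutex::new(rx));
    let total = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            let total = Arc::clone(&total);
            thread::spawn(move || loop {
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(j) => *total.lock().unwrap() += j,
                    Err(_) => break, // sender dropped: queue drained
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let sum = *total.lock().unwrap();
    sum
}

fn main() {
    let jobs: Vec<u64> = (1..=100).collect();
    println!("{}", drain_rwlock(jobs.clone(), 4)); // 5050
    println!("{}", drain_mpsc(jobs, 4)); // 5050
}
```

Both drain the same work; they differ in where threads contend, and a bounded `mpsc::sync_channel` would change the backpressure story again. Those are exactly the characteristics the second prompt asks the AI to analyze, and exactly what the engineer must be able to judge.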

Vibe Coding and Abdication

There is a term floating around recently: "vibe coding".

It usually describes an approach where you prompt your way to a working result. The output compiles. It runs. It demos well. But no one owns it.

Vibe coding is amplification without ownership.

It feels productive because friction is gone. It looks impressive because output appears quickly. But the underlying system is opaque even to the person who prompted it.

Under load, it cracks. Under change, it drifts. Under scrutiny, it dissolves.

This is not an AI problem. It is a mindset problem.

AI did not invent abdication. It merely made it easier to hide. The tool is not responsible for whether someone understands what they commit. That responsibility never moved.

Engineering Has Never Been Evenly Distributed

There is a comfortable myth about engineering performance. That there is a smooth curve. That most people are somewhere in the middle.

In practice, the distribution is lumpy.

There are engineers who think in systems. Who model failure modes. Who reason about trade-offs before they reason about features. Who understand that code is a liability the moment it is written. And there are those who assemble components until something works.

This divide has always existed. AI does not create it. It widens it.

Multipliers widen gaps.

When iteration becomes cheap, those who know what to iterate on move faster. When analysis becomes available on demand, those who can interpret it make better decisions sooner. When friction disappears, understanding is exposed.

An engineer who understands concurrency can use AI to explore lock-free alternatives, model contention scenarios, and stress-test assumptions in minutes. An engineer who does not will produce faster confusion.

The output may look similar at first glance. The long-term trajectory will not.
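The simplest instance of that kind of exploration, replacing a mutex-guarded counter with an atomic, looks like this (an illustrative sketch, not code from the post):

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::thread;

// Mutex version: every increment serializes on the lock.
fn count_mutex(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *c.lock().unwrap() += 1;
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

// Lock-free version: a single atomic fetch_add, no blocking.
fn count_atomic(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    c.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::Relaxed)
}

fn main() {
    // Both arrive at the same count; they differ in how threads contend.
    println!("{}", count_mutex(4, 10_000)); // 40000
    println!("{}", count_atomic(4, 10_000)); // 40000
}
```

The outputs are identical; knowing when the atomic version is actually correct (and when `Relaxed` ordering is enough) is the understanding the tool cannot supply.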

"AI Is Cheating"

The accusation that AI is cheating is revealing.

Cheating implies unfair advantage. It implies skipping effort. It implies that difficulty itself is proof of merit. But difficulty has never been the goal.

Building correct, maintainable, scalable systems has.

If using a compiler instead of hand-written assembly was not cheating, and if using an intelligent IDE that understands your codebase was not cheating, then using a system that can generate and analyze code is not cheating either.

It is the next abstraction layer.

What matters is not whether a tool reduces effort. What matters is whether the engineer understands the system they are responsible for.

AI does not absolve that responsibility. It makes the absence of it more visible.

Three Futures

From a distance, three patterns are emerging.

Some teams lean in deliberately. They establish norms. AI use is expected. Code ownership is non-negotiable. Review standards evolve. Architectural discussions become sharper because iteration is faster and alternatives are easier to explore. Their workflow changes. Their style changes. Their velocity compounds.

Some teams resist. They frame AI as a threat to craftsmanship. They delay adoption. They build policies around prohibition. They will not collapse overnight. They will simply be outpaced.

And some teams allow AI informally. It is used, but not discussed. There are no shared expectations. No explicit guardrails. No clarity about responsibility. The output looks modern. The process remains unchanged. This is the most dangerous position. It produces drift. It produces hidden dependencies. It produces systems no one fully owns.

If AI is banned, it will not disappear. It will move underground. And underground usage rarely comes with standards.

The New Baseline

AI use is quickly becoming expected. Not because it is fashionable. Not because it is magical. But because it compresses iteration and expands analytical depth.

Those are competitive advantages.

Code ownership remains non-negotiable. Understanding remains non-negotiable. Architectural responsibility remains non-negotiable.

The lever has grown longer. Whether that moves your system forward or tears it apart depends on who is holding it.

Multipliers widen gaps. That has always been true. It is simply more visible now.
