The Synthesis Problem AI Did Not Solve

Gary Fuller

Enterprise Digital Experience Architect

Enterprise Architecture · AI · Engineering Leadership

The smartest engineer in the room may become a liability, Aleksandrs Ralovecs recently argued in his article "The Future of Engineering is Not Individual Intelligence", and the reasoning is structural.

Code production used to be the hard part, so organizations built themselves around the engineers who could hold the most in their heads. When one person can now prototype in a day what used to take weeks, the constraint shifts. The hard question is no longer how fast you can build. It is how fast a team can collectively update its understanding of reality.

Defending decisions stops being valuable. Rapidly abandoning partial truths becomes the operational skill. Low ego becomes infrastructure.

The argument is right. It also leaves four structural problems unaddressed.

The bottleneck moved, but someone still has to close the loop

Collective reality assembly is not collective decision making. When a Tuesday prototype surfaces market friction, user confusion, legal exposure, and architectural weakness in the same hour, those findings still have to be integrated into a commitment. Distributed input does not converge on its own.

The architect of the previous era integrated cross-domain signal too. What is different now is not the integration but the conditions. The synthesizer in an AI-native environment cannot acquire personal mastery of the domains they are integrating. The inputs move too fast and the domains are too many.

The old architect built depth over years. Their replacement has hours, and the people feeding them input are themselves operating with capabilities that just changed in ways nobody fully understands.

The old failure was the architect being wrong. The new failure is no decision happening at all, or a decision that ratifies whoever spoke loudest. Without disciplined synthesis, cheap iteration produces paralysis.

Who is the synthesizer on your team, and what are you developing in that person?

Cheap iteration is necessary but not sufficient

The image of product, compliance, operations, and engineering reacting to the same morning prototype assumes prerequisites the technology does not create. Most organizations still route prototypes through approval chains built when iteration was expensive. Cheaper iteration does not dissolve those structures. It produces more artifacts for them to be slow about.

Psychological safety has to exist before compliance will name a problem in week one rather than wait until formal review gives them cover. Cross-functional access has to be designed. Leadership has to reward early problem identification rather than treat it as scope creep.

Strip any condition out and the result is not faster learning. It is faster accumulation of unreviewed work.

AI compresses coordination costs, not knowledge costs

The argument that small teams now operate in spaces previously requiring entire departments conflates two things. One is the coordination overhead of integrating domain experts. The other is the knowledge those experts hold.

AI reduces the first. It does not replace the second.

Compliance is not a coordination problem. It is a knowledge domain with legal stakes. Security is a discipline whose adversaries specialize in exploiting the gap between what a generalist thinks they know and what a specialist actually knows.

Velocity gains get translated into a hiring thesis: fewer specialists, with AI filling the gap. The short-term output looks identical. The long-term liability profile does not.

The ego problem scales up, not just down

Low ego gets framed as an operational requirement for engineers. That framing is incomplete in a way that matters more than the other three problems combined.

The structural blocker is not engineering ego. It is leadership ego. Executives who built careers on being the strategic decision maker face the same identity threat, with substantially more power to resist it and a longer career investment in the model AI is invalidating.

An engineer defending an architecture creates local friction. A VP defending a direction creates organizational immobility.

Every other problem in this article gets worse when leadership cannot model rapid abandonment of partial truths. The synthesis function does not get built. The structural prerequisites do not get built. The hollowing out of specialist knowledge happens anyway, because the org chart has already been justified to the board.

This is a different ask than supporting innovation. The requirement is for leaders to publicly walk back positions when evidence forces it. To kill initiatives they personally championed when the prototype proves them wrong.

If your AI transformation plan does not include explicit work on this, it is built on the assumption that the hardest part of the change will happen by itself.

The question is not whether the game has changed. It is whether you are building for the game actually being played, or a simpler version that skips the parts that are hard.