The convergence point for agentic systems is the protocol layer
Frontier AI labs are no longer competing on raw model capability alone — coordination protocols between agents have become the contested ground.
For most of the past three years, the AI industry has sold itself on benchmarks. MMLU. SWE-bench. ARC. The competitive pitch has been simple: bigger model, higher score, better product. That framing is breaking down.
The reason is structural. Once a single model can reliably handle a multi-step task in isolation, the bottleneck moves elsewhere — to the moment when an agent has to call another agent, hand off context, query a tool that lives in another company's stack, or recover from a partial failure halfway through a job. None of that is a model-quality problem. It is an interoperability problem.
That is why the most consequential releases of the last twelve months have not been new model weights. They have been protocol specifications. Anthropic's Model Context Protocol gave agents a uniform way to discover and call tools. Google introduced Agent2Agent, an open spec for one agent to delegate to another. OpenAI shipped richer function-calling primitives and a server-side tool registry. Each is technically narrow. Together they describe a future in which a buyer no longer purchases an "AI assistant" from a single vendor — they assemble a workflow from agents and tools that may originate anywhere, the way a modern web stack composes services from a dozen suppliers.
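To make the "uniform way to discover and call tools" concrete: MCP is built on JSON-RPC 2.0, where an agent first lists a server's tools and then invokes one by name. A minimal sketch of those two request shapes follows; the `search_tickets` tool and its arguments are hypothetical, invented here for illustration.

```python
import json

def jsonrpc_request(req_id: int, method: str, params: dict) -> dict:
    """Build a JSON-RPC 2.0 request envelope, as MCP messages use."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# Step 1: discover what the server offers.
list_tools = jsonrpc_request(1, "tools/list", {})

# Step 2: call a discovered tool with structured arguments.
# "search_tickets" is a hypothetical tool name, not part of any spec.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "search_tickets",
    "arguments": {"query": "refund", "limit": 5},
})

print(json.dumps(call_tool, indent=2))
```

The point of the envelope is that any client can talk to any server without knowing in advance which tools exist: discovery and invocation share one wire format.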
The economics shift away from monoliths
For incumbents whose moat depended on a closed agent platform, this is uncomfortable. The implicit promise of a closed agent — "only ours can talk to ours" — collapses the moment a customer can wire a Claude-driven coordinator to an OpenAI-driven specialist and a Google-driven retrieval agent through a stable, documented interface. It also removes one of the cleanest justifications for premium pricing: lock-in.
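In code, the "stable, documented interface" is simply a contract every agent honours regardless of vendor. A minimal sketch, with hypothetical classes standing in for vendor-backed agents (a real deployment would wrap each vendor's API behind the same interface):

```python
from typing import Protocol

class Agent(Protocol):
    """The vendor-neutral contract: accept a task, return a result."""
    def handle(self, task: str) -> str: ...

# Hypothetical stand-ins for agents backed by different vendors.
class CodeSpecialist:
    def handle(self, task: str) -> str:
        return f"[code-specialist] done: {task}"

class RetrievalAgent:
    def handle(self, task: str) -> str:
        return f"[retrieval] fetched context for: {task}"

class Coordinator:
    """Routes sub-tasks to whichever agent is registered; vendor unseen."""
    def __init__(self, workers: dict[str, Agent]):
        self.workers = workers

    def run(self, role: str, task: str) -> str:
        return self.workers[role].handle(task)

pipeline = Coordinator({"code": CodeSpecialist(), "search": RetrievalAgent()})
result = pipeline.run("search", "Q3 refund policy")
```

Because the coordinator depends only on the contract, swapping one vendor's specialist for another's is a one-line change in the registry, which is precisely what dissolves the lock-in argument.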
The gains accrue elsewhere. Identity providers will sell agent identity. Observability vendors will sell traces of agent runs. Insurers and auditors will sell verification that an agent did what it claimed. Each of those layers becomes commercially viable only once the underlying coordination is standardised — which is exactly what is now happening.
There is also a quieter implication for safety teams. Inter-agent calls cross trust boundaries. A protocol that tells one agent how to ask another agent for help also tells a hostile agent how to inject instructions, exfiltrate context, or impersonate a tool. The MCP and A2A specs both already include sections on capability scoping and authentication, but the implementation reality across deployed servers is uneven. Several recent reviews of public MCP servers have flagged authentication-bypass and prompt-injection patterns that are not theoretical.
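The mitigation both specs gesture at can be sketched as a gate in front of every inter-agent call: authenticate the caller, check the requested capability against the scopes granted to that identity, and refuse everything else by default. The grant table and scope names below are hypothetical; a production system would back them with signed credentials rather than an in-memory dict.

```python
from dataclasses import dataclass

# Hypothetical grant table: which authenticated caller identities may
# invoke which capabilities. Illustrative only.
GRANTS = {
    "billing-agent": {"tickets.read"},
    "support-agent": {"tickets.read", "tickets.write"},
}

@dataclass
class AgentCall:
    caller_id: str    # authenticated identity of the calling agent
    capability: str   # what the caller is asking to do

def authorize(call: AgentCall) -> bool:
    """Deny by default: unknown callers and out-of-scope requests fail."""
    return call.capability in GRANTS.get(call.caller_id, set())

allowed = authorize(AgentCall("support-agent", "tickets.write"))
out_of_scope = authorize(AgentCall("billing-agent", "tickets.write"))
unknown = authorize(AgentCall("rogue-agent", "tickets.read"))
```

Note the asymmetry: the gate never needs to recognise an attack, only to recognise a grant. Anything not explicitly scoped is refused, which is the posture the uneven deployed implementations mentioned above tend to lack.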
For enterprise buyers, the practical implication is that procurement criteria need to change. "Which model do you use?" matters less than it did. "Which protocols do your agents speak, how is identity propagated across calls, and how is a hostile call refused?" matters considerably more. The labs that win the next phase will be the ones whose agents can survive in the open — talking to systems they did not build, under terms they did not write.