Why Defense Primes Are Rewriting Software Faster Than Hardware Acquisition Cycles Allow

Agentic systems now iterate in weeks while platform lifecycles stretch across decades, forcing a fundamental rupture in how DoD manages technology refresh.

By Dr. Shayan Salehi H.C.

The F-35 Joint Strike Fighter began development in 2001 and will remain in service until at least 2070. OpenAI released GPT-4 in March 2023 and deprecated it operationally by late 2024. This velocity asymmetry is not a curious footnote but the central strategic dilemma facing every program executive officer, chief engineer, and acquisition leader in aerospace and defense today. Hardware platforms built for 40-year lifespans now host software components that turn over every quarter, and the integration surface between the two has become the highest-stakes battleground in defense technology. The question is no longer whether to embed agentic AI into mission systems but how to architect platforms so they can absorb continuous intelligence upgrades without triggering recertification cascades that paralyze fielding timelines.

This is not theoretical. In January 2026, the Air Force Life Cycle Management Center issued updated guidance requiring all Category I aircraft modifications to demonstrate software component traceability via cryptographically signed manifest logs, effectively mandating distributed ledger infrastructure for any platform receiving agentic decision-support modules. The policy emerged after two separate incidents in 2025 where autonomously generated flight-plan optimizations conflicted with legacy mission computers, forcing emergency software rollbacks that grounded aircraft for 72 hours. The cost of those groundings, measured in lost sorties and delayed operational evaluations, exceeded $140 million. The policy response was swift: if you cannot prove the provenance and version state of every AI agent touching mission-critical code, you cannot deploy.
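The core of a signed-manifest requirement is straightforward to illustrate. The sketch below is a minimal stand-in for the idea, not the AFLCMC format: it uses Python's stdlib `hmac` as a placeholder for the asymmetric signatures and hardware-backed keys a real program would require, and the component names and fields are illustrative.

```python
import hashlib
import hmac
import json

# Illustrative shared key; a real system would use asymmetric keys held in an HSM.
SIGNING_KEY = b"program-office-demo-key"

def build_manifest(components: dict[str, bytes]) -> dict:
    """Record a SHA-256 digest for every software component in the build,
    then sign the whole set so any later substitution is detectable."""
    entries = {name: hashlib.sha256(blob).hexdigest()
               for name, blob in components.items()}
    payload = json.dumps(entries, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"components": entries, "signature": signature}

def verify_manifest(manifest: dict) -> bool:
    """Re-derive the signature over the claimed component digests and
    compare in constant time."""
    payload = json.dumps(manifest["components"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

A deployment gate then reduces to one check: if `verify_manifest` fails for any module on the aircraft, the load does not proceed.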

The Certification Trap and the Agentic Escape Hatch

Traditional defense software operates under DO-178C for airborne systems and DO-254 for hardware, standards designed when software was static, deterministic, and exhaustively testable. Agentic systems violate every assumption underpinning those frameworks. A large language model fine-tuned on maintenance logs and integrated into predictive logistics does not have a fixed decision tree. Its outputs depend on training data composition, inference-time temperature settings, retrieval-augmented generation pipelines, and emergent reasoning patterns that shift with model updates. Certifying such a system under DO-178C Level A would require proving that every possible input produces a safe output, a combinatorial impossibility.

The pragmatic workaround now gaining traction is architectural, not procedural. Defense primes are implementing what Lockheed Martin's Skunk Works division calls "agent containment zones": software partitions where agentic systems operate under continuous human-supervised evaluation but are architecturally isolated from safety-critical control surfaces. The agent proposes, the deterministic controller disposes. Northrop Grumman's B-21 Raider, delivered in late 2025, uses this model for its mission planning suite. The AI agent ingests threat intelligence, weather data, and electronic order of battle updates, then generates optimized ingress routes. But the final navigation commands pass through a formally verified gateway that applies hard geometric and performance constraints, ensuring no agent hallucination can command a flight profile outside the aircraft's certified envelope.

This architecture enables rapid iteration. When a new foundation model is released, the agent can be swapped, retrained, and redeployed without recertifying the entire avionics stack. The gateway remains static and certified. The intelligence layer evolves. The tradeoff is added latency and reduced operational flexibility, but for programs facing 24-month certification timelines, the ability to refresh intelligence components in 90 days is worth the overhead.
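The "agent proposes, controller disposes" pattern can be sketched in a few lines. Everything below is hypothetical: the envelope limits and waypoint fields are placeholders, and a real gateway would be formally verified, not written in Python. The point is only the shape of the interface: the agent's output never reaches the flight controls unclamped.

```python
from dataclasses import dataclass

# Hypothetical certified-envelope limits; real values come from the
# airframe's certified flight manual, never from the AI layer.
MIN_ALT_FT = 500
MAX_ALT_FT = 50_000
MAX_SPEED_KT = 600

@dataclass(frozen=True)
class Waypoint:
    alt_ft: float
    speed_kt: float

def gateway_filter(proposed: list[Waypoint]) -> list[Waypoint]:
    """Deterministic gateway: clamp every agent-proposed waypoint to the
    certified envelope. The agent proposes; this layer disposes."""
    def clamp(wp: Waypoint) -> Waypoint:
        return Waypoint(
            alt_ft=min(max(wp.alt_ft, MIN_ALT_FT), MAX_ALT_FT),
            speed_kt=min(wp.speed_kt, MAX_SPEED_KT),
        )
    return [clamp(wp) for wp in proposed]
```

Because `gateway_filter` has no dependence on the agent's internals, the agent behind it can be retrained or replaced without touching the certified code path.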

Distributed Ledgers as the New Configuration Management Backbone

Configuration management in aerospace has historically relied on centralized databases, change control boards, and paper trails. That model collapses when software updates arrive weekly and span globally distributed DevSecOps pipelines involving contractors, subcontractors, and coalition partners. The F-35's Autonomic Logistics Information System (ALIS), intended to centralize fleet health data, became infamous for its inability to handle version drift across international partners. By 2024, the system had logged over 80,000 unresolved configuration discrepancies.

The successor system, now branded ODIN (Operational Data Integrated Network), incorporates a permissioned distributed ledger to timestamp and cryptographically attest every software build, configuration change, and agent deployment. Each node, whether a maintainer's tablet in Poland or a mission planning cell at Nellis Air Force Base, writes immutable records of what software version was loaded, when, and by whom. When an AI agent flags an anomaly in sensor fusion performance, investigators can trace the exact configuration ancestry of every software module in the chain, across organizational boundaries, without depending on a single vendor's proprietary database.
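The property such a ledger provides is tamper-evidence through hash chaining: each record embeds the hash of its predecessor, so altering any historical entry breaks every hash after it. The toy class below illustrates that mechanism only; it is a single-node stand-in, not ODIN's architecture, and the field names are invented for the example.

```python
import hashlib
import json
import time

class ConfigLedger:
    """Append-only, hash-chained log of configuration events. A toy
    stand-in for a permissioned distributed ledger: each record embeds
    the hash of its predecessor, so tampering breaks the chain."""

    def __init__(self):
        self.chain = []

    def append(self, node: str, module: str, version: str) -> dict:
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"node": node, "module": module, "version": version,
                  "ts": time.time(), "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Walk the chain, recomputing every hash and link."""
        prev = "0" * 64
        for rec in self.chain:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if rec["prev"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

An investigator tracing a sensor fusion anomaly walks this chain backward from the current load to reconstruct the configuration ancestry, regardless of which organization wrote each record.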

Raytheon integrated a similar ledger architecture into its Next Generation Jammer Mid-Band (NGJ-MB) system for the EA-18G Growler. The jammer's threat library now updates via over-the-air pushes coordinated through a Hyperledger Fabric network. Each update is validated by multiple coalition partner nodes before propagation, ensuring that adversarial actors cannot inject malicious threat parameters into the library. The system went operational in February 2026 and has processed 37 threat library updates in eight weeks, a tempo impossible under the previous manual distribution process.
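Multi-party validation before propagation is, at bottom, a quorum rule. The sketch below shows the shape of that rule under stated assumptions: the partner names, shared-secret HMAC endorsements, and two-of-three threshold are all illustrative, where a Hyperledger Fabric network would instead use X.509 identities and a declarative endorsement policy.

```python
import hashlib
import hmac

# Hypothetical coalition partner keys for illustration only.
PARTNER_KEYS = {"USA": b"key-usa", "AUS": b"key-aus", "GBR": b"key-gbr"}
QUORUM = 2  # endorsements required before an update propagates

def endorse(partner: str, update: bytes) -> str:
    """A partner node's endorsement over the exact update bytes."""
    return hmac.new(PARTNER_KEYS[partner], update, hashlib.sha256).hexdigest()

def may_propagate(update: bytes, endorsements: dict[str, str]) -> bool:
    """Accept a threat-library update only if at least QUORUM known
    partners produced a valid endorsement over the same bytes."""
    valid = sum(
        1 for partner, sig in endorsements.items()
        if partner in PARTNER_KEYS
        and hmac.compare_digest(sig, endorse(partner, update))
    )
    return valid >= QUORUM
```

An adversary who compromises a single node cannot push a malicious threat parameter, because the tampered bytes will not match the endorsements the other partners computed.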

Tactical Edge Computing and the Latency Budget

Agentic intelligence is computationally expensive. A single inference pass through a 70-billion-parameter model can require tens of gigabytes of GPU memory and hundreds of milliseconds of compute time. In a data center, that is manageable. In a fighter cockpit at Mach 1.8 under electronic attack, it is catastrophic. The latency budget for a combat identification decision is measured in milliseconds, not seconds, and backhaul to cloud infrastructure is not an option when the adversary is jamming satellite uplinks.

This has driven an architectural shift toward tactical edge computing, where inference happens on ruggedized hardware mounted in the aircraft, vehicle, or forward operating base. NVIDIA's IGX Orin platform, designed for autonomous vehicles, has been qualified for airborne use by multiple primes and now appears in the mission systems of the Army's Future Long-Range Assault Aircraft (FLRAA) and the Navy's MQ-25 Stingray. These systems run quantized models, pruned to fit within thermal and power envelopes, and execute inference locally. The models are trained centrally and distributed via secure channels, but inference is federated.
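Quantization is the main lever for fitting a model into an edge thermal and power envelope. The minimal sketch below shows symmetric per-tensor int8 quantization, the simplest variant: map each weight onto the integer range [-127, 127] and keep one float scale factor, trading precision for roughly a 4x memory reduction versus float32. Production toolchains use far more sophisticated schemes; this is the concept only.

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor int8 quantization: scale by the largest
    magnitude so every weight lands in [-127, 127]."""
    peak = max(abs(w) for w in weights)
    if peak == 0.0:
        return [0] * len(weights), 1.0  # all-zero tensor: nothing to scale
    scale = peak / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights; error is bounded by scale/2."""
    return [qi * scale for qi in q]
```

The same idea, applied layer by layer with calibration data, is what lets a centrally trained model run locally within the ruggedized hardware's memory and power budget.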

The operational implication is profound. An AI agent managing sensor fusion on a distributed multi-domain command and control network does not phone home for every decision. It synthesizes data from onboard sensors, fuses inputs from wingmen via tactical datalink, and updates its threat assessment autonomously. The human operator receives decision-quality intelligence without waiting for a satellite pass or relying on a network that may not survive the first minutes of a contested fight.

Boeing's MQ-28 Ghost Bat, an uncrewed combat aircraft that first flew in 2023 and achieved initial operational capability in late 2025, exemplifies this architecture. The aircraft carries a Xilinx Versal AI Core adaptive SoC running real-time object detection and mission planning agents. The system updates its behavioral models between sorties via ground data transfer, but in flight it operates entirely disconnected, making teaming decisions with manned aircraft based on learned coordination policies.

Zero-Trust Architecture as the Default Posture

The integration of AI agents and distributed ledger infrastructure has coincided with, and in many cases necessitated, the wholesale adoption of zero-trust security models. Legacy defense networks operated on perimeter defense: hard outer shell, soft interior. Once authenticated, a user or system was trusted across the enclave. That model fails catastrophically when agentic systems autonomously traverse security domains, pulling data from intelligence databases, mission planning tools, and maintenance systems, all within seconds.

The Department of Defense's zero-trust strategy, published in final form in late 2024, mandates continuous authentication, least-privilege access, and micro-segmentation for all systems handling Controlled Unclassified Information (CUI) or higher. For AI agents, this means every API call, every data retrieval, and every inference request must be independently authorized based on role, context, and risk posture. An agent running on a deployed platform in a contested environment operates under different trust assumptions than the same agent running in a test facility.
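Per-request authorization for an agent reduces to evaluating each call against role, context, and risk, with no trust carried over between requests. The sketch below is a deliberately simplified policy check under invented assumptions: the roles, resources, and risk thresholds are hypothetical, and a real system would use a dedicated policy engine rather than hard-coded tables.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestContext:
    role: str          # e.g. "mission_planning_agent"
    environment: str   # "test_facility" or "deployed_contested"
    resource: str      # data source the agent is calling
    risk_score: float  # 0.0 (benign) .. 1.0 (hostile indicators)

# Hypothetical least-privilege table: each role may touch only the
# resources it needs, and risk ceilings tighten in contested environments.
POLICY = {
    "mission_planning_agent": {"threat_intel", "weather", "eob"},
    "maintenance_agent": {"maintenance_logs"},
}
RISK_CEILING = {"test_facility": 0.9, "deployed_contested": 0.4}

def authorize(ctx: RequestContext) -> bool:
    """Evaluate every call independently on role, resource, and risk
    posture; no session-level trust survives between requests."""
    allowed = POLICY.get(ctx.role, set())
    ceiling = RISK_CEILING.get(ctx.environment, 0.0)
    return ctx.resource in allowed and ctx.risk_score <= ceiling
```

Note how the same agent making the same request is denied in the contested environment that would be permitted in the test facility, which is exactly the context-dependence the strategy mandates.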

Northrop Grumman's open mission systems architecture for the B-21 implements zero-trust at the messaging layer. Every sensor feed, every command, and every status update is signed and encrypted, and the receiving module validates the sender's identity and clearance before processing. This incurs computational overhead, but it eliminates entire classes of spoofing and injection attacks. When an adversary attempts to inject false radar tracks into the sensor fusion pipeline, the system rejects the input at the transport layer, long before it reaches the AI agent.

What to Do Next Quarter

If you are leading technology strategy or acquisition for an aerospace or defense program, three actions are immediately executable. First, audit your software architecture to identify where agentic components will integrate and design isolation boundaries that allow agent refresh without recertification of the entire stack. This is not a software problem but a systems engineering problem, and it must involve your chief engineers and certification authorities now, not after the agent is built. Second, pilot a permissioned distributed ledger for configuration management on at least one program of record, preferably one with coalition partners or complex subcontractor networks. The goal is not blockchain for its own sake but immutable, auditable provenance for software components that update continuously. Third, conduct a zero-trust readiness assessment for every system that will host agentic decision support, focusing on API security, credential lifecycle management, and micro-segmentation. The adversary is already inside your network. Your architecture must assume that and enforce least-privilege access at every transaction.

Tags: ai-agents, defense-acquisition, software-defined-platforms, zero-trust-architecture, tactical-edge-computing, multi-domain-command, satellite-constellation, distributed-ledger