We are training virtual gladiators today that will rewrite the rules of aerial combat tomorrow. Labs and war games no longer pit one jet against another. They pit hundreds of low-cost, networked aircraft against a handful of high-end fighters, each side governed by different economics and different decision rules. The result is not a single dogfight but a new contest of scale, coordination, and graceful failure.

Modern simulation toolchains stitch together live, virtual, and constructive elements so a human pilot in an F-35 can be measured against a thousand simulated agents, or a single operator can task a mixed swarm of physical and virtual drones. Those demonstrations are not hypothetical. Research programs and field experiments have shown single operators and integrated architectures controlling 100-plus platforms in urban-style scenarios, turning what was once science fiction into an operationally relevant testbed for tactics.

On the autonomy side, multi-agent reinforcement learning and other distributed decision methods have matured enough to generate emergent swarm behaviors in air combat simulations. Academic teams are now publishing confrontation models in which swarms autonomously allocate targets, perform saturation tactics, and adapt to jamming or kinetic losses. Those papers are explicit about strengths and limits: swarms excel at mass, unpredictability, and attritable redundancy, but they are fragile where communications, sensing, or power chains are constrained. Simulations therefore shift the debate from whether swarms can fight to how and where they will fight.
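To make "autonomous target allocation" concrete, here is a minimal greedy-assignment sketch of the kind of problem those confrontation models study. Everything in it is invented for illustration; published models typically use learned policies or auction-style protocols rather than this nearest-target heuristic.

```python
import math

def allocate(drones, targets):
    """Each drone claims the nearest still-unclaimed target; once every
    target is claimed, remaining drones double up on the closest one.
    Positions are (x, y) tuples; this is a toy heuristic, not a published
    swarm algorithm."""
    assignment = {}
    unclaimed = list(range(len(targets)))
    for i, pos in enumerate(drones):
        pool = unclaimed if unclaimed else list(range(len(targets)))
        j = min(pool, key=lambda t: math.dist(pos, targets[t]))
        assignment[i] = j
        if j in unclaimed:
            unclaimed.remove(j)
    return assignment

drones = [(0, 0), (10, 0), (4, 5)]
targets = [(1, 1), (9, 1)]
print(allocate(drones, targets))  # {0: 0, 1: 1, 2: 0} — third drone doubles up
```

Even this crude rule produces the saturation behavior the papers describe: once targets outnumber remaining interceptor channels, surplus drones pile onto already-assigned targets rather than idling.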

The manned side is not standing still. The U.S. services and industry have pushed manned-unmanned teaming experiments that aim to turn fighters into quarterbacks for loyal, expendable wingmen. Flight trials of attritable, AI-enabled loyal wingman concepts have shown the tactical promise of pairing a pilot with autonomous aircraft that scout, draw fire, and extend sensor coverage. That pairing radically changes the payoff matrix in simulated engagements: a single pilot can press an advantage deeper into contested airspace when supported by low-cost collaborators.

When simulations place swarms against traditional air defenses and fighters, they repeatedly reveal two themes. First, cost exchange matters. Low-cost swarms can impose strategic effects simply by forcing defenders to expend expensive interceptors and sensor cycles. Second, layered defenses remain potent. Electronic warfare, high-power microwave, and directed-energy concepts demonstrated in trials can blunt or collapse swarm cohesion when employed at scale. In short, swarms are an exponential problem for linear defenses, but not an unstoppable one.
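The cost-exchange logic is simple arithmetic. The sketch below uses hypothetical figures (the costs, and the two-interceptors-per-track doctrine, are assumptions for illustration, not numbers from the trials discussed above):

```python
# All figures are illustrative assumptions, not sourced data.
DRONE_COST = 50_000           # cheap attritable node, USD (assumed)
INTERCEPTOR_COST = 1_000_000  # high-end interceptor missile, USD (assumed)
SHOTS_PER_DRONE = 2           # assumed doctrine: two interceptors per track

def exchange_ratio(drones_killed: int) -> float:
    """Defender dollars spent per attacker dollar destroyed."""
    defender_spend = drones_killed * SHOTS_PER_DRONE * INTERCEPTOR_COST
    attacker_loss = drones_killed * DRONE_COST
    return defender_spend / attacker_loss

print(exchange_ratio(100))  # -> 40.0: the defender pays $40 per $1 of swarm destroyed
```

Under these assumptions a "successful" defense that kills every drone still loses the economic exchange forty to one, which is exactly why cheap mass imposes strategic effects even when it fails tactically.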

A realistic simulated dogfight therefore becomes a chess match of asymmetries. A defender will model and test layered responses in which electronic attack scrambles links, HPM or lasers disrupt electronics, kinetic interceptors consume the swarm’s numbers, and deception or camouflage denies kill chains. An attacker will test mixed compositions, combining cheap kamikaze nodes, faster jet-propelled entrants, decoys, and electronic cover with distributed autonomy that tolerates high attrition. Simulation allows both sides to discover inflection points: the swarm size at which defenders exhaust interceptors, the communications density beyond which jamming is ineffective, and the cost threshold beyond which adding another drone yields less marginal effect than investing in bio-inspired sensors or AI.
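The first of those inflection points, magazine depth, can be sketched with a toy model. The stock size, shots-per-drone doctrine, and kill probability below are all assumptions chosen for illustration:

```python
def leakers(swarm_size: int, interceptor_stock: int,
            shots_per_drone: int = 2, p_kill_per_salvo: float = 0.9) -> float:
    """Expected drones surviving the interceptor layer in a toy model:
    the defender engages drones until the magazine runs dry, then every
    unengaged drone leaks through. All parameters are assumed."""
    engageable = min(swarm_size, interceptor_stock // shots_per_drone)
    survivors_of_engaged = engageable * (1 - p_kill_per_salvo)
    unengaged = swarm_size - engageable
    return survivors_of_engaged + unengaged

# Sweep swarm size against a 200-interceptor magazine (100 engageable tracks):
for n in (50, 100, 125, 150):
    print(n, round(leakers(n, 200), 1))
```

The sweep shows the inflection clearly: below 100 drones, leakage grows slowly with the salvo-failure rate; past 100, every additional drone leaks through untouched, and losses jump from roughly 10 to 35 between swarm sizes of 100 and 125. That discontinuity is the kind of threshold LVC campaigns exist to locate.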

Simulations are also essential for revealing practical limits. Artificially synchronized swarms that rely on continuous data exchange collapse quickly under realistic jamming. Battery life and logistics constrain sortie density. Manufacturing and supply chains limit how many disposable nodes a nation can field before political will and budgets fray. Simulated campaigns that model logistics and electromagnetic environments therefore produce far different outcomes than sterile, perfect-communications dogfights. Good red teams fold those realities into multilayered LVC campaigns.
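Why tightly synchronized swarms collapse under jamming is, at bottom, a compounding-probability argument. The sketch below assumes a swarm whose coordination requires every control link to work on every decision cycle; the link count, cycle count, and per-link loss probability are invented stand-ins for jamming intensity:

```python
def sync_survival(p_link_loss: float, links: int, cycles: int) -> float:
    """Probability a fully synchronized swarm keeps cohesion: every one
    of `links` control links must work on every one of `cycles` decision
    cycles. A toy model with assumed, independent link failures."""
    per_cycle = (1 - p_link_loss) ** links
    return per_cycle ** cycles

# Even 5% link loss per cycle is fatal over 10 links and 20 cycles:
print(sync_survival(0.05, 10, 20))
```

With only a 5% per-link loss rate, the survival probability falls to a few parts in 100,000, whereas a locally autonomous node that needs no links at all is unaffected. This is the quantitative case for resilient, low-bandwidth autonomy made in the recommendations below.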

Ethics and command and control cannot be afterthoughts in these virtual dogfights. Simulations can and do test autonomy thresholds, but they must also model human-in-the-loop controls, escalation risks, and attribution failures. A swarm that autonomously selects kinetic targets in a complex environment could create catastrophic legal and political fallout even if the tactical payoff looks attractive in a numbers game. The laboratory therefore must include moral and policy red teams alongside algorithmic adversaries.

What should militaries do next based on simulation results so far? Fund more live, virtual, and constructive exercises that include realistic EW, directed-energy, and logistical constraints. Prioritize resilient, low-bandwidth autonomy that survives contested communications. Build counter-swarm prototypes now and evaluate them in the same LVC worlds used to evolve swarm tactics. And finally, accept that the future of air combat will be hybrid: not simply swarms replacing fighters, but swarms and fighters interleaving roles in which cost, stealth, speed, and decision authority are traded in new combinations.

Simulated dogfights teach an uncomfortable lesson. Air superiority will not be decided only by better avionics or stealth. It will be decided by who integrates scale, autonomy, logistics, and countermeasures into a coherent system faster. For strategists and technologists that is an invitation and a warning: win the simulation design and you shape the doctrine; ignore the realities the simulation reveals and you will fight yesterday’s fights with tomorrow’s losses.