Ethan Brooks: You paint bleak pictures of the battlefield, but let us start with the concrete. Where are we now with swarms and autonomous collectives?

Dr. Mira Calder: The technology pipeline is already in motion. Programs like DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) have demonstrated how hundreds of small aerial and ground agents can be coordinated, tested and iterated in realistic urban scenarios. Those efforts trace the technology path from lab experiments to doctrine and field tactics.

EB: The public has also seen swarms in combat. How real is that threat on current battlefields?

MC: Very real. Recent combat in Europe and the broader Middle East has normalized massed small-vehicle attacks, improvised decoy tactics and AI-assisted coordination to defeat defences that assumed single, manned platforms. In several conflicts, researchers and journalists have documented how operators and engineers use low-cost AI and networking to make drone groups far more effective than a lone UAV. That is not science fiction.

EB: You often talk about emergent behaviors in swarms. Give an example of a genuinely dystopian outcome.

MC: Imagine millions of tiny agents distributed across a theater to create persistent area denial. They work like a hostile ecology: lightweight loitering munitions that self-organize to block supply routes, or micro-vehicles that disable infrastructure nodes without returning to base. Now add autonomy that can reassign roles when an agent is lost, and adversarial learning that adapts tactics on the fly. The result is a battlespace that can be reshaped algorithmically, not just by commanders.
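
[To make the role-reassignment behavior Calder describes concrete, here is a minimal, purely illustrative Python sketch. The agent roles, the greedy nearest-neighbour handoff rule and all names are assumptions invented for this example, not a description of any fielded system.]

```python
# Illustrative only: a toy model of decentralized role reassignment in a swarm.
import math
import random

class Agent:
    def __init__(self, agent_id, x, y, role):
        self.id = agent_id
        self.x, self.y = x, y
        self.role = role          # e.g. "scout", "blocker", "relay", or "idle"
        self.alive = True

def reassign_roles(swarm, required_roles):
    """Greedy handoff: any required role left uncovered after losses is given
    to the nearest surviving idle agent, so coverage degrades gracefully."""
    survivors = [a for a in swarm if a.alive]
    covered = {a.role for a in survivors}
    for role, (rx, ry) in required_roles.items():
        if role in covered:
            continue
        idle = [a for a in survivors if a.role == "idle"]
        if not idle:
            break                 # no spare capacity; the role stays uncovered
        nearest = min(idle, key=lambda a: math.hypot(a.x - rx, a.y - ry))
        nearest.role = role

# Toy run: three required roles, five agents, one agent lost per step.
required = {"scout": (0, 10), "blocker": (5, 5), "relay": (10, 0)}
swarm = [Agent(i, random.uniform(0, 10), random.uniform(0, 10), "idle") for i in range(5)]
reassign_roles(swarm, required)
for step in range(3):
    random.choice([a for a in swarm if a.alive]).alive = False   # simulate a loss
    reassign_roles(swarm, required)
    print(step, {a.id: a.role for a in swarm if a.alive})
```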

EB: That sounds like a nightmare for attribution and escalation control.

MC: Exactly. Attribution becomes fuzzy when swarms incorporate commercial hardware and open-source autonomy. The cheapness and modularity of the components mean non-state actors or proxies can mount disruptive campaigns. When attackers add deception, mixing decoy units with a small number of lethal agents, defenders are forced into either a costly, resource-intensive intercept posture or a paralysis that lets swarms dictate local outcomes. We have seen variants of that approach in recent conflicts.

EB: Many technologists argue that human-in-the-loop safeguards will prevent the worst outcomes. Do you buy that?

MC: Human oversight helps but it is not a panacea. When systems operate at machine speed or in degraded communications environments, meaningful human control can become a paper policy rather than an operational reality. Additionally, the more decision authority is pushed into algorithms to handle contested communications or rapid adaptation, the harder it becomes to certify that humans are really in control when mistakes or unexpected emergent behaviors occur. We must be realistic about the limits of oversight in the middle of a firefight.

EB: What about legal and diplomatic fixes? Are nations moving to stop an arms race in autonomous swarms?

MC: There is momentum, but no agreement on a strict ban. Multilateral fora are actively discussing lethal autonomous weapons and frameworks to preserve human accountability. The UN’s Convention on Certain Conventional Weapons has convened a Group of Governmental Experts on lethal autonomous weapons systems to consider instruments addressing emerging military AI technologies. At the same time, civil society and many experts urge a stronger normative response. Those diplomatic tracks are necessary but slow.

EB: If states cannot agree on prohibition, what defensive and policy levers should we prioritize now?

MC: Three levers matter most.

1) Defensive depth and resilience: invest in layered, affordable counter-swarm systems that do not rely on single high-end interceptors. Expect saturation attacks and design defences that are distributed, rapidly manufacturable and cyber resilient.

2) Arms management and norms: push for export controls, interoperability standards for identification friend or foe, and shared incident-reporting mechanisms. Even if a formal ban is unattainable in the short term, norms can raise the political cost of certain uses.

3) Research governance: the AI research community must adopt and enforce nonproliferation norms for dual-use autonomy, similar to other scientific domains where misuse risks are high. A strong tradition of voluntary restraint, backed by industry and funders, can slow irresponsible diffusion. The AI community has already organized open letters and pledges about autonomous weapons, which set an ethical baseline for researchers.

EB: Can you be more specific about the kinds of dystopias that keep you up at night?

MC: One scenario is state-level automation of repression. Networks of autonomous aerial and ground agents tied to surveillance AI enable persistent, low-cost suppression of dissent. Another is escalation via deniable swarm attacks: a proxy launches a small swarm, it causes disproportionate damage, and attribution is muddled by hacked or commercial platforms. Worse still is automated miscalculation. Imagine two actors deploying adaptive swarms whose machine learning seeks to outmaneuver the other. Their rapid cycles of adaptation could interact in unforeseen ways, producing cascades of escalation.
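
[To illustrate the feedback loop behind that last scenario, here is a deliberately toy Python sketch of two adaptive postures reacting to each other. The linear update rule, gains and units are invented for this example; the only point is that mutually reactive adaptation can compound.]

```python
# Illustrative only: two systems that each raise their posture in proportion
# to what they last observed from the other. With a combined feedback gain
# above 1, small perturbations grow into an escalation cascade.

def escalate(steps=10, gain_a=0.6, gain_b=0.7, noise=0.05):
    a, b = 0.1, 0.1                        # initial postures (arbitrary units)
    history = []
    for t in range(steps):
        a_next = a + gain_a * b + noise    # A reacts to B's last observed posture
        b_next = b + gain_b * a + noise    # B reacts to A's last observed posture
        a, b = a_next, b_next
        history.append((t, round(a, 2), round(b, 2)))
    return history

for step, a, b in escalate():
    print(f"step {step}: A={a}, B={b}")
```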

EB: Sounds bleak. Is there any practical hope?

MC: There is hope. First, swarms also offer defensive advantages for humanitarian missions: search and rescue, rapid infrastructure repair and environmental monitoring. Second, the history of arms control shows that norms and technical constraints can shape behavior. Finally, political will changes quickly after consequential incidents. The key is preparation now: hardening civilian infrastructure, building interoperable defences, and forging international mechanisms for rapid incident review. Practical governance reduces the chance that a technological surprise becomes a strategic catastrophe. Recent analyses of drone use in major theaters reveal both vulnerabilities and adaptation pathways we can learn from.

EB: Last question. If you could say one blunt thing to military planners, legislators and technologists, what would it be?

MC: Do not treat swarms as just another weapons program. They are a systemic change agent. They rewire how militaries sense, decide and act. That rewiring affects civilians in cities as much as soldiers in the field. If you build them without safeguards, you will get the dystopia you imagined. If you build them with humility, transparency and robust international cooperation, you might keep them from making the worst violence routine. The difference is not inevitable; it is political.

EB: Thank you. Your warning is clear and unnerving. We will keep pressing the question of what kind of future we choose.

MC: That choice is still ours. But technology will make it easier to choose badly if we are complacent.