We have grown used to talking about human enhancement as if it were a one-way elevator to better soldiers and more capable militaries. But the biology of tradeoffs is unforgiving. What counts as an enhancement in one context can become a structural vulnerability in another. By 2025 the conversation is shifting toward a darker, subtler concept: disenhancement. In the military context the term names a set of risks in which biotech interventions, whether intentional or adversarial, produce net losses in individual or force capability rather than gains.
Disenhancement emerges in at least three forms. First, there are the unintended tradeoffs of well-meaning enhancements. Genetic edits or pharmacological regimens designed to increase resilience, reduce sleep needs, or blunt pain may introduce new medical susceptibilities or neurocognitive side effects that surface months or years later. Second, there is deliberate coercive disenhancement, in which institutions impose biological constraints on recruits to make them more controllable or docile. Third, there is adversary-enabled disenhancement, in which hostile actors weaponize knowledge of common enhancements or genetic variants to target and degrade a force's effectiveness.
The history of biomedical surprises offers stark precedents. A canonical example is the CCR5-Δ32 deletion, a genetic variant that confers resistance to HIV infection but has been tied to worse outcomes from West Nile virus infection in human studies. That tradeoff shows how removing or altering a single molecular function can help against one threat and worsen outcomes against another. In military terms the lesson is simple and brutal: an intervention that nets advantage against one hazard can magnify vulnerability to others, and those others can be precisely the threats an adversary chooses to exploit.
The neurotechnology frontier sharpens the hazard. Programs aimed at giving soldiers faster, hands-free control over unmanned systems and tighter human-machine teaming are technically exciting and militarily tempting. DARPA's Next-Generation Nonsurgical Neurotechnology (N3) program illustrates this urge to translate neuroscience into operational speed and bandwidth for able-bodied service members. The program seeks bidirectional, high-resolution, nonsurgical interfaces that could let a warfighter command systems with thought-like intent. That capability also creates new attack surfaces. Writing into neural circuits, or even high-fidelity reading of intent, opens the possibility of cognitive disruption, manipulation of affect and decision-making, and persistent neural side effects that degrade a soldier's long-term functioning. The device that amplifies situational awareness in the field can become a vector of disenhancement if misused, or if its biology behaves unpredictably under stress.
A more modern worry sits at the intersection of AI and synthetic biology. The AIxBio convergence has lowered entry barriers for biological design and made precise, bespoke agents more attainable for sophisticated actors. Analyses of this convergence warn that adversaries could design agents that exploit population-level genetic patterns or the predictable side effects of deployed enhancements. In other words, an enemy might craft a biological or chemical agent that is especially damaging to soldiers who share a particular engineered trait or who are on a given enhancement regimen. That is disenhancement by design.
Ethics and consent complicate everything. Military service is built on hierarchy and obligation. Research and deployment of biomedical interventions for service members must contend with pressured consent, coercive incentives, and the reality that the line between treatment and enhancement blurs under mission pressure. Scholarly work on gene editing and military research emphasizes informed consent, careful risk-benefit analysis, and the special ethical problems of applying nascent genomic techniques to populations organized by command. Those cautions matter because the people most likely to be first asked to adopt frontier interventions are those in constrained institutional settings.
What does this mean for policy and force posture? Four priorities should guide defense planners and ethicists.
- Design for reversibility and monitoring. Any biomedical intervention intended for military use must come with a credible pathway for reversal, long-term surveillance, and independent safety data. Longitudinal monitoring systems and nonmilitary oversight can detect late-arising harms before they cascade into force-wide problems. Medical countermeasures and contingency plans should be part of any deployment, not an afterthought.
- Treat enhancement as dual-use infrastructure. Investments that accelerate enhancements also raise the incentive for adversaries to weaponize them. That means investing in biodefense capabilities that assume targeted, precision attacks are plausible. Early warning, rapid genomic epidemiology, and secure biomanufacturing pipelines should sit alongside R&D on enhancements. The National Security Commission on Emerging Biotechnology has argued that biotechnology must be integrated into national security planning for exactly these reasons.
- Protect genomic and neurodata like armaments. The privacy of genetic, microbiome, and neural datasets is not just a civil rights matter. Those data carry operational value and risk. Policies should limit centralized repositories that could be stolen or abused, mandate robust encryption and access controls, and treat certain datasets as high-value assets requiring the same supply chain security as weapons systems.
- Build international norms and red lines. NATO has already moved to articulate a strategy for biotechnology and human enhancement that emphasizes lawful, ethical use and protections against misuse. Those conversations need acceleration and expansion. The international community must codify boundaries against coercive biological alteration and against the development of bespoke agents designed to exploit enhancements. Norms matter because technical fixes alone will not stop a determined state or non-state actor.
Finally, a cultural pivot is overdue. The rhetorical rush to "make soldiers better" must be balanced by a hard-eyed realism about what better can cost. Disenhancement is not a thought experiment. It is the structural flip side of every promise offered by biotechnology and neurotechnology. If we design troops as modular nodes optimized for a narrow set of tasks, we risk creating populations of warriors whose biology is brittle against ordinary uncertainties and whose futures are mortgaged to short-term operational advantage.
The good news is that the same scientific tools that raise these risks can also build resilience. Reversible gene therapies, adaptive pharmacology, resilient supply chains, and secure, privacy-preserving neural interfaces are within reach if policy, funding, and ethics keep pace. The mandate for 2025 and beyond is not to abandon innovation. It is to pursue it with the humility to expect tradeoffs and the institutional will to limit harm when those tradeoffs go sour. That is the sober route to capability that endures rather than collapses under the weight of a newly weaponized biology.