The Pentagon’s expanding work in neuroscience in recent years has focused on medical applications, like research to understand traumatic brain injury and on concepts intended to help the military fight wars more effectively, such as studying ways to keep soldiers’ brains alert even after days without sleep.
But under the rubric of “Augmented Cognition,” DARPA has also pursued a number of military technologies, like goggles that would monitor a soldier’s brain signals to pick up potential threats before the conscious mind is aware of them.
Some of the applications, like mind-controlled drones, might be a generation away or may never arrive; others, like the brain-monitoring goggles, are already in testing (though probably not ready for use in the field).
This raises questions from ethicists, who are pushing for the government to begin thinking now about “neuroethics.” In a 2012 article in the journal PLOS Biology, Jonathan Moreno, a professor of medical ethics, and Michael Tennison, a professor of neurology, argued that many neuroscientists don’t think about the contribution of their work to warfare, or consider the ethical implications of such work.
The question they raise is what choice future soldiers might have in such cognitively enhanced warfare. “If a warfighter is allowed no autonomous freedom to accept or decline an enhancement intervention, and the intervention in question is as invasive as remote brain control,” they write, “then the ethical implications are immense.”