Misuse Made Plain: Evaluating Concerns About Neuroscience in National Security

AJOB Neuroscience 1(2): 15–17. Published online 16 April 2010. https://doi.org/10.1080/21507741003699447

Kelly Lowenberg, Stanford Law School Center for Law and the Biosciences
Brenda M. Simon, Stanford Law School Center for Law and the Biosciences
Amy Burns, Stanford Law School
Libby Greismann, Stanford University
Jennifer M. Halbleib, Stanford Law School
Govind Persad, Stanford Law School
David L.M. Preston, Stanford University
Harker Rhodes, Stanford Law School
Emily R. Murphy, Stanford Law School and Director of SIGNAL

This commentary is the result of a workshop sponsored by the Stanford Interdisciplinary Group in Neuroscience And Law (SIGNAL), supported by the Stanford Law School Center for Law and the Biosciences.

Address correspondence to Kelly Lowenberg, Stanford Law School, Center for Law and the Biosciences, Stanford, CA 94305, USA. E-mail: [email protected]

We agree with Marks's (2010) core assertion that science should not be misused for national security ends. We are still left, however, with the big question: What counts as "misuse"? In this open peer commentary, we categorize the possible "neuroscience in national security" definitions of misuse and identify which, if any, are uniquely presented by advances in neuroscience. Ultimately, we conclude that while national security is often a politicized issue, assessing the state of scientific progress should not be.

To define misuse, it is helpful to define what we would consider appropriate use: the application of reasonably safe and effective technology, based on valid and reliable scientific research, to serve a legitimate end. This definition presents distinct opportunities for assessing misuse: misuse is the application of invalid or unreliable science, or the use of reliable scientific methods to serve illegitimate ends. The assessment of each further depends on the specific context in which a technology is being used. Within the domain of national security, various contexts exist, each with different standards for evaluating the scientific basis of the technology and whether its use is justified.

SCIENTIFIC VALIDITY AND RELIABILITY

In assessing whether a technology is prematurely applied, a threshold concern is whether the science itself produces sufficiently valid and reliable results for the specific application. For neuroscientific applications to national security, the gaps between research and valid application have already been identified in a recent publication by an interdisciplinary group of experts (see Canli et al. 2007).

Validity and reliability questions are important in assessing the potential use of functional magnetic resonance imaging (fMRI)-based lie detection, where it is presently unclear how reliable the results are outside of the controlled laboratory context (for a review, see Greely and Illes 2007). Canli and colleagues (2007) articulate a concern about "study designs that assume experimental control over stimulus conditions, subject selection, and participant cooperation that may not exist in field applications." This concern generalizes to Marks's area of interest: How valid and reliable fMRI-based lie detection would be in a high-pressure national security context is far from clear, but it is ultimately a scientific question.

Furthermore, how reliable does a technology need to be for national security experts to use it appropriately? Context is central to a complete answer to this question. For example, most U.S. jurisdictions do not permit polygraph evidence to be admitted in court because of concerns about reliability,[1] but a technology need not meet that same standard to be used during an investigation. Police officers, not just military interrogators, regularly use polygraphs as an investigatory tool. Most would hardly consider police use of polygraphs to be "misuse" per se, perhaps because the polygraphed person consented to participate, often after consultation with a lawyer. Thus, a technology that is not reliable enough for use in some contexts might still be acceptable in others without arousing serious ethical concerns.

Marks is a self-described "neuroskeptic," meaning he questions whether neuroscientific technologies can actually do what their adopters claim. His article, however, does not discuss the validity of claims about neuroscientific technologies, nor does it revisit the research-application gaps unique to neuroscientific technologies.[2] Instead, Marks discusses the failings of other tools used in the service of national security: polygraph errors, incorrect interpretations of satellite imagery, and enhanced interrogation resulting in false confessions. With regard to the neuroscientific tools he mentions (neuroimaging and psychoactive drugs), he focuses on how persuasive their results will be, but not on whether those results will be right. Perhaps the views in his paper could be better described as "neuroconcerned": concerned about whether even scientifically reliable neuroscience could be used to cause harm or further harmful ends.

LEGITIMACY OF MEANS

Contexts within national security scenarios will affect whether applying a technology is a misuse. Did the person consent to the procedure? Is it a part of interrogation (see Thompson 2005)? Will it be presented as evidence to decision makers in policy or law? Or will it be used to justify harsh physical measures for extracting further information, perhaps functioning as a pretext to apply methods that would otherwise be considered illegitimate (see Marks 2007)? These contexts produce different outcomes; one would expect a higher standard of scientific reliability for technology used to produce evidence shown to a decision maker than for technology used to produce data that merely create leads in an investigation.

Independent of whether the technology works well enough for a particular purpose, we must also determine whether the technology is inherently harmful. For example, the procedure may be unsafe, painful, or invasive, or may cause some other harm. Marks draws analogies between the military's exploitation of neuroscience and enhanced interrogation techniques, which he says resulted from the exploitation of behavioral psychology. Regardless of whether enhanced interrogation techniques deployed science prematurely or resulted in false confessions, many people would find them objectionable because they cause a person to be in fear or pain. The neuroscientific technologies Marks discusses are not equivalent to enhanced interrogation techniques in this respect.

Neuroscientific technologies, however, may be considered inherently harmful as means because they are uniquely violative of a person's privacy and perhaps unduly coercive. Strategically intervening in brain processes to create feelings of trust or to extract truthful statements is coercive in a way that undermines an individual's autonomy. Additionally, extracting information about an individual's unspoken thoughts directly invades spaces believed to be the most private. In this respect, applications of neuroscience may be more susceptible to misuse than other types of technology. However, as with validity and reliability standards, how much coercion or intrusion into privacy is appropriate will depend on the context.

LEGITIMACY OF ENDS

Finally, misuse may be found if the ultimate ends that a technology serves are illegitimate. Offering the results of a scientific analysis to intentionally mislead courts or decision makers would always be a misuse of the technology. When scientific findings are intentionally skewed, however, the problem is not the technology itself, but rather the officials who manipulate the truth for political purposes.

Beyond using results to intentionally mislead, it is less clear what purposes should be deemed illegitimate. Who should decide what uses of neuroscience are legitimate, particularly with respect to national security? Marks suggests that we "should empower the public to challenge decisions regarding the development and application of neuroscience, and engage with them in figuring out the road ahead" (Marks 2010, 4). Certainly, we should encourage the public to think critically about scientific findings, but national security uses of science are often likely to be classified and not subject to public debate. Because they are often opaque, national security uses of neuroscience may be more susceptible to misuse than applications in other domains.

Furthermore, what role should the public have in determining the direction of scientific research and technology development? Although much of the funding is collected through taxes, relying too much on public opinion about support for research and development may unnecessarily politicize those decisions. Determining what applications of neuroscience are appropriate will not be as simple as determining what uses are popular.

Marks also suggests that the powerful are more likely to exploit neuroscience against the powerless than against themselves, and that society should reconsider neuroscience as a way to help disenfranchised and impoverished persons. Nontherapeutic neuroscience research aims to gain knowledge about the brain, which may be applied in a number of ways, some of which could help marginalized persons. Whether research fits with an overriding political agenda, however, should not be the ultimate test of whether it is appropriate. Certainly, we would be concerned if all neuroscience research had to be justified by an overriding national security goal. If it is inappropriate to imbue a field of basic research with a political agenda, switching politics does not remedy that concern.

CONCLUSION

Canli and colleagues mapped the terrain of neuroscience and national security in The American Journal of Bioethics three years ago. The technologies have moved forward, but the framework they established for considering the issues still applies. Canli and colleagues concluded that neuroscience might be misused in national security, but also that it might be used ethically and appropriately. Marks repeats the first conclusion by examining the risks presented by the persuasive effect of neuroscientific technologies, but he does not address the second, and he offers no useful path forward beyond a general "neuroskepticism." Skepticism is a useful tool for reaching a balanced and rigorous assessment of a technology and its use, but it is a method, not a conclusion. If neuroscientific technology is used in the service of national security, careful consideration must be given to whether that use is appropriate. What constitutes inappropriate use (misuse) depends much more on the circumstances of use than on whether the technology arises from neuroscience specifically or from another field of scientific research.

REFERENCES

Canli, T., S. Brandon, W. Casebeer, et al. 2007. Neuroethics and national security. American Journal of Bioethics 7(5): 3–13.
Greely, H. T., and J. Illes. 2007. Neuroscience-based lie detection: The urgent need for regulation. American Journal of Law and Medicine 33: 377–431.
Marks, J. H. 2007. Interrogational neuroimaging in counterterrorism: A no-brainer or a human rights hazard? American Journal of Law and Medicine 33: 483.
Marks, J. H. 2010. A neuroskeptic's guide to neuroethics and national security. AJOB Neuroscience 1(2): 4–12.
Thompson, S. K. 2005. The legality of the use of psychiatric neuroimaging in intelligence interrogation. Cornell Law Review 90: 1601.

NOTES

1. But 18 states allow polygraphs to be used as evidence if both parties stipulate to their admission. New Mexico is the only state in which polygraphs are presumptively admissible without the parties’ stipulation: State v. Dorsey, 88 N.M. 184 (1975).

2. Marks may have excluded that discussion because he suspects that the national security enterprise may not care how well such technologies work. For example, Marks described how military interrogators were unconcerned that the manner in which they used polygraphs could not produce reliable results (Marks 2010, 4).