Suparna Choudhury, Max Planck Institute for the History of Science
Ian Gold, McGill University
Laurence J. Kirmayer, McGill University
When science is used for a practical purpose it can go wrong
in at least three different ways. First, theory can be incor-
rectly applied. In 1986, NASA had the necessary theory to
construct O-rings for the space shuttle that wouldn’t deform
in the cold, but that theory was incorrectly applied, and the
Challenger exploded. Second, theory can be correctly applied
in the service of an immoral act; the use of the atom bomb
provides an obvious example. Third, one can misrepresent
or overinterpret scientific theories for immoral ends. Ewen
Cameron’s notorious experiments in psychic driving were
predicated on the theory that psychosis could be cured by
reconstructing the mind from the ground up (Marks 1991).
Such experiments, clearly immoral from the vantage point
of the present, would presumably have been justified in
the eyes of some by that theory. In this latter case, a poor
theory allowed a psychiatrist—and a political agency, the
CIA—to act in a way that would otherwise not have been
contemplated.
There is no doubt that our knowledge of the brain has
exploded over the last 100 years, and, in particular, since
the end of the Second World War. But the idea that a deeper
understanding of the brain will enable us to understand
mental life remains an ideal. With the possible exception
of classical conditioning, and related forms of elementary
learning, there is no behavior, however simple, for which
we have a complete theory, much less a complete theory
expressible in neurobiological terms alone (Gold and Stol-
jar 1999). It is an entirely open question what neuroscience
will tell us about the mind, and almost any neuroscientific
finding that has a bearing on human behavior is as likely
to be revised by future research as it is to be confirmed.
It seems obvious, therefore, that our ignorance about the
neural basis of mental life should prevent us from endors-
ing any technology or public policy justified primarily on
the basis of neuroscientific research, however respectable.
The danger is not that we will make mistakes in applying
neuroscience, as the Challenger engineers did, or that we
will use neuroscience for nefarious purposes, as we did with
the A-bomb, but rather that we will merely read whatever
views we already have into the fragmented, diffuse, and
inchoate body of neuroscientific theory.
There is a particular risk associated with the interpre-
tation of neuroscience. Any inchoate science is a screen on
which we can project our beliefs and hopes. When the in-
choate science aspires, as neuroscience does, to understand
human beings, it offers us a screen on which to project our
prejudices and stereotypes about certain types of individual
or human groups. Because neuroscience purports to tell us
about the nature of human beings, it invites value-laden
interpretations of its findings. Moreover, because neuro-
science research occurs amidst, and is partly driven by, the
widespread belief in what we might call “brainhood,” the
view that the human self is nothing more than the brain,
discoveries of neural correlates of any manner of complex
behavior may seem to portend radical transformations of
our social institutions (Vidal 2009). As Marks (2010) points
out in his target article, revolutionary rhetoric about the
promise of preliminary findings, crude speculations, and
now iconic images of the brain reinforce the conviction that
neuroscience holds the answers not only to who we are but
also to how we should live.
Of course, the close alliance between current neu-
roimaging research and defense agencies cannot be ex-
plained in terms of the pliable nature of a young science
alone. Efforts to use technology to address issues of na-
tional security, including the biological study of deception,
have existed in the United States throughout the 20th cen-
tury (Alder 2007). Earlier methods of polygraphy have been
repeatedly discredited as unreliable and likely to produce
large numbers of false positives (Brett, Phillips, and Beary
1986). However, technological approaches have gained new
purchase with the shift in public attitudes since the events
of September 2001. The hope for a technological fix that
can protect us from terrorist violence has lent credence to
new functional magnetic resonance imaging (fMRI)-based lie detection services such as NoLieMRI and electroencephalography (EEG)-based brain fingerprinting, in spite of their method-
ological and conceptual continuities with outdated tech-
niques. The basic scientific rationale for lie detection has
not been radically transformed by neuroscience: The new
technologies use much the same physiological parameters
but the locus of deceit has shifted from the bodily signs
of autonomic nervous system activity to the brain itself
(Littlefield 2009). The assumptions are that strategic lying
will leave a specific brain trace that indicates knowledge
of hidden information, or a signature that can be distin-
guished from the activity associated with the complexities
of the everyday social presentation of self—despite the fact
that most people are beset by internal contradictions, ten-
sions, and ambivalence and that these very tensions and
contradictions may be greatest in situations where individ-
uals face suspicion, discrimination, and marginalization.
Despite the emphasis put on the novelty and veracity of
these “objective” methods, they are unreliable indicators of
lying.
What is notable about the new neurotechnologies
(which are found not only in the realm of defense, but also
in law, education, and medicine) is the style of thought
that characterizes their application (Choudhury and Slaby
forthcoming). Although, at present, neuroimaging research
provides us mainly with brain correlates of psychological
or behavioral functions, the neuro-image is viewed as offering penetrating insight into the “objective” functioning of the
brain. Along with this heightened sense of objectivity and
veridicality comes the idea of neuroimaging as a powerfully
predictive method. To some extent, the confidence that neu-
roimaging allows prediction may arise from a conflation of
different notions of prediction. In neuroimaging research,
the statistical results of multiple regression, used to ana-
lyze data, are sometimes presented in terms of the degree to
which one variable “predicts” another. In most studies with
cross-sectional data, however, what the regression equation
identifies are simply correlations among variables. This slip-
page in terminology from “correlate” to “predictor” has
become more prevalent in neuroimaging with the use of
specific statistical techniques (e.g., Haynes and Rees 2006).
Increasingly, neuroimaging-based diagnostics are claimed
to “predict” the risk of disease in an individual and to raise
the possibility of intervention even before a diagnosable
condition can be identified.
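To make the distinction concrete, the following minimal sketch (synthetic data and hypothetical variable names, not drawn from any actual study) shows that, in a single cross-sectional sample, the claim that a brain measure “predicts” a behavioral score is numerically identical to the claim that the two are correlated; genuinely prospective prediction would require testing the fitted model on independent data.

```python
# A minimal sketch (synthetic data, hypothetical variable names) of the
# terminological slippage discussed above: in a single cross-sectional
# regression, "brain activity predicts behavior" is numerically the same
# statement as "brain activity correlates with behavior in this sample".
import numpy as np

rng = np.random.default_rng(0)
n = 40
brain = rng.normal(size=n)                   # e.g., activation in a region of interest
behavior = 0.3 * brain + rng.normal(size=n)  # weakly related outcome measure

# Pearson correlation between the two variables.
r = np.corrcoef(brain, behavior)[0, 1]

# Simple least-squares regression of behavior on brain activity.
slope, intercept = np.polyfit(brain, behavior, deg=1)
fitted = slope * brain + intercept
ss_res = np.sum((behavior - fitted) ** 2)
ss_tot = np.sum((behavior - behavior.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot              # "variance predicted" in-sample

print(f"correlation r   = {r:.3f}")
print(f"r squared       = {r**2:.3f}")
print(f"regression R^2  = {r_squared:.3f}")  # identical to r squared above

# A genuinely predictive claim would require applying (slope, intercept) to a
# new, independently collected sample and evaluating the fit there; nothing in
# the cross-sectional analysis above licenses that inference.
```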
This use of neuroimaging for “prediction” in psychiatry
also reflects an important epistemological shift related
to the use of biomarkers for psychiatric disorders. The
identification of biomarkers, such as a particular pattern
of brain activity in a given region, does not reveal a
determinate cause of disease, but rather (when factored
into a risk algorithm) can be used as a predictor of the
potential for a disorder (Singh and Rose 2009). What
the biomarker predicts, though, is the risk of developing
a particular type of illness (in a given context), not a
specific behavior. Through the conflation of these various
contexts of prediction, however, neurotechnology acquires the purported power to determine not only who has participated in a terrorist
act but also who has the potential to commit a terrorist act
in the future (www.brainwavescience.com). With the right
picture of the brain, we can root out the “evildoer” before he
acts.
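The arithmetic of screening for rare outcomes makes plain how weak such a claim is. The sketch below (illustrative numbers only, attributed to no existing technology) applies Bayes’ rule to show that even a highly sensitive and specific marker, used to flag an extremely rare act, would identify overwhelmingly false positives.

```python
# A minimal sketch (illustrative numbers only, attributed to no actual
# technology) of the base-rate problem in screening for very rare acts:
# even an extremely accurate marker flags overwhelmingly false positives.

sensitivity = 0.90   # assumed P(marker positive | person will offend)
specificity = 0.99   # assumed P(marker negative | person will not offend)
prevalence = 1e-5    # assumed rate of the outcome in the screened population

# Bayes' rule: probability that a flagged individual is a true positive.
p_flag = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_flag

print(f"P(flagged)                       = {p_flag:.5f}")
print(f"P(true positive | flagged) (PPV) = {ppv:.5f}")  # under 1 in 1,000
```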
This fantasy of prediction is highly resonant with the
post-9/11 US security framework—which emphasizes pre-
emptive military action meant to root out terrorism before
it can strike. Of course, the possibility of labeling people not
for what they have done but for what their brain indicates
they might do at some indeterminate point in the future
completely up-ends the ethical and legal principles that peo-
ple are innocent until proven guilty and that thoughts them-
selves are not crimes. Equipped with scanners, we seem to
be heading in the opposite direction: toward presumptive
guilt and toward the possibilities of thought crimes, an Or-
wellian vision indeed.
To the extent that brain-imaging technology is thought
to provide data on which preemptive action can be taken, it
seems to offer a scientific foundation for a policing strategy.
Given the poverty of the evidence for the success of pre-
dictive lie detection, however, these neuroscientific claims
seem little more than a reflection of contemporary political
expediency. However, precisely because our actual scien-
tific understanding of the neural basis of lying is so limited,
there is little science to support or contradict the overblown
claims of those with a political agenda.
A focus on the problems inherent in the epistemic claims
for neurotechnology does not address the deeper problem of
the contexts of its use—i.e., who gets scanned or tested and
how the information is handled once it is collected. These
issues of the regulation of use are ultimately more impor-
tant than the technology itself. In focusing on the technology, we effect a double displacement: from the geopolitics of polit-
ical strife and security to the micropolitics of identifying
individuals’ potential for criminal actions, and from the mi-
cropolitics of labeling groups and individuals as high-risk
to the dynamics of brain function.
Not only does the focus on brain imaging provide an
illusory or inflated rationale for measures to ensure national
security, but more perniciously, it suggests that the solutions
to our political problems lie within the skull rather than in
the social world. Whatever one’s political orientation, there
can be no doubt that the origins of terrorism lie in social and
political processes. Technology that focuses on the brain
diverts attention from the real arena of action. The focus
on neurotechnology encourages us to interiorize a problem
that has essentially to do with how we are seen by others
and how we conduct ourselves in the political world. By
peering into the brain, we become even more blind to the
meaning of our own actions to others (Habermas, Derrida,
and Borradori 2003).
Neuroscientists—and those who value and respect what
neuroscience aspires to do—must not be afraid to admit that
neuroscience has very little to tell us at the moment about
any complex human behavior, including terrorism. The
attempt to exaggerate the utility of neuroscience by putting
it at the service of political agendas may yield short-term
benefits for researchers seeking funding and for political
ideologues who hope to capitalize on public insecurities,
but ultimately it seems likely to constitute a serious disser-
vice both to science and to global security.
REFERENCES
Alder, K. 2007. The lie detectors: The history of an American obsession.
New York: Free Press.
Brett, A. S., M. Phillips, and J. F. Beary. 1986. Predictive power of the
polygraph: Can the “lie detector” really detect liars? Lancet 1(8480):
544–547.
Choudhury, S., and J. Slaby, eds. Forthcoming. Critical neuroscience: Between lifeworld and laboratory. Wiley-Blackwell.
Gold, I., and D. Stoljar. 1999. A neuron doctrine in the philosophy
of neuroscience. Behavioral and Brain Sciences 22(5): 809–869.
Habermas, J., J. Derrida, and G. Borradori. 2003. Philosophy in a time
of terror. Chicago: University of Chicago Press.
Haynes, J. D., and G. Rees. 2006. Decoding mental states from brain
activity in humans. Nature Reviews Neuroscience 7: 523–534.
Littlefield, M. 2009. Constructing the organ of deceit: The rhetoric of
fMRI and brain fingerprinting in post-9/11 America. Science, Technology, & Human Values 34(3): 365–392.
Marks, J. 1991. The search for the “Manchurian candidate.” New York:
Norton.
Marks, J. H. 2010. A neuroskeptic’s guide to neuroethics and na-
tional security. AJOB Neuroscience 1(2): 4–12.
Prince, R. 1995. The Central Intelligence Agency and the origins of
transcultural psychiatry at McGill University. Annals of the Royal
College of Physicians and Surgeons of Canada 28(7): 407–413.
Singh, I., and N. Rose. 2009. Biomarkers in psychiatry. Nature
460(7252): 202–207.
Vidal, F. 2009. Brainhood, anthropological figure of modernity. His-
tory of the Human Sciences 22(1): 5–36.