I just attended the weekly Symbolic Systems Forum at Stanford. This week’s talk, NeuroEthics: Science, Ethics, and Law, was given by Hank Greely of the Stanford Law School. Here’s the abstract:
“Neuroscience is in the midst of a revolution that is transforming our knowledge of the human brain and how it works. Our ability to predict future mental illness, neurological disease, or personality characteristics is expanding dramatically. We seem likely to be able to use devices to “read minds,” by directly detecting brain activity that is correlated with various mental states. And drugs and devices, developed to help the injured or ill, hold out the possibility of “enhancing” human brains with unprecedented powers. This talk will describe those advances and the legal, ethical, and social issues they pose.”
The ethical issues related to this technology are very interesting to me, mainly because they are so deeply intertwined with design. This technology, in as little as a few years, will begin to force the entire design community to consider the ethics of situations that simply haven’t existed before.
What happens when we as designers have the ability to know things about the users of our services that they don’t know about themselves?
At the beginning of his talk, Hank outlined the three major areas of neuroethics:
- Research ethics, or what is the ethical thing to do with the information you gain about the subjects of, say, a study employing functional magnetic resonance imaging (fMRI)?
- Neuro-economics, or what happens in the brain when we make decisions?
- Social implications, which is the most interesting area for both Professor Greely and me. This area deals with topics like prediction of behavior, mind reading, and body enhancement.
A particularly interesting subtopic in the area of mind reading is that of reading emotions. How exciting! Imagine the accuracy with which you could design a service to elicit a certain emotion if you could read users’ emotions directly from their brains.
Of course, this is currently completely impractical, but only because of rapidly vanishing constraints, such as the need to get people into an fMRI machine and have them perform tasks they’d probably never do otherwise. During his talk, Hank outlined several new technologies that could be used for the purpose of lie detection – some of them could even be administered without the knowledge of the subject!
As designers, we pride ourselves on being a force for positive change in the world, so I have to think that there are others out there who care about the ethics of our profession. Unfortunately, it doesn’t seem like anyone is doing much talking about it.
Anyone have any pointers for me?
Serendipity: As I sat here outside a cafe on University Avenue in Palo Alto, Brian Knutson walked past me and into a restaurant a few doors down. If you’re unfamiliar, Brian’s primary research interest is in the neural basis of emotion.