This past summer I spoke with Sheila Jasanoff, Pforzheimer Professor of Science and Technology Studies at the Harvard Kennedy School, about her latest book, The Ethics of Invention: Technology and the Human Future (W.W. Norton & Co., 2016). A pioneer of science, technology, and society programs and the founder of Cornell University’s prestigious STS department, Jasanoff has made her lifework the exploration of the relationship between science and technology and the law, politics, and government policy.
I had a lot of questions. Historically, scientists and engineers have distanced themselves from the social implications or unintended outcomes of their work. Can this mind-set be changed? Will it ever be possible to get out in front of the unintended consequences of new technologies, or are we doomed to a hamster wheel of innovation, disaster, and remediation? Why is it so unusual for engineers and scientists to weigh in on the potential problems associated with their inventions or discoveries in advance of their widespread dissemination?
Well, as Jasanoff pointed out, an enormous amount of human thought and energy in the two centuries since the Enlightenment has gone into creating and sustaining the idea that scientific discovery and technological innovation are in and of themselves “value neutral.” It then stands to reason that if you incorporate values into science and technology, you have corrupted the process. Extraordinary benefits have resulted from this value-neutral approach to science and engineering. But it has also produced problems, sometimes big ones.
Most STS scholars will tell you that a lot of science and technology is value laden at the outset. Jasanoff goes a step further: She says that technology can never be neutral because it is always informed by a desired future that does not come from within the technology itself but from societal ideas about what “the good” is. According to Jasanoff, we construct ideas of goodness first, and then innovation and discovery follow that trajectory.
STS programs try to bridge the gap between technological innovation and social outcomes. But many science and engineering students still view these programs as liberal arts havens for athletes and English majors trying to complete their undergraduate science requirements without actually running into any science or engineering. Or they’re seen as time sinks, not something essential and useful.
Something is getting lost in translation. Perhaps the fact that many of these departments are situated in the liberal arts colleges of universities reinforces the idea that technological policy issues are someone else’s problem. Perhaps these departments are focused away from the central disciplines of science and engineering, like math and computer science, when they should be leaning into them.
But there’s no shortage of creative STS program developments. For example, the University of Virginia’s program is completely embedded in the engineering school, and every UVA engineering student has to have an STS component in his or her thesis. Another noteworthy example is the program at Stanford University, where STS undergraduates must achieve a solid understanding of the fundamentals of an area of engineering or science to complete the major.
Oddly enough, Jasanoff’s home institution, Harvard, has no stand-alone Ph.D. degree program in STS. Perhaps Harvard could try embedding one in the School of Engineering and Applied Sciences. The John A. Paulson school is awash in inventions and innovations with potentially enormous social, economic, and political impact. An STS program positioned in the midst of all that fast-paced work might be able to achieve some remarkable good indeed.
In the 20th century it was possible for scientists and engineers to plead ignorance. In many ways we really didn’t understand the impact of the inventions we made until we saw them in action. But now, well into the 21st century, there can be no excuse for failure on the part of technologists to imagine—and take responsibility for—the futures they are creating for us all.