Over the past few days, Internet users have done what they do best: rage over some controversy. In this case, the controversy hinges on a paper published in the Proceedings of the National Academy of Sciences by three researchers: one of whom works at the University of California, San Francisco, one at Cornell University, and one at Facebook. In the paper, the researchers report on a study conducted on 690,000 Facebook users in early 2012, in which some users were shown more negative content in their news feeds and others were shown more positive content. The researchers wanted to test the theory that emotions are contagious, and they found that, indeed, users who were shown more negative content were more likely to make negative posts. (Here's the paper.)
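The study's basic design can be sketched in a few lines. To be clear, this is purely illustrative and not Facebook's actual code: the post format, the sentiment labels, and the `filtered_feed` function are all assumptions made for the sake of the sketch.

```python
import random

# Illustrative sketch of the study's design (not Facebook's actual code).
# Each post is a (text, sentiment) pair, with sentiment in {"pos", "neg", "neutral"}.

def filtered_feed(posts, condition, omit_prob=0.3, rng=random):
    """Return a feed with some emotionally loaded posts withheld.

    condition="reduce_negative" drops each negative post with probability
    omit_prob; condition="reduce_positive" does the same for positive posts.
    The measured outcome is then the emotional tone of the user's own
    subsequent posts, compared across conditions.
    """
    target = "neg" if condition == "reduce_negative" else "pos"
    return [post for post in posts
            if post[1] != target or rng.random() >= omit_prob]
```

The key point for the controversy is the asymmetry: the user sees a feed, but not the filter that produced it.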
Most of the squabbling over this study has focused on the question of its morality, and as with all squabbles, there are several different positions on the matter. Some Facebook users clearly feel violated by this study. They think it's creepy. Others argue that users shouldn't feel violated, or at least if users want to feel violated, they should realize that Internet and social media companies are doing this kind of thing all the time. Web and social media firms often conduct what are called "A/B tests," in which one group of users is shown one thing and another group is shown a second thing, to improve user experience and—more important—to, like, GET MORE CLICKS. In other words, this second group looks at the Facebook experiment and says, "Same as it ever was." Some go even further and say that not only are experiments like this not new but also that the findings aren't all that new. As Katherine Sledge Moore, a professor of psychology at Elmhurst College, told the BBC, "The results are not even that alarming or exciting."
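For readers unfamiliar with the mechanics, a minimal A/B test can be sketched as follows. The user ids, the hash-based bucketing, and the click-log format are all illustrative assumptions, not any particular firm's system.

```python
import hashlib
from collections import defaultdict

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into variant "A" or "B".

    Hashing the id (rather than flipping a coin per visit) keeps each
    user in the same bucket for the duration of the experiment.
    """
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def click_through_rate(events):
    """events: iterable of (user_id, clicked) pairs.

    Returns the fraction of impressions that were clicked, per variant.
    Comparing the two fractions is the whole point of the test.
    """
    shown = defaultdict(int)
    clicks = defaultdict(int)
    for user_id, clicked in events:
        variant = assign_variant(user_id)
        shown[variant] += 1
        clicks[variant] += int(clicked)
    return {v: clicks[v] / shown[v] for v in shown}
```

In practice the variants differ in something concrete (button color, ranking algorithm, feed composition), and the winning variant ships to everyone.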
A slightly deeper level of questioning has focused on the academic research ethics of the paper. Academic psychological experiments require consent on the part of the research subjects and (to my knowledge) some form of debriefing, especially when the subjects are put through a negative experience. Facebook claims that users had already consented to the research done in the study when they agreed to the company's Terms of Service (ToS), the long document we all click through when signing up for services like Facebook, which hardly anyone ever reads but which, it turns out, contains a clause saying that the firm can utilize user information for research. But it's not as if Facebook users agreed to this particular study, and (again, to my knowledge) academic research ethics generally does not accept the kind of standing consent entailed in the ToS. Given this seeming misalignment between academic research ethics and Facebook's ToS, it's unclear how this study made it past the research standards of the journal or the institutional review boards (IRBs) at the two universities. We can be sure that it did make its way past such ethics filters, however. Perhaps the decisions to OK this study will now become controversial at the journal or the two universities, but I doubt it. We shall see.
In this post, I would like to focus on a slightly different issue than any of the ones I have just mentioned: how the researcher at Facebook fits into the history of corporate R&D and what this might have to do with anxieties about the study.
One question has continuously hovered in the background of this controversy: why would Facebook bother publishing anything in an academic journal? Well, perhaps the company is seeking some new form of scholarly legitimacy or cultural capital. Maybe. But does a multibillion-dollar enterprise like Facebook really care about being taken seriously by academics? I doubt it. Perhaps Facebook believes that the research it is doing can benefit the world. I doubt that's the motivation either.
I think that the answer may lie elsewhere.
In the past two decades, companies such as Microsoft, Yahoo!, Facebook, and Twitter have opened research divisions. What do researchers at these places study? Well, many things, but one of the things they study is us. Whereas the traditional R&D labs of the 20th century, at places like AT&T's Bell Labs and DuPont, focused on physics and chemistry and the other sciences that could produce new materials and technologies to increase the companies' bottom line, the R&D groups at these digital technology firms focus on human behavior. We are their material. We are the way to increase their bottom line. Especially our clicking and swiping fingers. Consequently, these new Web 2.0 R&D labs have been hiring academically trained, often PhD-bearing social scientists, including social psychologists, sociologists, anthropologists, and scholars of communications studies. And you can imagine that these social scientists are making $BANK$, at least compared with what they would make at a university.
Companies should always be wary of hiring academics, however. We come with baggage. This was even more true when the first R&D labs were formed in the early 20th century. Scientists who went to work for such labs very often faced being blacklisted and struck from the registers of institutional science, never to work in academia again. Corporations offered more money, but scientists also worried about their prestige. To attract researchers, the corporations created academic settings within them, including libraries, seminars, and other trappings of the scholarly life. Still, the scientists wanted more: they wanted to publish, the primary means of securing an academic reputation. Corporate executives had mixed feelings about publication, and often placed limits on what went out, but they did go along with some publishing.
While the social scientists at Web 2.0 firms are enriching themselves (again, at least in comparison with university compensation), they likely also want to see themselves as legitimate social scientists and to have the status and prestige that comes from publishing and performing in front of one's peers. I imagine that this desire at least partly answers the question, why would Facebook bother publishing anything in an academic journal? It did so because its people want to do so, and it needs to keep its people happy, or they will leave.
Viewed in this light, part of the controversy surrounding the Facebook study arises from blurring the boundary between academic social science and corporate strategy. My guess is that this boundary and the moral and political issues that surround it will only get messier with time. More problems are a comin'. I could go on about this forever, but I will limit myself to a few points.
First, as more of these types of studies are published and researchers increasingly blur the line between Web 2.0 market research and academia, I think we will hear more questions about how trustworthy the published results are. Physics done by corporate researchers is still physics (though perhaps academic physicists critical of their corporate peers didn't always think so). Social science, however, often has a short shelf-life, and its results are seemingly more open to manipulation for political or economic ends. (My friends and colleagues will justly quibble with this fast and dirty distinction between the natural and social sciences, but there it is.) As social media firms publish more research, we may see people call its credibility into question.
Indeed, I think we've already seen cases of this questioning. One example is danah boyd's book, It's Complicated: The Social Lives of Networked Teens. boyd received her PhD from the UC Berkeley School of Information, and she works at Microsoft Research. In It's Complicated, boyd argues that teens' social media use is simply an extension of ordinary adolescent sociality, and that parents should spend less time worrying about such activity. Furthermore, she argues that parental "paternalism and protectionism hinder teenagers' ability to become informed, thoughtful, and engaged citizens through their online interactions." A PBS program probably put it best when it wrote that, according to boyd, "the kids are all right."
I know several academics who refuse to take boyd seriously because of where she works. You mean to say that someone employed at Microsoft Research is telling us that social media and digital technologies are nothing to worry about? A colleague likes to remind us, "A conflict of interest simply is the appearance of a conflict of interest. The appearance is enough." Others I know—those of a Marxist bent and those prone to conspiracy theories—go further, asserting that the only reason we are hearing about boyd in the first place is because she serves the interests of the very media that are informing us about her.
I actually like boyd's work. After teaching Computers & Society four times (a college course that examines the history, politics, and morality of computers and digital technologies), I think that boyd's account generally accords with how my students see their digital technology and social media use. In fact, I plan to use portions of It's Complicated in that course when I teach it again this fall. Yet, when we talk about the book in class, I will be sure to describe its background and have a discussion with the students about how the writer's position may have biased her assertions or at least biased our reception of them.
Second, beyond these issues of epistemology and trust, it's an open question whether Web 2.0 social science researchers at corporate labs will be able to avoid moral judgment by their peers, harkening back to the time when scientists were blacklisted for working in corporate R&D labs. In popular culture, people in marketing are seen as . . . well, something less than human. (Sleazoid pond scum?) Given that, in the end, Web 2.0 companies are advertising businesses, it is unclear whether social scientists at the firms will be able to avoid marketing's social stain—at least when it comes to the judgment of their peers.
Third, some people are already worrying that "Big Data" operations are "sucking scientific talent into big business," as my colleague and buddy, the science writer John Horgan, put it in one of his blog posts. A corollary concern might be that Web 2.0 firms will come to have an outsized influence on the direction of the social sciences connected to the companies' interests. We know how the Cold War influenced American science. Will 21st century click-baiting and P.R.E.A.M. culture (that's Page-views Rule Everything Around Me) affect the course of social science?
Finally, an issue that has already caused controversy with papers coming out of these companies: the firms might be willing to allow, or even encourage, their researchers to publish and give academic presentations based on their work, but they will be less likely to answer in the affirmative when someone asks, "Can I see your data?" The most valuable property of these firms is their information, and they will not sacrifice their core proprietary secrets for the sake of academic openness. In this way, the social scientific research produced at these corporations will violate the so-called Mertonian Norms of Science, including the norm of "communalism," the belief that scientific results (and the data they are based on) should be the "common ownership" of the scientific community.
Part of the furor surrounding the Facebook experiment has to do with the way it blurred the boundaries between corporate marketing research and academic social science. There's a good chance that it'll only get blurrier. We'll see how the different communities involved manage this disruption.