Facebook’s responsibilities to research subjects

by Libby Bishop

Amid the latest privacy kerfuffle, in which WhatsApp agreed to share users’ data with its parent company Facebook, an article by Jackman and Kanerva in the Washington and Lee Law Review Online describing new procedures for research review at Facebook could be deemed inconsequential or, at best, ironic. Even readers familiar with the outcry over Facebook’s “emotion contagion” experiment might conclude, with boyd (2015), that Institutional Review Boards are not the solution (IRBs are the committees that assess the ethics of federally funded research in the U.S.), and move on to the next item in their newsfeed. That would be a mistake, for there is more at stake here. First, Facebook has over 1.6 billion users, all of whom are potentially its research subjects and thus would be affected by these procedures. Second, the authors hope the principles they present will “inform other companies” (Microsoft has also recently formed a review group: https://vimeo.com/134004122). Most important, however, this new system at Facebook provokes urgent questions about the role of review systems in achieving ethical research.

The Facebook contagion experiment

In 2014, researchers at Facebook and Cornell University published research providing evidence that online social networks can transmit large-scale emotional contagion (Kramer et al., 2014). The experiment demonstrated that reducing positive inputs to users’ feeds resulted in users posting fewer positive and more negative posts; when negative inputs were reduced, the pattern was reversed: more positive and fewer negative posts. Kramer et al. emphasised the significance of their findings: emotional contagion had been shown to occur without face-to-face contact and non-verbal cues. The change was small but statistically significant. Moreover, the authors pointed out that small changes can have “large aggregated consequences” (the sample size was 689,003), in part because of connections between emotions and off-line behaviour in areas such as health.

The import of the findings was swamped by the ensuing public outcry about the methodology, in particular the manipulation of users’ feeds, and hence emotions, without their consent. But a key question that emerged was that of research review: had the project been subjected to any formal ethical review, and if not, why not? The editors of the journal where the article had been published wrote an Editorial Expression of Concern (Verma, 2014) stating that Cornell had confirmed that the research did not fall under the purview of its Human Research Protection Program because the experiment had been conducted at Facebook, not Cornell. Furthermore, because the research was not federally funded, it was not required to go through an IRB (boyd, 2015).

Facebook's new research review process

Earlier this year, Jackman and Kanerva, two Facebook managers, published an article describing the research review process now implemented at Facebook (Jackman and Kanerva, 2016). Facebook had announced an internal review process in October 2014, four months after the contagion experiment was published, but had provided no detail about it. Although their article does not mention the contagion experiment, it seems highly likely that the new review process is a response to that controversy. Both authors were hired by Facebook in 2015: Kanerva previously managed an IRB at Stanford University, and Jackman completed her Ph.D. in political science in 2013, also at Stanford.

Their article describes and defends the research review process. Training “related to privacy and security” is deemed central to an effective review process, and three levels are offered: 1) all employees receive mandatory “onboarding” (or “socialization” about data access and privacy); 2) researchers working with data attend “bootcamp”; and 3) members of the research review group (and area experts) must take the human subjects training provided by the National Institutes of Health. Managers of the Facebook research team decide on the appropriate level of review: ‘expedited’ (also called standard) or ‘extended’. Extended review includes the researcher and adds other Facebook experts, in law, policy, and other areas. There are no standing external (non-Facebook) members, but external experts can be called in if needed.

In describing the decision-making procedures, Jackman and Kanerva say “our basic formula is the same as an IRBs [sic]: We consider the benefits of the research against the potential downsides.” While stressing repeatedly that there is no “one size fits all” and that every proposal is different, they enumerate four criteria that are taken into consideration for any research. First, “we consider how research will improve our society, our community, and Facebook”. Second, any “potentially adverse consequences”, such as risks to privacy and security, are taken into account. Third, the research needs to be “consistent with people’s expectations”. Finally, they take “precautions designed to protect people’s information.”

Unanswered questions about Facebook's research review process

When introducing a new programme, one that has been implemented but probably not yet extensively used, a lack of exhaustive detail can be forgiven. Nonetheless, there are some ambiguities and omissions compared with similar procedures at university IRBs and, in the U.K., research ethics committees.

Jackman and Kanerva conclude their article with “lessons learned” from their experiences at Facebook, which they hope will inform others creating similar review processes. They identify “inclusiveness” as one of these key lessons, saying “including researchers and managers in the deliberations leads to faster turn-around and more informed decision-making”. This strongly implies that the managers whose projects are under review sit as members of the group assessing their own projects, but that is not made explicit in their discussion. They say the expert group works by consensus. Does the need for consensus include external members? If external and internal members disagree, there is no indication of how this would be resolved. The authors also mention the existence of a separate ‘privacy group’, but provide no further detail. What if the privacy and research groups recommend different levels of review?

Concerns about openness, independence, and conflicts of interest

The authors suggest that “openness” is a core value; it is itemised as the second “lesson learned” in the introduction (p. 445). But oddly, openness is never mentioned again in the article. In the conclusion, the second lesson learned has become “inclusiveness” (p. 456), demonstrated by the claim that the activities of the group are accessible. However, they are accessible only to Facebook employees; there is no mention of any access for customers, users, or the public. Despite the openness claim, the names of those in the review group are not disclosed. This is quite different to the practice in universities: the University of Essex Research Ethics Committee (on which I sit) publishes its members’ names, and even private U.S. universities, such as MIT, openly publish the names of committee members.

Such limited openness would be less troubling if it were not accompanied by questions about independence. According to The Wall Street Journal, Facebook managers have the authority to approve projects and sit on decision-making groups. More broadly, the Facebook process has to be regarded as a form of self-regulation, as no external assessment is required at any stage. Again, this differs from at least some university procedures (e.g., at the University of Essex), where regular external audits are required. Garfinkel understates the situation: “self-regulation does not have a good track record”; the current review regime of the Common Rule and IRBs emerged in response to egregious failures of self-regulation in Nazi Germany, at Tuskegee, and elsewhere.

Perhaps the most serious concern is that Jackman and Kanerva do not address the possible conflict of interest that members of the review group may face. If proposed research is ethically dubious yet clearly stands to benefit Facebook financially, how might this be addressed? While “improved products and services” are noted as company objectives, there is no mention of profit, revenue, or advertising income. If, as seems nearly certain, all Facebook employees have incentives (direct or indirect) to enhance its share price, then what structures are in place to ensure that research subjects’ privacy and other interests are not compromised? As commentators on the WhatsApp controversy noted, Facebook has now provided numerous examples of putting the “company’s needs ahead of its users’”, so such concerns are not idle speculation.

It is not only the IRBs that need to change

There is a rapidly growing body of literature on how to improve the ethics of big data research (Metcalf, et al., 2016), including suggestions for how IRBs and researchers should attend to the conceptual gaps that big data presents for established ethical frameworks and procedures (Zimmer, 2016). An equally important, but less discussed, point is that accommodation must also go in the opposite direction: big data researchers need to acknowledge that although their data may be “big” and take “new and novel forms”, the underlying ethical questions have, in all likelihood, already been posed in some form. And effective review processes have protocols and procedures that enable broad debate, and deep moral reflection, about these questions.

I believe this awareness is missing from the process presented by Jackman and Kanerva. One indication is the sentence quoted above: “our basic formula is the same as an IRBs [sic]: We consider the benefits of the research against the potential downsides.” This reflects one branch of moral theory, consequentialism, which judges moral decisions by their results, their consequences. Cost-benefit analysis is derived from utilitarianism, a form of consequentialist moral theory.

Beyond cost and benefit

Jackman and Kanerva make no reference to the alternative theories or frameworks that are, in most applied ethics teaching, seen as essential to good moral reasoning. A key omission is approaches that emphasise rights and duties (i.e., deontological moral theory). This matters because while utilitarian approaches are most useful for ensuring that acts benefiting the many are suitably weighted, rights-based approaches ensure that the rights of the few are not trampled by the gains of the many.

Jackman and Kanerva are not unaware of other perspectives; they reference publications that draw on such rights-based theories. In footnote 33, they cite a publication by the European Data Protection Supervisor, “Towards a New Digital Ethics: Data, Dignity, and Technology”, which proclaims:

“The dignity of the human person is not only a fundamental right in itself but also is a foundation for subsequent freedoms and rights, including the rights to privacy and to the protection of personal data.”

The challenge, then, is how to implement research review so that it brings to bear not only thinking about the costs and benefits of research, but also the fundamental dignity and rights of persons.

To be clear, I am not advocating that research review become a seminar in philosophical theory (although there are worse ideas). But research review can expose researchers to broader concerns, a variety of disciplinary perspectives, and diverse ethical frameworks and theories, all of which enhance the quality of both the assessment process and the decisions reached. Garfinkel explains in more detail:

“IRB-mandated education and training provides many scientists the background information and intellectual framework to make sophisticated ethical decisions—training that many data scientists might not otherwise receive.”

It is just this quality of thinking and reflection that might have brought issues such as conflicts of interest to the foreground in the development of Facebook's new process.

The final lesson Jackman and Kanerva present is “flexibility”. They claim to be committed to incorporating feedback and “improving our research process over time.” Whether or not we are among Facebook’s 1.6 billion potential research subjects, let us hope they mean what they say.

* * *

References

boyd, d. (2015). “Untangling Research and Practice: What Facebook’s ‘Emotional Contagion’ Study Teaches Us.” Research Ethics, 12(1), 4-13.

Jackman, M., & Kanerva, L. (2016). “Evolving the IRB: Building Robust Review for Industry Research.” Washington and Lee Law Review Online, 72(3), 442.

Kramer, A. D., Guillory, J. E., & Hancock, J. T. (2014). “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences, 111(24), 8788-8790.

Verma, I. (2014). “Editorial Expression of Concern and Correction.” Proceedings of the National Academy of Sciences, 111(29), 10779.