Law and Ethics of Online Human Subject Experiments
Facebook data scientists recently conducted an online experiment on 689,003 unknowing Facebook users - likely including children under the age of 18 - to see whether they could manipulate and change user emotions. One group had positive words like “love” and “nice” filtered out of their News Feeds. Another group had negative words like “hurt” and “nasty” filtered out. The result - published in a paper entitled "Experimental Evidence of Massive-scale Emotional Contagion through Social Networks" - was that users who saw fewer positive words created fewer positive posts, and users who saw fewer negative words created fewer negative posts.
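For concreteness, here is a minimal sketch in Python of the kind of word-list filtering the paper describes. The word lists, post structure and omission probability below are illustrative assumptions on my part; the published study reportedly relied on LIWC word counts inside Facebook's News Feed ranking system, which is not public.

```python
# Hypothetical sketch of word-list feed filtering. The word lists, post
# format, and omission probability are illustrative assumptions, not
# Facebook's actual implementation.
import random

POSITIVE_WORDS = {"love", "nice", "happy"}   # placeholder list
NEGATIVE_WORDS = {"hurt", "nasty", "sad"}    # placeholder list

def contains_any(text: str, words: set) -> bool:
    """True if any target word appears in the post text."""
    tokens = text.lower().split()
    return any(token.strip(".,!?") in words for token in tokens)

def filter_feed(posts: list, condition: str, omit_prob: float = 0.5) -> list:
    """Probabilistically omit posts containing the targeted emotion words.

    condition: "reduce_positive" or "reduce_negative".
    Posts are only hidden from this one feed view; nothing is deleted.
    """
    target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    feed = []
    for post in posts:
        if contains_any(post, target) and random.random() < omit_prob:
            continue  # omit this post from the experimental feed
        feed.append(post)
    return feed

# Example: a user in the "reduce_positive" condition sees fewer positive posts.
sample_posts = ["I love this!", "Such a nice day", "Traffic was nasty today"]
print(filter_feed(sample_posts, "reduce_positive"))
```

Because the omission decision is made entirely by such a filter, no researcher ever reads an individual post - which is the basis of the "no text was seen by the researchers" claim discussed below.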
Facebook's attempt to manipulate, change and measure human emotions raises troubling legal and ethical questions. It is well to remember past abuses of human subjects in biomedical experiments, including those by the Nazis during World War II. Those abuses led to laws and ethical codes meant to assure that experiments involving human subjects are conducted pursuant to informed consent and are designed to prevent both psychological and physical harm.
Upon information and belief, the Facebook user experimentees were not informed of the experiment and did not provide informed consent to be subjects of the online experiment. Was there potential for harm in the online experiment's protocol? When you manipulate and change human emotions there is always the potential for harm. Here, the data scientists - by their own admission - were able to successfully alter users’ emotions positively or negatively by curating News Feed content.
The legal and ethical issues are disclosure and informed consent: Facebook failed to disclose the experiment and failed to obtain informed consent. In addition, most average folks do not understand that if the product is free, the product is you and your personal privacy data - data that may be shared with government, business and others, and may potentially be used against you and harm you (as well as help you). Moreover, it appears the experiment failed to use age filters and likely included users under the age of 18.
Today, Facebook's terms of service specify that users’ data may be used "for internal operations, including troubleshooting, data analysis, testing, research and service improvement." The Facebook data scientists argue their experiment falls under these terms of use because "no text was seen by the researchers" - an algorithm filtered out either positive or negative words. They assert "it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research."
However, at the start of and throughout the experiment, Facebook's "Data Use Policy" did not include "research" - it appears Facebook did not update its user agreement to include “research” until four (4) months after the experiment, and only after the U.S. Federal Trade Commission charged Facebook with "unfair and deceptive" user privacy practices.
Nevertheless, I suggest the average person cannot give legal “informed consent” through a boilerplate terms-of-use agreement. Does a private firm really have legal permission to conduct experiments on human subjects by obtaining "informed consent" in boilerplate - tucked away in small print in a section entitled "Data Use Policy" granting "permission" to use information for "internal operations" including "research" - where users must accept the entire agreement before obtaining an account?
Both law and professional conduct should require data scientists (and other researchers) conducting online experiments on human subjects to (1) make full disclosure of experiment(s) and other uses of personal privacy data and obtain individual consent in a reasonably understandable manner - separate from catch-all boilerplate terms-of-use agreements, and (2) ensure the experiment(s) are designed to prevent potential psychological and physical harm.
For example, the International Compilation of Human Research Standards describes over 1,000 laws and regulations regarding human subjects research in over 100 countries and ethical standards from international and regional organizations.
United States federal law prohibits universities and institutions that receive federal funds from conducting experiments on human subjects without clear informed consent. Following the National Research Act of 1974, the United States issued regulations for the Protection of Human Subjects, codified at Title 45, Part 46 of the Code of Federal Regulations (45 CFR 46). The regulations require that institutional review boards oversee experiments using human subjects and that informed consent include, among other elements:
(1) A statement that the study involves research, an explanation of the purposes of the research and the expected duration of the subject’s participation, a description of the procedures to be followed, and identification of any procedures which are experimental;
(2) A description of any reasonably foreseeable risks or discomforts to the subject;
(7) An explanation of whom to contact for answers to pertinent questions about the research and research subjects’ rights, and whom to contact in the event of a research-related injury to the subject;
(8) A statement that participation is voluntary, refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled.
It appears two of the Facebook data scientists conducting this experiment were employed at federally funded universities at the time of the experiment. As a result, they may have violated both federal law and the Data Science Code of Professional Conduct.
The Data Science Association "Data Science Code of Professional Conduct" states in relevant part:
Rule 1 - Terminology
(yy) "Informed consent" denotes the agreement by a person to a proposed course of conduct after the data scientist has communicated adequate information and explanation about the material risks of and reasonably available alternatives to the proposed course of conduct.
Rule 8 - Data Science Evidence, Quality of Data and Quality of Evidence
(e) If a data scientist knows that a client intends to engage, is engaging or has engaged in criminal or fraudulent conduct related to the data science provided, the data scientist shall take reasonable remedial measures, including, if necessary, disclosure to the proper authorities.
Rule 9 - Misconduct
It is professional misconduct for a data scientist to knowingly:
(b) commit a criminal act related to the data scientist's professional services;
(c) engage in data science involving dishonesty, fraud, deceit or misrepresentation.
Additionally, the Belmont Report "Ethical Principles and Guidelines for the Protection of Human Subjects of Research" summarizes basic ethical principles that surround the conduct of research with human subjects. In part, ethical principles include:
Informed Consent. "Respect for persons requires that subjects, to the degree that they are capable, be given the opportunity to choose what shall or shall not happen to them. This opportunity is provided when adequate standards for informed consent are satisfied...there is widespread agreement that the consent process can be analyzed as containing three elements: information, comprehension and voluntariness."
Comprehension. "The manner and context in which information is conveyed is as important as the information itself. For example, presenting information in a disorganized and rapid fashion, allowing too little time for consideration or curtailing opportunities for questioning, all may adversely affect a subject's ability to make an informed choice."
Respect for Persons. "In most cases of research involving human subjects, respect for persons demands that subjects enter into the research voluntarily and with adequate information."
Beneficence. "Persons are treated in an ethical manner not only by respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being... general rules have been formulated as complementary expressions of beneficent actions in this sense: (1) do not harm and (2) maximize possible benefits and minimize possible harms."
Facebook chief operating officer Sheryl Sandberg - while apologizing for "poor communication" - attempted to confuse the issue and justify the experiment by stating, "This was part of ongoing research companies do to test different products, and that was what it was." Yet the online experiment was secret and was designed to intentionally manipulate, change and measure user emotions.
Note that online experiments involving human subjects are distinguishable from "A/B testing," which shows users different versions of the same thing to measure product effectiveness. Here, the Facebook data scientists intended to manipulate and change human emotions - and in fact succeeded. See: "Experimental Evidence of Massive-scale Emotional Contagion through Social Networks".
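To make that contrast concrete, here is a minimal sketch of a routine A/B test. The user IDs, variants and click-through metric are hypothetical; the point is that both arms see functionally equivalent versions of the product, and the outcome measured is product effectiveness rather than the user's emotional state.

```python
# Minimal sketch of a routine A/B test, for contrast with the emotion
# experiment. User IDs, variants, and the metric are hypothetical.
import hashlib

def assign_variant(user_id: str, experiment: str = "button_color") -> str:
    """Deterministically assign a user to variant A or B."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def click_through_rate(clicks: dict, impressions: dict) -> dict:
    """Compare a product-effectiveness metric (CTR) across variants."""
    return {v: clicks[v] / impressions[v] for v in impressions if impressions[v]}

# Example: assign users and compare click-through rates for two button colors.
users = [f"user{i}" for i in range(10)]
print({u: assign_variant(u) for u in users})
print(click_through_rate({"A": 30, "B": 42}, {"A": 1000, "B": 1000}))
```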
A colleague writes: "I find it funny that there is an expectation of ethics from Facebook, whose sole purpose is to exploit, manipulate, package and "feed forward" their users to marketing and other 3-letter agencies for monetary gain. The biggest social experiment is demonstrating how effortlessly one can get the general public to expose themselves (their intimate, private, frivolous, mundane and random selves) and share their associations with the entire world -- all voluntarily."
Again, the ethical and legal issues are disclosure and informed consent, and whether the experiment was designed to prevent both psychological and physical harm. In this case the Facebook data scientists failed to comply with basic ethical principles regarding human subjects.
Yet my colleague raises another set of troubling legal and ethical questions regarding the actual practice of using personal private data by government agencies and private businesses. Again, the average reasonable person does not know that if a product is free, the product is their personal privacy data - data that may be shared and potentially used either to help or to harm them.
Businesses like Google, Amazon, Facebook, Yahoo and others are already conducting online controlled experiments without users' (human subjects') knowledge or consent. We do not know about them because the experiments and their results are not published for public consumption.
We need to have a public debate about acceptable personal privacy data collecting and sharing - as well as acceptable conduct for data science experiments conducted by government and private businesses, especially online controlled experiments involving human subjects - and codify acceptable (ethical and legal) practices into laws and professional codes of conduct. Furthermore, we need to carefully consider the ethics, legality and potential negative consequences of new financial trading, credit-scoring, human resource hiring and firing, political, marketing and advertising data science techniques (e.g., algorithms, machine learning, etc.) that rely on personal private data.
Comments
esuela
Fri, 2014/07/04 - 5:59am
How to consider control groups?
On one hand, I agree that these kinds of tests should be under control.
On the other, if we stick to the letter of our regulations, I have some serious concerns about the so often used control groups. Shouldn't they be considered experiments as well?