Was the Facebook Mood Study Unethical?


Facebook data scientists recently conducted a study testing the impact of biasing the News Feed toward negative or positive posts. They were hoping to find out whether a user’s mood could be affected by the weighting of different content. But the aim of the study is no longer the story; the story is that Facebook performed experiments on its user base without their knowledge.

Since the study became common knowledge to Facebook users, Facebook staff have been apologizing profusely. “It was poorly communicated,” said Facebook COO Sheryl Sandberg. “And for that communication we apologize. We never meant to upset you.” But Facebook has done a lot of apologizing in the past, and an apology doesn’t mean the company will do the right thing in the future.

The study itself measured mood changes in more than 600,000 users, some of whom may have been minors at the time, so the issue becomes one of consent. Users under the age of 18 would lack the capacity to consent to the study, and most of the users experimented on had no idea any changes were taking place.

While some would argue that implicit consent for this kind of research is given when agreeing to the Terms of Service, Facebook may have altered those terms after the fact. Forbes points out that the statement “For internal operations, including troubleshooting, data analysis, testing, research and service improvement” was added four months after the study concluded. [emphasis added]

A Facebook spokesperson responded to Forbes, saying, “To suggest we conducted any corporate research without permission is complete fiction. Companies that want to improve their services use the information their customers provide, whether or not their privacy policy uses the word ‘research’ or not.”

Some commentators have tied the research scientist behind the project, Jeffrey T. Hancock, to funding from the Department of Defense, noting that he has conducted research in the past on how civil unrest starts and propagates on social networks.

Possible connections to the DOD aside, the issue remains that the Facebook data scientists weren’t just testing users’ reactions — they were intentionally manipulating users’ moods. Occasional observation of behavior patterns happens on most social sites, but many believe Facebook has crossed a line in more ways than one.

Manipulating users who are unaware that they are being studied sounds pretty unethical. Changing the ToS to retroactively protect your company sounds even worse.

Image credit: zeevveez


SocialTimes


Facebook’s science experiment on users shows the company is even more powerful and unethical than we thought



If you were still unsure how much contempt Facebook has for its users, this will make everything hideously clear.

In a report published in the Proceedings of the National Academy of Sciences (PNAS), Facebook data scientists describe an experiment that manipulated the emotions of nearly 700,000 users to see if positive or negative emotions are as contagious on social networks as they are in the real world. By tweaking Facebook’s powerful News Feed algorithm, some users (we should probably just call them “lab rats” at this point) were shown fewer posts with positive words. Others saw fewer posts with negative words. “When positive expressions were reduced,” the paper states, “people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.”
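
To make the mechanics a little more concrete, here is a rough, purely illustrative sketch of the kind of filtering the paper describes: posts containing words from an emotion lexicon are probabilistically withheld from a user’s feed. None of this is Facebook’s actual code; the word list, the omission probability, and the function name are hypothetical placeholders.

```python
import random

# Purely illustrative: a toy sketch of the kind of feed filtering the paper
# describes, NOT Facebook's actual News Feed code. The lexicon, the omission
# probability, and the function name are hypothetical placeholders.

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}

def filter_feed(posts, lexicon=POSITIVE_WORDS, omit_probability=0.5, rng=random):
    """Return the feed with posts containing any lexicon word
    probabilistically withheld (a 'reduced positivity' condition)."""
    shown = []
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & lexicon and rng.random() < omit_probability:
            continue  # withhold this post from the rendered feed
        shown.append(post)
    return shown

feed = [
    "Had a wonderful day at the beach!",
    "Stuck in traffic again.",
    "So excited for the weekend.",
]
print(filter_feed(feed, omit_probability=0.9))
```

Run against that toy feed, the “reduced positivity” condition tends to drop the upbeat posts while leaving the traffic complaint untouched, and the study then measured how users’ own subsequent posts shifted in response.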

The results shouldn’t surprise anybody. What’s more surprising, and unsettling, is the power Facebook wields in shifting its users’ emotional states, and its willingness to use that power on unknowing participants. First off, when is it okay to conduct a social behavior experiment on people without telling them? Technically, as the paper states, users provided consent for this research when they agreed to Facebook’s Data Use Policy at sign-up, so what Facebook did isn’t illegal. But it’s certainly unethical.

Furthermore, manipulating user emotions in a digital space comes with uniquely disturbing consequences. In the real world, if you feel like the people around you bring too much negativity into your life, the solution is easy: Find a new crowd. But on Facebook, short of canceling your account, this is impossible to do if the company suddenly decides, whether as part of a research study or at the behest of certain advertising or engagement interests, to start sending more negative content your way. The whole point of the News Feed algorithm, to hear Facebook tell it, is to give users an experience tailored to their wants and interests. Clearly, that objective falls by the wayside anytime Facebook wants to turn its user base into a science experiment.

And then there’s the tone-deaf gall of the whole thing: this research wasn’t uncovered by an investigative reporter; Facebook submitted it to PNAS itself. To make matters worse, there are questions about whether the methodology was even sound. To determine “positive” and “negative” sentiments, the researchers used a tool called Linguistic Inquiry and Word Count, or LIWC, which scores text by counting how many words match its category dictionaries. But even the creators of LIWC admit that assessing its validity when applied to “natural language” (like a Facebook update) is “tricky.” LIWC’s reliability has largely been tested on essays, where there is more repetition than in natural language.
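
A simplified sketch of that word-counting approach shows why critics worry about applying it to short, informal posts. The tiny word lists below are made-up stand-ins, not the real (and proprietary) LIWC dictionaries, but the underlying mechanic is the same: a post gets flagged positive or negative simply because it contains a dictionary word, with no understanding of negation or context.

```python
# Purely illustrative: a bare-bones, LIWC-style word-count classifier.
# The word lists are made-up stand-ins, not the real LIWC dictionaries.
POSITIVE = {"love", "great", "happy", "wonderful"}
NEGATIVE = {"sad", "hate", "awful", "terrible"}

def classify(post):
    """Flag a post as positive/negative based only on dictionary hits."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return {
        "positive": bool(words & POSITIVE),
        "negative": bool(words & NEGATIVE),
    }

# Negation is invisible to pure word counting: this post is scored
# "positive" because it contains the word "great".
print(classify("I'm not having a great day"))
# -> {'positive': True, 'negative': False}
```

That is the crux of the validity complaint: in a long essay, stray dictionary hits tend to wash out, but a one-line status update can be misread on the strength of a single word.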

Perhaps I’ve been watching too much Black Mirror, but my brain can’t help but extrapolate on some of the alarming potential uses of this power. Psychological warfare techniques, like gaslighting, have long been used by government agencies to create cracks in the psyches of political dissidents or other undesirables. Assuming the ties between government organizations and tech companies continue to strengthen (and we’ve already seen Facebook cave to government pressure before), what’s to stop the NSA from manipulating what content a person sees in their News Feed in a manner designed to drive them to insanity? It might not be that hard to do: If every time you opened Facebook, all you saw were ex-girlfriends, old friends who are more successful than you, and upsettingly extreme political rants from family members, that might be enough to drive a person mad.

It doesn’t have to be the government pulling the strings either — Facebook itself could target certain users, whether they be corporate rivals or current/former employees. Having such strong psychological control over your workforce would certainly have its benefits. And if Facebook ever gets caught? Why, the company could claim it’s all part of a social experiment, one that users tacitly agreed to when they signed up.

With over one-tenth of the world’s population signing into Facebook every day, and now with evidence of the emotional power of the company’s algorithmic manipulation, the possibilities for widespread social engineering are staggering and unlike anything the world has seen. Granted, Facebook’s motive is probably just to convince people to buy more stuff in order to please advertisers, but the potential to use that power to sway elections or global trade could be enticing to all sorts of powerful interest groups.

Or maybe I’m just being paranoid. Hey Facebook, can you please crank up the happy meter on my News Feed so I can enjoy the rest of my weekend in peace?

[illustration by Brad Jonas for Pando]

PandoDaily
