Among researchers there’s some controversy surrounding a study conducted by Facebook’s internal Core Data Science Team.
At issue is an experiment conducted to see whether Facebook’s timeline algorithm, which determines which of your friends’ status updates you see, can be manipulated to affect your mood.
To me, there seem to be two important questions to process:
- From a user perspective, are users made aware that Facebook [the world’s largest social network, with more than 1 billion active users] has the ability to manipulate people’s emotions on a massive scale? In other words, Facebook has learned that it can affect what you are thinking about, how you feel about those topics, etc.
- From an ethical perspective, is it OK that Facebook conducted a massive experiment impacting the emotions of almost 700,000 people without their consent?
Facebook is not a public utility. It’s a for-profit business.
It’s in their best interest to get you to come back to their site as much as possible, to enjoy using the site, to find the site useful for your daily life. Why? Because the more you like Facebook the more time you’ll spend there and the more money they’ll make off of you from advertising.
As I’ve taught for years, it’s important to remember your relationship to Facebook. You aren’t Facebook’s customer, you are their product. Facebook’s customers are advertisers who use the data you freely share to target users who might be interested in buying their products/services.
You’re familiar with the phrase, “Happy wife, happy life.” All this experiment was testing was whether they could apply what social science has learned about emotional contagion in the physical world to the digital space. Is it “Happy timeline, happy user?”
As social media researcher Danah Boyd correctly points out, every media company curates content to increase readership/viewership:
Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial system is prioritizing some content over others (including the one used here on Medium). But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car-crash news stories. What do you think garners more traffic: a detailed analysis of what’s happening in Syria, or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc.
All Facebook tested was whether they could adjust their algorithms to show you things that’d make you happy, instead of what was popular. (Think baby announcements, job promotions, etc.)
And yet… it does ask a very important question worthy of consideration. As a user am I aware of and OK with Facebook having the power to manipulate what I think about various topics?
This practice reminds me of the movie The Truman Show, where the television show manipulated Truman’s day-to-day life in order to boost its own ratings.
When you think about this power to manipulate your emotions, you have to wonder whether Facebook is also testing what it can manipulate you to buy, how you feel about kale, or whom you’ll vote for in the presidential election.
There are always ethical concerns when it comes to research and experimentation. The question yet unanswered by Facebook’s researchers is whether they needed the informed consent of the 700,000 users they experimented on, or whether this rather small data set (in light of all of Facebook’s users) was somehow covered by Facebook’s existing terms of service.
There’s no doubt that if the research had been a market study, it would have been covered under the normal terms of service. As a Facebook user you’re basically exchanging your free use of the service for Facebook’s ability to do whatever they want with what you post, share, or read. (Your mom was right. There’s no such thing as a free lunch.)
But there are lingering questions about the purity of the research they conducted: its academic viability if it cannot be replicated, and whether users ever gave direct consent to being experimented on.
I think an important question to reflect on is… “Should a for-profit business, one with the ability to impact the daily lives of more than a billion people worldwide, be allowed to manipulate the emotions of users without regard for broader ethical concerns?”
What Does This Mean for Users?
Again, you need to be aware that because the service is free you are not their customer. Just like any media company, it’s within Facebook’s prerogative to make as much money as they possibly can off of you.
With that said, you also need to be aware that what you are seeing isn’t organic. It’s manipulated by an algorithm designed to make you want to keep using Facebook, to keep that tab open, to comment on that thread, to share that status or video.
Just like notifications are the devil because they mess with your brain at a subconscious level to interrupt whatever you are doing to look at your device, Facebook is moving beyond just the subconscious level to manipulate your day-to-day emotions.
This isn’t a warning. It’s simply that you need to be aware of how the game is played.
Is Facebook messing with your brain? Of course!