Does reading a friend’s happy post on Facebook make you happier?
This seems to be the case, according to research published in the Proceedings of the National Academy of Sciences (PNAS) today – we may well be affected by the emotional content we see every day on our Facebook feed.
But if we look a little closer at the study’s assumptions and methods we can see that the results are less than straightforward.
The research paper investigates the effects of “emotional contagion” within Facebook, and is the work of researchers from Cornell University, the University of California, San Francisco, and the Facebook research team.
The theory of “emotional contagion” suggests that we pick up on other people’s emotional states, and that our own emotional state can shift as a result without our being aware of it. This seems like common sense: when we encounter people in our everyday lives, their good or bad moods can rub off on us.
So while “emotional contagion” may sound more like a sickness, it is really just a metaphor for describing large-scale transfers of emotion.
This research addresses emotions in the context of Facebook, and because of this connection the research team had the luxury (academically speaking) of access to a gigantic pool of test subjects: nearly 690,000 people, in fact. Facebook users agree to this kind of experimentation as part of the platform’s terms of service.
The study ran control and experimental groups over a period of one week. All data was collected by computer, and the researchers never saw the posts themselves: instead, software matched each post against a pre-selected list of keywords indicating a “positive” or “negative” emotional tone, and compiled the results into statistical data.
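The published study used the Linguistic Inquiry and Word Count (LIWC) software for this step. Below is a minimal sketch of the general idea in Python, with tiny illustrative word lists standing in for LIWC’s much larger dictionaries; none of this is the study’s actual code.

```python
# A minimal sketch of keyword-based tone labelling. The word lists are
# tiny illustrative stand-ins for the much larger dictionaries the
# study relied on.
POSITIVE_WORDS = {"happy", "love", "nice", "sweet", "great"}
NEGATIVE_WORDS = {"sad", "angry", "hate", "awful", "terrible"}

def post_tone(post):
    """Label a post by whether it contains any listed emotional words."""
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    has_positive = bool(words & POSITIVE_WORDS)
    has_negative = bool(words & NEGATIVE_WORDS)
    if has_positive and has_negative:
        return "both"
    if has_positive:
        return "positive"
    if has_negative:
        return "negative"
    return "neutral"

print(post_tone("What a great day!"))       # positive
print(post_tone("Feeling sad and angry."))  # negative
```

The point of doing it this way is that no human reads any post: the software only counts word matches.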
Facebook doesn’t control what you write; it controls what you see. To vary how often the different groups saw posts containing emotional words, the researchers removed a proportion of the emotional posts from users’ News Feeds, testing whether seeing more (or less) emotional material would lead users, in turn, to produce more (or less) emotional material themselves.
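Concretely, the manipulation might look something like the hypothetical sketch below. The function name, data layout and probabilities are illustrative assumptions, not Facebook’s actual code, though the paper does report that emotional posts were omitted with per-user probabilities of between 10% and 90%.

```python
import random

def filter_feed(posts, tone_to_reduce, omit_probability):
    """Return the feed a user sees, withholding each post of the
    targeted tone (e.g. "negative") with the given probability."""
    visible = []
    for post in posts:
        if post["tone"] == tone_to_reduce and random.random() < omit_probability:
            continue  # withhold this emotional post from the user's view
        visible.append(post)
    return visible

feed = [
    {"text": "I love this!", "tone": "positive"},
    {"text": "Worst day ever.", "tone": "negative"},
    {"text": "Off to the shops.", "tone": "neutral"},
]
# A user in a "reduced negativity" condition with a 90% omission chance:
print(filter_feed(feed, tone_to_reduce="negative", omit_probability=0.9))
```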
The researchers found that removing a proportion of negative content from News Feeds tended to make users post less negatively in the week that followed. Likewise, users whose News Feeds carried fewer positive posts engaged in a less positive manner over that week.
Simply put, if you see less bad stuff, you tend to say fewer bad things. While this might seem innocuous, the prospect of broadly influencing the way more than a billion people think is a fairly scary thought.
Fortunately, the effect the researchers found is small. In fact, the total change was as little as 0.1%, far smaller than the kind of effect we are accustomed to describing as significant.
The researchers argue, though, that this effect is still significant because of Facebook’s gigantic user base. After all, 0.1% of that base still represents more than a million of Facebook’s 1.28 billion monthly active users.
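The back-of-the-envelope arithmetic behind that claim, using the article’s own figures, is simple:

```python
monthly_active_users = 1_280_000_000  # 1.28 billion, as cited above
effect = 0.001                        # a 0.1% change
print(f"{monthly_active_users * effect:,.0f} users")  # 1,280,000 users
```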
That claim of significance is doubtful. The effect itself amounts to a very small percentage, and the authors justify its importance solely by appealing to Facebook’s large user base.
Certainly, 0.1% of Facebook’s user base is a large number of people, but the effect itself is incredibly small. Indeed, with an effect this small, the change cannot be confidently attributed to the experimental manipulation. It could instead be produced by incidental effects, such as people copying and pasting bits of text, or by insincere or sarcastic remarks.
The other issue with the research is that the emotional spectrum is limited to “positive” or “negative” attributes. It’s easy to think of “sad” and “happy” as opposites, but this doesn’t quite work for other emotions, nor do emotions map neatly onto particular individual words.
To feel “desire” could sit at either end of the spectrum, with positive or negative emotional consequences depending on the context. So too with words such as “love”, “nice” and “sweet”, which all appear in the word lists used by the research software.
It is quite easy to imagine these words appearing out of context, or in deliberately negative or vindictive ways (“I would love you to be quiet”, “Isn’t she ‘nice’” and so on). Yet big data research has to cut corners such as these, and it is up to each reader to decide whether to accept such an approach.
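Running these examples through a toy keyword matcher, in the spirit of the sketch above (the word list is again an illustrative assumption), makes the problem plain:

```python
POSITIVE_WORDS = {"love", "nice", "sweet"}

def looks_positive(post):
    words = {w.strip(".,!?'\"").lower() for w in post.split()}
    return bool(words & POSITIVE_WORDS)

for post in ["I would love you to be quiet", "Isn't she 'nice'"]:
    print(post, "->", looks_positive(post))
# Both print True: the matcher counts the words but misses the sarcasm.
```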
What can we take away from this?
The most interesting aspects of the study are not, in fact, consequences of its research hypothesis. One is that Facebook users appear to be weighted towards positive status updates:
- 47% of posts contained positive words
- 22% of posts contained negative words.
The other finding that drew our attention is the “withdrawal effect”. Facebook users who were exposed to fewer emotional posts – whether positive or negative – were less likely to engage (by posting, liking and so on) with Facebook later on.
Of course, the closer we are to people, the more likely we are to have other channels of communication: phone, email or even speaking in person.
This study is a reminder that the view of the world we get through Facebook is a partial one, and one that software engineers are tinkering with all the time. We can’t necessarily rely on any one form of communication to keep us emotionally connected to the world.
Larger implications?
Facebook’s effect on the way we see the world may be a lot smaller than we think. Internet activist Eli Pariser warns that we are becoming cocooned in a Filter Bubble, and MIT professor Sherry Turkle wonders if social media is making us more anti-social. Yet at least when it comes to emotional content, what we see does not greatly determine what we post.
The researchers note that our emotions are shaped not by Facebook alone but by everything else going on in our lives. Then again, if we assume it is in Facebook’s interest for users to be more engaged, it seems likely that emotional content will be favoured by the News Feed’s algorithm.
One glaring gap in this study is that it cannot address the meaning of Facebook posts at this scale. Irony and sarcasm, circumlocution and manners, metaphor and analogy all play a part in the way we express ourselves, and it is exactly this kind of nuance that big data number-crunching fails to detect.
And this is a common problem with social media research, whether you think Facebook is the best thing ever or worse than mind control. While it’s increasingly easy to count “likes” or usage of words, it’s a lot harder to say what they might mean to the people behind the screen.
If Facebook’s effects on emotion remain this minor, the research suggests we shouldn’t necessarily be worried about the way social media changes how we think. But the debate continues: some argue that Facebook needs to do more to protect the privacy of its users, while others think people should consider leaving Facebook entirely.
Facebook is not a complete project. It’s always changing and developing, and research like this – particularly research produced in tandem with Facebook Inc. – gives us an insight into the direction that Facebook might head in the future, and the protections we might need to develop to use it safely.
The authors do not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. They also have no relevant affiliations.
This article was originally published on The Conversation. Read the original article.