Recently, Frontline released a two-part documentary on Facebook and how it operates (click here for Part One, here for Part Two). I highly recommend taking the roughly two hours to watch both parts. The film highlights the driving forces behind Facebook's development from a college social media platform into a news and information giant. It also goes into the effects Facebook has had on the world, including the Arab Spring and the ethnic cleansing of the Rohingya people in Myanmar.
The short version for people who haven’t watched it.
Essentially, Facebook's development has created an application which thrives on user interaction. That is, its metrics capture what motivates people to do things with Facebook (and, later, with other media once Facebook could track activity outside its own site). Facebook would look at everything people did and figure out what was getting people to share, hit like, or ignore things.
From there, Facebook moved on to ways of inciting user interaction. Algorithms and feature changes were designed specifically to keep people scrolling and using the app. There's a reason why people sometimes feel like they can't pull away from their Facebook accounts. In my own life, every time my family gets together, there is at least one moment when everyone is on their phones checking Facebook. None of this is by accident.
The problem alleged in the documentary is that this has helped redraw political maps across the globe. In addition, Facebook was made aware that people were using its app to facilitate ethnic violence. Facebook's responses were carefully calculated to disclaim any ownership or facilitation of violence against people. This last part, while possibly infuriating to some, is understandable given the potential liability. I doubt any company could pay the damages if it were found liable for a genocide.
The consequences of Facebook and other social media.
What I didn't see addressed (possibly because the consequences are pretty staggering) is that Facebook has designed a platform which can influence people regardless of what gets put into it. The only thing I've seen that might matter is the volume of media that gets produced. That is, people who are more inclined to create attack propaganda are going to be heard more often. This is true at every end of the political spectrum.
In a way, I’ve struggled with this for a while. Whenever I write something now, I wonder if it might get used to justify dehumanizing people. It’s never my intent, but social media sharing can throw that intent out the window. All I need to do is write a clickbait title and say some harsh things about a group of people, and I can become Internet famous. For all the wrong reasons.
More than that, it means that the only metric for Internet attention is whether or not content grabs people. The content doesn't have to be true, meaningful, or even useful. In fact, if it's any of those things, it probably won't get shared very much. The outrageous is what prompts an algorithm to push content everywhere, and so the outrageous is going to be magnified more than anything set against it.
And if all of this is true, it means that social media inherently adds to extremism.
I get the irony of wondering about this on a blog which has sharing buttons, like buttons, and ways of getting pushed to various places. Blogs are part of social media, although there is a better opportunity to be moderate here than on Twitter. Still, I have noticed that when I pair an edgier title with certain tags, I can inflate views. Outrage is more visible than reason.
And it leaves me wondering what the ethical course of action is. I feel vindicated in deleting my former Facebook account; it turns out Facebook really was trying to manipulate everyone just to get attention. It appeals to the lowest common denominator in humanity, the primal urge to end that which enrages or terrifies us. Creating content that feeds this urge enables social media to keep the attention of more people for longer.
That attention appears to be fueling online groups with new labels and slogans. A culture of outrage is forming in almost every part of the ideological spectrum. These cultures seek out and destroy people for not conforming to their message or believing what they believe. It's almost like a tech-religion with digital trials for heresy.
Except that not all of it is digital now. The Rohingya persecution is particularly alarming. People are dying because communities hear more about how to hate than how to love. Nobody who promotes those messages is asking whether it is the right thing to do.
Even on the smaller scale, I don't like watching what it's done to my family. They tend to ignore things which do not exist on their social media feeds. That last part hurts the most, and it's a pain which I'll never forget. Regardless, I think social media is having an effect on people. I just hope it isn't a permanently debilitating one.
My first thought on the extremism would be this: it has even made friends into enemies and divided them. Truth never travels through many layers, while falsehood flies around the world in no time and penetrates multiple layers. People keep sharing ideologically abrasive lies. We're not as divided as it appears, but social media brings out the worst in us through anonymity, everyone hiding behind a keyboard!
Several people who worked on Facebook have used terms like "hacking the mind." These guys are very clever, and sadly they became smart before they became ethical. They are either trying to hide their unethical practices or struggling to bring the maturity they have gained over the years to the monster they created in their youth. Either way, good luck.
ECHO ECHO
The dual problem of anonymity and algorithms that favour the distribution of controversial or extreme opinion is worrying. As long as social media rely on AI to identify harmful content, I suspect the problem is only going to get worse. The case of the Rohingya is a "good" example of this.
By coincidence (or AI algorithm), your post, as displayed on my screen, is accompanied by a link to the issue of Facebook and the harmful effect it has had on the Rohingya: Facebook in Myanmar: A Human Problem that AI Can’t Solve. It’s worth a read.
I'm fairly certain it's the AI algorithm, which I'm fine with. The article you linked synthesizes a few reports I've been hearing on NPR about the problems Facebook was having. What concerns me is that Facebook is trying to hide behind its incompetence to deflect blame, claiming that it wasn't aware of how inadequate its anti-hate-speech measures were in Myanmar. The Frontline documentary calls that into question; some sources were fairly adamant about showing Facebook was aware of what was going on.
And really, none of this should have been surprising in hindsight. Most social media companies care about market share first, which drove them to grow until they could no longer enforce their own policies. Going forward, I think countries would be within their rights to require social media companies to demonstrate their ability to comply with legal and internal policies.
I did my part. Deleted my Facebook account years ago! (And I’ve never had a Twitter account.)