
Image source; reprinted under a CC BY-NC 2.5 license.
On Monday, a video containing false and potentially harmful information about COVID-19 was shared widely across social media. I first heard of the video here. Pink also wrote a helpful post about it, which you can read here. Although I never encountered the video directly, I’m not surprised that it existed or that it spread before it was taken down. This is a recurring problem with social media: its dependence on user interaction leaves it unable to stop the spread of false and dangerous information.
The problem.
YouTube boasts two billion logged-in users every month. As of 2019, roughly 500 hours of content were uploaded to the platform every minute. As of 2020, Facebook users upload an average of 300 million photos a day. Many social media platforms can measure their users in the hundreds of millions.
All of these people create, share, and download data. Recommendation algorithms widen the reach of whatever people engage with, whether they like it or hate it. And since an audience can reach hundreds of millions of people in a single day, wrong information doesn’t have to spread widely across a platform to reach millions. If Facebook’s user base is over two billion people, then the 10 million shares of the pastor’s video mentioned above account for less than half of one percent of Facebook users overall.
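To make that scale concrete, here’s a quick back-of-the-envelope sketch of the figures above (treating the user base as exactly two billion, a floor given the “over two billion” figure):

```python
# Back-of-the-envelope math for the figures cited above.

# YouTube: roughly 500 hours of video uploaded per minute (2019 figure).
hours_per_minute = 500
hours_per_day = hours_per_minute * 60 * 24
print(f"New YouTube video per day: {hours_per_day:,} hours")  # 720,000 hours

# Facebook: 10 million shares measured against a ~2 billion user base.
user_base = 2_000_000_000  # a floor; the actual base is "over 2 billion"
shares = 10_000_000
print(f"Fraction of user base reached: {shares / user_base:.2%}")  # 0.50%
```

Notice what the percentage hides: even a fraction of a percent of the platform is still an absolute audience of 10 million people.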
Here’s another wrinkle: media platforms – including the one that hosts this blog – rely on user interaction and activity to generate revenue. Just like television and radio before it, social media translates the attention of its users into a price that advertisers and other businesses must pay for access. That interaction depends on people being able to share information freely, with only the platform setting the limits. People aren’t going to use an app that has to fact-check every meme or scrub every video for accuracy.
This has created a system where information is produced faster than anyone can figure out how to control it. The driving factors are usage, time, and engagement – not safety. So when a health crisis hits that requires accurate information to reach people…
And the solutions are not attractive.
The most obvious solution is to treat platforms like digital newspapers. They collect data, sort it, and spread it to a wider audience. They promote some users and content over others. They retain editorial power to block or delete content that violates their terms of use.
But that would slow down usage, which would slow down engagement, which would decrease a platform’s value. Users also might resent having to wait for approval on their latest post. Imagine WordPress having to fact-check every blog post! It might take days or weeks to publish something.
A more drastic solution would be to limit social media use altogether. Doing so would require personal restraint, and some people might not have the willpower or desire for it. In many ways, my family would dread giving up Facebook: it’s how they pretend to keep in touch. And I haven’t even gotten to the vast number of people who make money on social media, whether by influencing others to buy things or through direct sponsorship of content. Those people can’t afford a change in the social media landscape.
Is there nothing to be done, then?
I think it depends on how people view the problem. People know that yelling, “Fire!” in a crowded room is illegal and dangerous. Why can they get away with the equivalent by tweeting it or sharing it to a Facebook timeline?
Until now, social media has focused on only one thing: growth. That growth has spawned platforms that pay close attention to users’ habits and information, but no attention to maintaining any level of quality in the information users post. There’s been no need, because nobody has bothered to consider the consequences.
Requiring social media platforms to take responsibility for the data they promote would be effective. This doesn’t have to happen immediately, but it does need to become a condition of doing business in the long run. Platforms now hold such influence over the public that misinformation can be just as dangerous as more traditional forms of dangerous speech.
To be clear, if social media had better controls over how it promotes information, the above video of a Texas church pastor dressed up in a lab coat wouldn’t have racked up millions of views. With a minimum of investigation, this person could have been dismissed as a fraud. In these times, with a dangerous illness killing thousands of people worldwide, the wrong information could be deadly.
Pretending otherwise won’t make the problem go away.