Nick Fernbaugh | staff writer
Meta, the parent company of Facebook, has decided to rethink fact-checking on its platforms. CEO Mark Zuckerberg announced in a video released Jan. 7 that the company would ditch its established fact-checking system in favor of community notes.
Thanks to the prevalence of smartphones, everyone has instant access to information in the palm of their hand and can say whatever they want. With so little content being checked, social media has turned into a place where people can seek out their own version of the truth, the one they want to hear, rather than actively and intelligently seeking the truth.
Facebook isn’t the only platform to remove fact-checking from its site. X switched to community notes shortly after Elon Musk purchased the platform. Since then, it has become a bot-filled, misinformed space, with people fleeing en masse for greener pastures.
According to Meta, the move to community notes is meant to give the average person a voice to express the truth online when someone is spreading misinformation.
Community notes allow approved contributors on the platform to add context or clarify misleading or false posts.
The issue stems from the fact that anyone can be approved to add context. All a contributor needs is a phone number, six months of activity on X and no violations on the platform. Worst of all, contributors can remain anonymous, which creates a lack of accountability.
The nonprofit Center for Countering Digital Hate released a study last year that investigated posts on X and found that of 283 posts deemed misleading, 209 did not display accurate notes about claims concerning the 2020 election.
Facebook’s approach to fact-checking began in 2016 in response to the election of President Donald Trump.
Zuckerberg received an open letter from the International Fact-Checking Network asking Facebook to collaborate with organizations that could help build a more accurate news feed for everyone on the platform.
Meta partnered with sources including PolitiFact, FactCheck.org, the Associated Press, Snopes, ABC News and others, which would review posts on its platforms and provide needed context to misleading ones. These partners could also ask Facebook to remove posts considered dangerous.
There is some evidence that community notes work. Yang Gao, an assistant professor at the University of Illinois, led a 2024 study to determine whether community notes are an effective tool in the fight against misinformation.
The researchers collected more than 89,000 posts from X and analyzed them to see whether a publicly displayed note provoked a stronger reaction from people scrolling online.
They found that X users were more willing to delete their posts once a public community note was attached.
The system has some merit, with benefits including transparency, openness to anyone who can provide accurate context and the promotion of discourse rather than reliance on professional outlets. In the wrong hands, however, it can become a vehicle for misinformation.
The question of who should fact-check content comes down to this: a full-time, trained fact-checker or an anonymous user who happens to have behaved for six months.
The answer seems obvious to most, except for extremists who screamed at Zuckerberg to make the change because of the pushback they were getting for their intentionally misleading posts.
When companies adopt community notes over trained professionals, they set a dangerous precedent for individuals who want to push a false agenda without any consequences attached to their outlandish beliefs. Take, for example, someone who denies the events of 9/11.
That sounds outrageous, but contributors can supply “context” built on incorrect information that supports conspiracy theories. The denial can then be “proven” true, and suddenly a well-documented event in history is being challenged with opinions that are completely false.
Without proper fact-checking on social media, people are pushed into echo chambers, fed information that seems correct and confirms beliefs they already held.
Echo chambers create problematic spaces where people don’t get the diverse set of facts and opinions needed for modern discourse. This way of spreading misinformation risks letting emotions and opinions take the forefront of discussion over complete information, because fact-checking doesn’t grab the user’s attention.
Attention sells, whether it’s rage bait or clickbait, and for the time being it seems neither Meta nor X has any plan to make fact-checking as reliable as it should be for the average person scrolling either application.
What can individuals do in the absence of proper fact-checking on the world’s largest social platform?
Take the necessary time to do proper research and educate yourself on what might be misinformation. Searching for the correct information likely takes about as long as reading and processing the post at face value.
According to the American Psychological Association, one easy way to stop the spread of misinformation is to avoid repeating it without a correction. The worst response is an emotionally charged one, or complaining to others about a post without fully understanding its context.
In an age when misinformation keeps spreading, everyone is at risk of being led down dangerous rabbit holes of information silos, told what they want to hear instead of the truth or taken advantage of by bad actors online.
