An Analysis of Existing Solutions
We'll take a look at the various solutions to the problem of dis- and misinformation, including fact-checking and moderation, and then look specifically at technology's role in making the problem both better and worse.
False information is a complicated problem that can be approached from many different angles. Perhaps the most serious effort at compiling actionable recommendations comes from the Aspen Institute’s Commission on Information Disorder. In its aforementioned report, the commission identified three areas it deemed necessary for turning the tide on information disorder: increasing transparency, building trust, and reducing harm. Every stakeholder in the information ecosystem—governments, social media companies, media companies, individuals, and employers—has a role to play.
There is no shortage of potential solutions out there. Some look to pass judgment on the information in question, while others focus on mitigation strategies. Stemming the flow, moderation, and regulation have also been tried, with varying degrees of success. It is important to note that no single solution is going to solve the problem.
Fact-checking
Fact-checking plays a key role in the solution to the problem of false information. Whether for profit or not, fact-checkers can assess any piece of content and come to a conclusion as to the degree of its veracity. Popular organizations and services include Factcheck.org, PolitiFact, Snopes, and Captain Fact.
Fact-checking’s inherent weakness is bias. Even the most neutral fact-checkers are subject to personal bias, unconsciously ascribing greater factual weight to sources that align with their own views or choosing to fact-check only the statements they disagree with. This alone means that people are unlikely to trust fact-checkers who regularly dispute the veracity of content that confirms their own beliefs. One proposed solution to the problem of bias is adversarial fact-checking—using teams of people with “diverse sociopolitical views” instead of the partisan approach currently employed.
Another challenge for fact-checking is scale. The manual approach—using humans—cannot keep up with the quantity of information spread every day. Artificial intelligence-based fact-checking could, in theory, meet the scaling needs but remains a work in progress. It is worth noting that AI-based solutions are also subject to bias because humans ultimately train the algorithms.
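To make the idea of AI-assisted fact-checking more concrete, the sketch below frames claim verification as natural language inference: an off-the-shelf entailment model scores whether a piece of evidence supports a claim. This is only a rough illustration under assumptions of our own. The model choice (facebook/bart-large-mnli), the evidence snippet, the claim, and the 0.8 threshold are placeholders; a real system would also need claim detection and evidence retrieval, and the model inherits whatever bias sits in its human-labeled training data.

# A rough sketch of claim verification as natural language inference (NLI).
# Everything here is illustrative: a real fact-checking pipeline would first
# detect check-worthy claims and retrieve evidence before scoring anything.
from transformers import pipeline

checker = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

evidence = "The city council voted 7-2 on Tuesday to approve the new budget."
claim = "The city council unanimously approved the new budget."

# hypothesis_template="{}" makes the claim itself the NLI hypothesis, so the
# returned score approximates how strongly the evidence entails the claim.
result = checker(evidence, candidate_labels=[claim], hypothesis_template="{}")
score = result["scores"][0]

verdict = "supported" if score > 0.8 else "not supported by this evidence"
print(f"entailment score: {score:.2f} -> {verdict}")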
Gamification
Gamification is a way to inject fun into dull topics. As important as the impact of disinformation is, it is not exactly an exciting topic. Do students really want to read textbooks or listen to lectures about disinformation? Or would they rather experience it using a more practical and engaging approach like gamification? Given the success of gamification in other areas such as financial literacy, it stands to reason that games represent a key cog in helping people understand the nature of the disinformation threat.
We have already seen some attempts to gamify media literacy and the fight against fake news. The game Bad News, for example, looks at how disinformation is produced and disseminated by allowing people to play the role of a fake-news media baron. The goal of the game is to amass followers and credibility using fake news tactics.
There is also the possibility of combining gamification with monetization by tapping into the blockchain and cryptocurrency ecosystem. For example, blockchain can be used as a decentralized and immutable repository of shared truth, while cryptocurrency can be used to incentivize actions which have a positive impact.
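To make these two ideas concrete, here is a minimal, self-contained sketch rather than a description of any existing platform: an append-only, hash-chained log of fact-check verdicts stands in for the decentralized, immutable repository, and a simple token balance stands in for the cryptocurrency incentive. The class name VerdictChain, the one-token reward, and the sample verdict are all invented for illustration.

# Toy sketch: a hash-chained log of fact-check verdicts with token rewards.
import hashlib
import json
import time

class VerdictChain:
    """Append-only chain of fact-check records; each block hashes its predecessor."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev_hash": "0" * 64,
                        "record": "genesis", "timestamp": 0.0}]
        self.balances = {}  # toy "token" rewards per contributor

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_verdict(self, contributor, claim, verdict, reward=1):
        prev = self.blocks[-1]
        self.blocks.append({
            "index": prev["index"] + 1,
            "prev_hash": self._hash(prev),  # links this block to everything before it
            "record": {"contributor": contributor, "claim": claim, "verdict": verdict},
            "timestamp": time.time(),
        })
        # Reward the contributor; in a real system this would be an on-chain token.
        self.balances[contributor] = self.balances.get(contributor, 0) + reward

    def is_intact(self):
        # Tampering with any past verdict breaks every later prev_hash link.
        return all(self.blocks[i]["prev_hash"] == self._hash(self.blocks[i - 1])
                   for i in range(1, len(self.blocks)))

chain = VerdictChain()
chain.add_verdict("checker_a", "Claim X about the election", "false")
print(chain.is_intact(), chain.balances)  # True {'checker_a': 1}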
Education
The most logical place to provide people with the tools to recognize and respond to the threat of disinformation is in educational institutions. The sooner people understand how to cope with today’s information landscape, the better. A teacher at a school in Pennsylvania has taken a novel approach to the topic. Instead of passing judgment on the veracity of information, she helps students learn to evaluate the sources of information before reaching a conclusion. This builds the critical-thinking habits needed to evaluate information effectively.
Others believe that the way “digital literacy” is currently being taught is actually making things worse. A Stanford History Education Group study found that college students, using commonly touted digital literacy techniques, had a very difficult time understanding the source behind information and even whether the information was legitimate.
Regulation
Some countries have instituted laws and regulations to limit the spread of fake news. Indonesia, for example, has made the spreading of fake information and news a criminal act. While punitive, such laws are unlikely to be enforced beyond high-profile cases. Worse, some assert that these laws are put in place for political purposes, specifically to stifle dissent against the government.
Governments are also exploring the regulation of social media companies to hold them accountable for the spread of disinformation on their platforms. In the United States, Section 230 of the Communications Decency Act—the law that protects internet companies from liability for user-generated content spread on their platforms—is currently under review; changes to it could force social media companies to act.
Content moderation
Content moderation is one solution commonly foisted upon social media companies and other content creation platforms. Some have tried it, either by partnering with independent fact-checking agencies or by building in-house moderation teams.
There are three primary challenges with content moderation. The first is the fine line between moderation and censorship. This is especially relevant in a post-truth society, where one side’s moderation is the other side’s censorship. Societies that prioritize free speech risk undermining one of the key pillars of democracy by requiring the moderation of content.
Second is the question of who the gatekeepers are. Who gets to decide what content gets moderated? Governments? The platforms themselves? Independent organizations? Given that social media companies often earn the most from content of questionable integrity, it stands to reason that they should not be the appointed gatekeepers. Governments, with their tendency to politicize such decisions, are also poor candidates. Independent organizations are likely the best option, but even they suffer from bias.
Third is how to handle the sheer scale of information. Even if independent organizations were given the job, they simply could not keep up with the quantity of information disseminated every day. AI-based solutions are an obvious candidate, but can technology really be the answer?
Technology’s role in a potential solution
Technology is often blamed for the state of the current information landscape. While this is fair in that technology has made creating and disseminating information easier, cheaper, and more precisely targeted than ever, we should not lay the blame on the internet, social media, and artificial intelligence alone.
The best way to characterize technology’s impact may be to look at the gap between a person’s ability to responsibly consume information and the quantity of information the person consumes. With each technological innovation, quantity has increased disproportionately to our ability to consume information responsibly. People no longer have the time or energy to evaluate every piece of information they see. To analogize, when a hospital becomes overwhelmed with patients, it has no choice but to triage; overwhelmed information consumers are forced to do the same, making snap judgments about what to trust.
At the same time, technology is considered by some to be the savior. After all, who better to confront the scale enabled by technology than technology itself? Meta CEO Mark Zuckerberg is one such proponent, stating in 2018 that “... over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.” Whether he is right, at least in terms of content moderation, depends almost entirely on whether AI algorithms can accurately evaluate information while remaining free from the same bias that affects human moderators.
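To make the bias concern concrete, here is a toy sketch of machine-learning-based moderation under invented assumptions: a tiny classifier trained on a handful of made-up posts and human-assigned remove/keep labels. Whatever slant the human labelers bring is exactly what the model reproduces at scale, and the confidence score it reports says nothing about whether the underlying judgments were fair.

# Toy sketch of ML-based moderation; posts and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Miracle cure suppressed by doctors, share before it's deleted!",
    "Study published in peer-reviewed journal finds modest effect.",
    "They don't want you to know the election was stolen!!!",
    "Official results certified after routine audit of ballots.",
]
labels = ["remove", "keep", "remove", "keep"]  # human moderators' judgments

# The model learns only from these human labels, so it scales up their bias too.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = "Share this before they delete it: the truth about the cure!"
print(model.predict([new_post])[0])           # verdict inherited from the labels
print(model.predict_proba([new_post]).max())  # confidence, not correctness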
Beyond dissemination, our increasing reliance on AI-based solutions only increases the number of potential attack vectors. If the sensors and models behind systems such as self-driving vehicles cannot distinguish real inputs from fake ones, we will find it difficult to ever fully trust them.
At best, we believe technology will have a net neutral impact on disinformation. It will be a core part of any solution, such as decentralized fact-checking, deepfake recognition, and disinformation education, but will also enable increasingly sophisticated deepfakes, better amplification, and even greater micro-targeting. At worst, though, technology will eradicate whatever line remains between fact and fiction. The ever-increasing prevalence of deepfakes generated by AI algorithms has the potential to erode our trust in the very idea of fact itself.