Alongside the COVID-19 pandemic, we’ve experienced a parallel ‘infodemic’ of disinformation, with nutty ideas racing around the globe through Twitter, Facebook, TikTok, WhatsApp and other social media. In March, prominent politicians were saying the virus was a Chinese hoax. If not, others suggested, it was nothing to worry about because even forcing hot hairdryer air up the nose would kill it. Recently, in the western United States, gun-toting mobs have formed to turn back imaginary busloads of ‘antifa’ gangs which exist only on their Twitter feeds.

When research shows that lies now spread more quickly than facts (Vosoughi, S. et al. Science 359, 1146–1151; 2018), it’s tempting to think that human psychology is just deeply flawed. But experts suspect the real problem is a range of subtle practices which, in shaping human contacts and the way we make decisions, turn social media into an engine of untruths.

Our digital environments need re-engineering if our collective brain is to find a way back toward promoting the truth.

One problem is that we still underestimate just how much information is flowing through social media. In the United States, a study early in the coronavirus pandemic found that the volume of low-credibility information about the virus shared on Twitter fully matched the volume of more legitimate news coming from the New York Times and the Centers for Disease Control and Prevention (CDC). Among the consequences, disbelief in the benefits of vaccines is now growing so fast that researchers estimate most people could be against vaccination in a decade or so (Johnson, N. et al. Nature 582, 230–233; 2020). This is despite vaccination being one of medical science’s greatest success stories.

It’s happening, this research also suggests, because anti-vaccination groups — despite being a small minority — occupy choice territory in the social media ecosystem, being closely linked through Facebook to the bulk of undecided people whose minds might be changed. In contrast, pro-vaccination groups — including the CDC and the Bill and Melinda Gates Foundation — are only loosely connected to those people. No one planned this network disadvantage. It’s just a natural consequence of current social media practices, which no longer help real information to rise to prominence.
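The study’s point is easier to picture with a toy example. The sketch below is purely illustrative: it assumes the networkx library, and the nodes and edges are invented rather than drawn from the study’s Facebook data. It simply shows how a small cluster wired directly into the undecided bulk can sit ‘closer’ to it, on average, than a larger but loosely connected one.

```python
# Toy illustration (not real Facebook data): a small, well-placed cluster can sit
# closer to undecided users than a larger but loosely connected one.
import networkx as nx

G = nx.Graph()

# A small 'anti' cluster wired directly into several undecided pages...
G.add_edges_from([("anti1", u) for u in ("und1", "und2", "und3", "und4")])
G.add_edges_from([("anti2", u) for u in ("und3", "und4", "und5")])

# ...and a larger 'pro' cluster reaching the undecided only through one bridge.
G.add_edges_from([("pro1", "pro2"), ("pro2", "pro3"), ("pro3", "pro4")])
G.add_edge("pro4", "und1")  # the single loose link to the undecided bulk

undecided = [n for n in G if n.startswith("und")]

def mean_distance(cluster):
    """Average shortest-path distance from a cluster to the undecided nodes."""
    distances = []
    for source in cluster:
        lengths = nx.shortest_path_length(G, source=source)
        distances += [lengths[u] for u in undecided]
    return sum(distances) / len(distances)

print("anti cluster:", mean_distance(["anti1", "anti2"]))
print("pro cluster: ", mean_distance(["pro1", "pro2", "pro3", "pro4"]))
```

Run as written, the small ‘anti’ cluster comes out much closer to the undecided nodes than the larger ‘pro’ one; it is this positional advantage, not sheer size, that the research suggests matters.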

What can we do about this epidemic of misinformation? One obvious idea is to get tech platforms to police their content more effectively. To counter the anti-vaccination trend, for example, Facebook might be encouraged to give lower priority to links that distribute misinformation, making it less likely that people will see these messages. Reddit recently banned some 2,000 sub-communities from the site, including one focused on Donald Trump, for fostering hate and making coordinated attacks on other groups.

But this top-down approach is also problematic, for several reasons. It requires companies that profit from the spread of misinformation to take steps against it, which they may be reluctant to do. Hence, it will require some mechanism of public pressure, on a scale far beyond the recent advertising boycott of Facebook by a few companies. On another level, do we really want Facebook or any other private company deciding which information is legitimate and which isn’t?

A far better approach would be to tweak social media practices to empower individuals to make better decisions on their own, thereby re-engineering social media to promote the emergence of truth. This may be a lot easier than it seems.

After all, most people don’t actually want to spread disinformation. A survey published earlier this year found that most people feel it’s very important not to share false information on social media (Pennycook, G. et al. Preprint at https://psyarxiv.com/3n9u8/; 2020). Fake news spreads so easily, the researchers think, because accuracy is just one of many things people care about. When people forward dubious stories, truth gets pushed aside by other desires: to attract and please followers and friends, or to signal allegiance to a group. That means subtle changes to social media environments might boost the value of truth relative to those other desires, helping to channel people back toward collectively beneficial behaviour.

A recent exploration along these lines suggests a variety of simple measures that might give considerations of truth more weight in the decision-making calculus (Lorenz-Spreen, P. et al. Nat. Hum. Behav. https://doi.org/d7g8; 2020). What we need, the authors argue, is for social media to offer a richer variety of cues to help people judge whether an item is legitimate. For example, Facebook might show not only how many ‘likes’ a post has received but also allow and count ‘dislikes’, giving a more symmetric view of opinions. Or some friction could be added to the act of retweeting: it could require several clicks and display a reminder to consider the origin of the initial tweet. Such changes could push the question of whether a message is true higher into a user’s consciousness.
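To make those two nudges concrete, here is a minimal sketch in Python. It is not any platform’s real interface: the names (Post, confirm_reshare and so on) are hypothetical, and a real system would pose the question through the platform’s own user interface rather than the terminal.

```python
# A minimal, hypothetical sketch of two nudges: symmetric like/dislike counts
# and a little friction (an extra step plus a reminder) before resharing.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    source_url: str
    likes: int = 0
    dislikes: int = 0   # shown alongside likes for a more symmetric view of opinions

def confirm_reshare(post: Post, ask) -> bool:
    """Add friction: a reminder about the source, then an explicit confirmation.

    `ask` is any callable that poses a yes/no question and returns True or False
    (a dialog box in a real client, or input() in this prototype).
    """
    reminder = (
        f"This post originally came from {post.source_url}. "
        "Have you considered whether that source is trustworthy?"
    )
    return ask(reminder) and ask("Share it with your followers anyway?")

if __name__ == "__main__":
    post = Post("@someone", "Hot hairdryer air cures the virus!", "example-blog.invalid")
    if confirm_reshare(post, lambda q: input(q + " [y/N] ").lower() == "y"):
        print("Reshared.")
    else:
        print("Reshare cancelled.")
```

The extra ask() steps do nothing clever; their only job is to slow the action down and surface the source, which is exactly the kind of friction the authors describe.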

There’s a huge imbalance between the tech firms, which have profound knowledge of their users’ behaviour, and the users, who have little idea of how their data are used to shape their online environment, typically for the company’s commercial benefit. The result is a systematic corrosion of our mechanisms of belief formation.

People may be the most important resource for restoring integrity to social media. We just need to redesign the technology to work with them. Software engineers and psychologists have been exploring many possibilities, but we need the big tech firms to start testing some of them in practice. What happens in social media environments is so complex that we won’t find a solution quickly. It will take a great deal of experimentation and learning through trial and error.