Instagram recommended false claims about COVID-19, vaccines and the 2020 U.S. election to people who appeared interested in related topics, according to a new report from a group that tracks online misinformation.
"The Instagram algorithm is driving people further and further into their own realities, but also splitting those realities apart so that some people are getting no misinformation whatsoever and some people are being driven more and more misinformation," said Imran Ahmed, CEO of the Center for Countering Digital Hate, which conducted the study.
From September to November 2020, Instagram recommended 104 posts containing misinformation, or about one post a week per profile, to 15 profiles set up by the U.K.-based nonprofit.
The automated recommendations appeared in several places on the photo-sharing app, including in a new "suggested posts" feature introduced in August 2020 and the "Explore" section, which points users towards content they might be interested in.
The study is the latest effort to document how social media platforms' recommendation systems contribute to the spread of misinformation, which researchers say has accelerated over the last year, fueled by the pandemic and the fractious presidential election.
Facebook, which owns Instagram, has cracked down more aggressively in recent months. In February, it widened its ban on falsehoods about COVID-19 vaccines on both its namesake platform and Instagram. But critics say the company has not grappled sufficiently with how its automated recommendation systems expose people to misinformation. They contend that the social networks' algorithms can send those who are curious about dubious claims down a rabbit hole of more extreme content.
Ahmed said he was particularly concerned by the introduction last year of "suggested posts" on Instagram, a feature aimed at getting users to spend more time on the app.
Once users have scrolled through all recent posts from accounts they already follow, Instagram now shows them posts from accounts they don't follow at the bottom of their feeds. The suggestions are based on the content they've already absorbed.
"Putting it into the timeline is really powerful," Ahmed said. "Most people wouldn't realize they're being fed information from accounts they're not following. They think 'These are people I've chosen to follow and trust,' and that's what makes it so dangerous."
The Center for Countering Digital Hate says Instagram should stop recommending posts "until it can show that it is no longer promoting dangerous misinformation," and should exclude posts about COVID-19 or vaccines from being recommended at all.
To test how Instagram's recommendations work, the nonprofit, working with youth advocacy group Restless Development, had volunteers set up 15 new Instagram profiles.
The profiles followed different sets of existing accounts on the social network. Those accounts ranged from reputable health authorities; to wellness, alternative health, and anti-vaccine advocates; to far-right militia groups and people promoting the discredited QAnon conspiracy theory, which Facebook banned in October.
Profiles following wellness influencers and vaccine opponents were served up posts with false claims about COVID-19 and more aggressive anti-vaccine content, the researchers found.
But the recommendations didn't end there. Those profiles were also "fed election misinformation, identity-based hate, and conspiracy theories," including anti-Semitic content, Ahmed said.
Profiles that followed QAnon or far-right accounts, in turn, were recommended disinformation about COVID-19 and vaccines, even if they also followed credible health organizations.
The only profiles that were not served up misinformation followed, exclusively, recognized health organizations, including the Centers for Disease Control and Prevention, the World Health Organization and the Gates Foundation.
The study does not disclose how many suggested posts were reviewed for each of the profiles, making it impossible to determine how frequently Instagram recommends misinformation.
Facebook spokesperson Raki Wane told NPR the company "share[s] the goal of reducing the spread of misinformation" but disputed the study's methodology.
"This research is five months out of date and uses an extremely small sample size of just 104 posts," Wane said. "This is in stark contrast to the 12 million pieces of harmful misinformation related to vaccines and COVID-19 we've removed from Facebook and Instagram since the start of the pandemic."
Facebook says when people search for COVID-19 or vaccines on its apps, including Instagram, it directs them to credible information from authoritative health organizations such as the WHO, CDC and the UK's National Health Service.
"We're also working on improvements to Instagram Search, to make accounts that discourage vaccines harder to find," Wane said.
Researchers have long tracked the overlap between conspiracy theory communities and how they show up in social media recommendations. Some anti-vaccine activists began posting QAnon content last year, while high-profile spreaders of baseless election fraud narratives pivoted to posting vaccine misinformation.
"That there is a correlation between these communities is something that's fairly well documented," said Renée DiResta, who studies misinformation at the Stanford Internet Observatory. She said as early as 2016, a Facebook account she used to track the anti-vaccination movement got recommendations to join groups about the Pizzagate conspiracy theory, a predecessor to QAnon.
Ahmed connected the overlap in different conspiracies recommended in his group's study to the insurrection at the U.S. Capitol.
"That's precisely what we saw on January the 6th," he said. "This coming together of these fringe forces. And what had been driving it, in part? The algorithm."
Editor's note: Facebook is among NPR's financial supporters.