How quickly fake news spreads

How fake news spreads on Twitter

A group of scientists at MIT examined known fake news and how it spread on Twitter: around 126,000 stories, tweeted more than four million times by some three million users over a period of ten years. The team found that the truth can hardly compete with lies and rumors.


False news spreads much faster than facts: it reaches more people and penetrates more deeply into the social network, according to Soroush Vosoughi, who has worked on fake news as a data scientist since 2013 and led the study. The analysis revealed something else as well: it is not primarily the malicious bots but people who spread false news. Twitter bots amplify true stories just as much as false ones, according to the study.

Although Vosoughi and his colleagues focused only on Twitter - the company provided the scientists with the data - political scientists believe their findings also apply to Facebook, YouTube and every other major social network. A fake story is far more likely to go "viral" than a real one: a false story reached 1,500 people, on average, six times faster than a true story, the authors found. That holds for news about business, terrorism and war, science and technology, and entertainment. Fake news about politics, however, spread most effectively of all.

Political scientists and social media researchers credit the study with an extremely broad and rigorous view of the extent of fake news on social networks. The scope of the material examined is enormous: every contested story that spread via Twitter from September 2006 to December 2016 was analyzed. The researchers identified these fake stories with the help of third-party fact-checking websites, including Snopes.com, politifact.com and FactCheck.org.

Twitter as the only source of information

The trigger for the researchers' meticulous search for lies and deceit was the attack on the Boston Marathon in 2013. The governor of Massachusetts asked millions of people to stay in their homes while the police carried out a widespread manhunt. Two of the scientists who have now published the study turned to Twitter as their source of information about the outside world. "We heard many things that were not true, but also many that turned out to be true," they recall.

As a result, they developed an algorithm that can sort tweets and identify which claims are most likely to be correct. It checks the characteristics of the author, the kind of language used and the way the tweet spreads; it can even take images into account, at least when they contain text. In the data, the researchers found two types of tweets.
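To make the idea concrete, here is a minimal sketch in Python of a feature-based credibility score of the kind described above: signals about the author, the language and the spread are combined into a single probability. The feature names, training examples and use of scikit-learn are illustrative assumptions, not the authors' actual system.

```python
# A minimal sketch (not the study's code) of a feature-based tweet
# credibility classifier: author traits, language cues and spread
# statistics are combined into one score. All features and data are
# invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per tweet:
# [account_age_days, is_verified, exclamation_count,
#  all_caps_ratio, retweet_count, max_cascade_depth]
X_train = np.array([
    [3650, 1, 0, 0.00, 120,  2],   # established account, calm wording
    [  20, 0, 4, 0.30, 900, 11],   # new account, sensational wording, deep cascade
    [1500, 1, 1, 0.05,  60,  3],
    [  45, 0, 6, 0.40, 400,  9],
])
y_train = np.array([1, 0, 1, 0])   # 1 = likely accurate, 0 = likely false

model = LogisticRegression().fit(X_train, y_train)

new_tweet = np.array([[10, 0, 5, 0.35, 700, 10]])
print("estimated probability of being accurate:",
      model.predict_proba(new_tweet)[0, 1])
```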

If a person with many followers posts something, it can potentially be seen by many people, but it does not necessarily spread any further. A tweet that is passed on again and again, i.e. one that spreads virally, may reach just as many people, but according to Vosoughi it has greater depth. The authors found that accurate messages rarely produced retweet chains of more than 10 links, whereas fake messages built chains of up to 19 links, and did so 10 times as fast as real messages.
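The breadth-versus-depth distinction can be illustrated in a few lines of Python. The sketch below treats a cascade as a tree of retweets and computes its total size (how many tweets it contains) and its depth (the longest retweet chain); the cascade data is invented for illustration.

```python
# Illustrative sketch, not the study's code: breadth vs. depth of a
# retweet cascade. Each key maps a tweet to the tweets that retweeted it.
cascade = {
    "original": ["rt1", "rt2", "rt3"],
    "rt1": ["rt4"],
    "rt4": ["rt5"],
    "rt5": [],
    "rt2": [],
    "rt3": [],
}

def cascade_size(root, tree):
    """Total number of tweets in the cascade (how broadly it was seen)."""
    return 1 + sum(cascade_size(child, tree) for child in tree.get(root, []))

def cascade_depth(root, tree):
    """Length of the longest retweet chain starting at the root."""
    children = tree.get(root, [])
    if not children:
        return 0
    return 1 + max(cascade_depth(child, tree) for child in children)

print("size:", cascade_size("original", cascade))    # 6 tweets in total
print("depth:", cascade_depth("original", cascade))  # longest chain: 3 links
```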

Why do lies do so well?

The MIT team put forward two hypotheses. First, fake news appears more novel and surprising than real news; the team found that fake stories often differ markedly from the rest of the tweets. Second, fake news evokes much more emotion than the average tweet, which increases its appeal. The researchers built a database of the words Twitter users used in response to the 126,000 contested tweets and analyzed it with an emotion-analysis tool. Fake tweets tended to provoke words associated with surprise and disgust, while tweets with accurate content tended to provoke words associated with sadness and trust.
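As an illustration of how such an emotion analysis can work, the sketch below counts reply words against small emotion word lists. The lexicon and the replies are invented stand-ins, not the tool or data the researchers used.

```python
# Minimal lexicon-based emotion profiling sketch (illustrative only).
from collections import Counter

EMOTION_LEXICON = {
    "surprise": {"unbelievable", "shocking", "wow", "incredible"},
    "disgust":  {"disgusting", "gross", "vile", "sickening"},
    "sadness":  {"sad", "tragic", "heartbreaking", "loss"},
    "trust":    {"confirmed", "reliable", "official", "verified"},
}

def emotion_profile(replies):
    """Count how often each emotion's words appear in a list of reply texts."""
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            word = word.strip(".,!?")
            for emotion, words in EMOTION_LEXICON.items():
                if word in words:
                    counts[emotion] += 1
    return counts

replies_to_fake_story = ["Unbelievable, this is shocking!", "Disgusting, wow."]
replies_to_true_story = ["So sad and tragic.", "Confirmed by official sources."]

print("fake story:", emotion_profile(replies_to_fake_story))  # surprise, disgust
print("true story:", emotion_profile(replies_to_true_story))  # sadness, trust
```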

It is well known that people prefer information that confirms their existing attitudes and consider such information more convincing. They tend to accept information they like.

Nobody expects good deeds from Trump

An article in The Atlantic on the study cites two examples from the US presidential election campaign. In August 2015, a rumor circulated on social media that Donald Trump had let a sick child use his plane for urgent medical care. Snopes.com confirmed the story, yet it was shared or retold by only about 1,300 people. In February 2016, by contrast, a rumor arose that Trump's elderly cousin had recently died and that in his obituary he was said to have pleaded with Americans: "Please do not let that walking mucus bag become president." Snopes could not find any evidence of the cousin or the obituary. Nevertheless, around 38,000 Twitter users shared the story, and it built a retweet chain three times as long as that of the sick-child story.

The author of The Atlantic article, Robinson Meyer, interviewed various scientists about the study. Above all, they expressed concern that the finding about the bots could be misused to play down their role. One of the study's authors notes that bots were not the focus of the work and that more and more bots appeared during the 2016 presidential election campaign. The question remains whether the very infrastructure of social networks drives the emergence and spread of fake news. Meyer sums up his conversations with political scientists and social media researchers as follows: "On platforms where every user is reader, writer and publisher at once, falsehoods are too seductive not to succeed: the thrill of novelty and the titillation of disgust are too tempting."