The truth about truthiness
Dr Eryn Newman is describing the results of one of her studies of ‘truthiness’ at the ANU Research School of Psychology: “The really cool thing about this is … ,” she begins, before stopping herself mid-sentence. “No, wait. I should say, it’s the really disturbing thing.”
That’s the problem with studying the perception of truth in the fake news era: there’s a lot of cool data from a scientific perspective, but the real-world implications are far more concerning.
In 2012, when Newman first started looking at truthiness, the word was still strongly associated with its creator, American comedian Stephen Colbert – so much so that her study was actually featured on his satirical television show The Colbert Report.
“My only problem with this scientific study is that it’s a scientific study,” Colbert quipped. “Truthiness and empirical evidence don’t mix.”
Colbert originally defined the term as ‘truth which comes from the gut, not books’ in reference to then-US President George W. Bush’s political rhetoric, but the idea took on a life of its own once Donald Trump stood for election.
“At the time of our first study, we were wondering how we could sell this research as an applied issue,” Newman says. “And then, all of a sudden, fake news came along – and boom! Truthiness was everywhere. Now, when I’m creating a talk about the subject, I’m wondering what horrific applied example I’ll pick from the media that day!”
In that first study, Newman asked participants to assess whether a particular claim was true. What she found, over and over again, was that, if the claim was accompanied by a photograph, participants were more likely to believe it. The photograph itself was merely decorative and proved nothing, but its presence did something.
When participants were asked to assess the claim ‘Turtles are deaf’, they were more likely to agree (‘Yes, they are’) if the claim was accompanied by a picture of a turtle swimming in the ocean. Similarly, participants were more likely to agree with the claim ‘Nick Cave is alive’ when it was accompanied by a picture of Nick Cave on stage.
Incredibly, they were also more likely to agree with the statement ‘Nick Cave is dead’ if it appeared with the very same photo.
“We found that the effect was hugely robust,” Newman says. “Other labs around the world are now replicating it with different types of materials.
“For example, if you say a wine is of high quality and this claim is presented with a decorative photograph, people are more likely to believe it. When you give people claims about the stock market, people are more likely to believe that a commodity has a high value if there’s a photo associated with the commodity.”
From a cognitive perspective, the photographs appear to make it easier for people to imagine the claims, Newman says.
“Humans already have a bias to generally believe information we encounter and to nod along. And since we only have so many cognitive resources at any one moment, it’s about energy consumption – it takes more cognitive effort to figure out that something’s false.”
In subsequent studies using different information inputs, Newman has consistently found the same kinds of biases when it comes to our intuitive assessment of truth. “They’re insidious and they’re scary in terms of how they influence people’s beliefs.”
Her catalogue of research findings certainly makes for uncomfortable reading.
She has shown that, when someone’s name is easier for us to pronounce, we find them more trustworthy and are more likely to believe what they say, compared to someone with a difficult-to-pronounce name. She has also found we’re happier giving our credit card details to someone on eBay when their seller ID is easier to pronounce, even though this is just a made-up collection of letters.
And in a triumph of style over substance, she demonstrated that audio quality affects whether we believe a scientist or not, regardless of their expertise or what they’re actually saying.
“Across all those different lines of research, the theory which ties it together is the idea that when people encounter information, if it’s easy for them to process, they believe it,” Newman says.
“This means any variable in the information environment that increases the chances that I can understand or quickly perceive the information, also increases the chances that I believe it.”
In our information-rich, post-truth environment, she agrees that these gut feelings about truth can have significant consequences.
“A study by MIT recently found that false information on Twitter travels six times faster than the truth, and that is worrying,” she says.
“From moment to moment, we’re moving from one piece of information to the next, quickly, and task switching between our phone and computer. This task switching actually comes at a cognitive cost, so then it’s even more difficult to unwind information and decide that it’s false.
“People aren’t stopping to read the articles and critically analyse information to make sure they’re only sharing accurate, validated content.”
Slowing down to think critically and apply our prior knowledge is the key to improving our information literacy, Newman says.
“Even if we know the answer, we often don’t bring our prior knowledge to mind in the situation and apply it. And that is what we, as educators as well as scientists, want people to be doing.”
Research has shown, however, that even when we’re aware of our possible biases in how we might process information, we have a hard time overcoming them.
“When I’m consuming information, especially on social media, I try to do so through a critical lens,” she says.
“But any decision-making in a human domain is susceptible to error and I think from my research, that is a terrifying take-home message: no one is immune.”