26 October 2024

Misinformed About Misinformation

Interview: Are We Misinformed About Misinformation?

The influence of false and inflammatory online content is overblown, says researcher David Rothschild.

In June, the journal Nature published a perspective suggesting that the harms of online misinformation have been misunderstood. The paper’s authors, representing four universities and Microsoft, conducted a review of the behavioral science literature and identified what they characterize as three common misperceptions: that the average person’s exposure to false and inflammatory content is high, that algorithms are driving this exposure, and that many broader problems in society are predominantly caused by social media.

“People who show up to YouTube to watch baking videos and end up at Nazi websites — this is very, very rare,” said David Rothschild, an economist at Microsoft Research who is also a researcher with the University of Pennsylvania’s Penn Media Accountability Project. That’s not to say that edge cases don’t matter, he and his colleagues wrote, but treating them as typical can contribute to misunderstandings — and divert attention away from more pressing issues.


Rothschild spoke to Undark about the paper in a video call. Our conversation has been edited for length and clarity.

Undark: What motivated you and your co-authors to write this perspective?

David Rothschild: The five co-authors on this paper had all been doing a lot of different research in this space for years, trying to understand what it is that is happening on social media: What’s good, what’s bad, and especially understanding how it differs from the stories that we’re hearing from the mainstream media and from other researchers.
Specifically, we were narrowing in on these questions about what the experience of a typical consumer is, a typical person versus a more extreme example. A lot of what we saw, or a lot of what we understood — it was referenced in a lot of research — really described a pretty extreme scenario.

The second part of that is a lot of emphasis around algorithms, a lot of concern about algorithms. What we’re seeing is that a lot of harmful content is coming not from an algorithm pushing it on people. Actually, it’s the exact opposite. The algorithm kind of is pulling you towards the center.

And then there are these questions about causation and correlation. A lot of research, and especially mainstream media, conflates the proximate cause of something with the underlying cause of it…

The idea was to create, hopefully, a comprehensive piece that allowed people to see what we think is a really important discussion — and this is why I’m so happy to talk to you today — about where the real harms are and where the push should be.

None of us are firm believers in trying to pull out a stance and hold to it despite new evidence. There are shifting models of social media. What we have now with TikTok, and Reels, and YouTube Shorts is a very different experience than what the main social media consumption was a few years ago — with longer videos — or the main social media a few years before that with news feeds. These will continue to be something you want to monitor and understand.

LINK >> UNDARK.org


Fake news and the spread of misinformation: A research roundup
Fake News, Social Media and Politics | Center for Mobile Communication Studies
Humans may be more likely to believe disinformation generated by AI | MIT Technology Review

