Thursday 30 July 2020

Known Mechanisms: How Fake News Spreads

This is not meant to be an exhaustive list of the ways that fake news gains attention and spreads. Instead, it is a collection of common ways that fake news appeals to internet users by tapping into mechanisms of persuasion and emotional appeal.

 

Moral outrage: drawing attention by generating anger or shock in order to provoke a negative response and inspire people to comment or reply with objections

What it looks like: The best-known way of generating moral outrage is trolling, or posting deliberately incendiary statements that the poster does not actually believe or endorse.

Example: Widespread engagement with stories about shocking or unacceptable behavior, such as “Mother Teresa Stabs Kittens,” which generates both anger that she would do such a thing and anger that someone would post blatantly untrue slander about Mother Teresa.



Manipulative advertising: information whose actual intent is to sell a particular product or service, disguised as a different type of content such as news, research, charitable appeals, or expert opinion

What it looks like: Advertising is misinformation any time it disguises its intent.

Example: Unscrupulous vitamin supplement companies often claim that USDA recommendations for nutrition are wrong, that most people are suffering from nutritional shortages, and that vitamins can cure everything from celiac disease to depression.


 

Glurge: heartwarming or moral stories about good deeds being emotionally rewarded, bad deeds ending in cosmic comeuppance, or other unrealistically happy endings

What it looks like: These stories are underpinned by the “belief in a just world,” or the human tendency to believe that we all deserve what we get and get what we deserve.

Example: “One Boy Didn’t Let Cancer Stop Him from Achieving His Dreams” and similar headlines.


 

Rumor: information with a vague or nonexistent source, generally shared out of an interest in protecting oneself or others from potential threats

What it looks like: Chain emails, viral Facebook posts, and other content that warn about vague fears and then recommend a course of action to allay those fears typically count as rumor.

Example: “Share this statement to avoid Facebook charging you a subscription fee next month,” “Tell your friends about the risks of gangs that use baby cries to lure gas station attendants into the woods,” and anything else built around a command or warning.


 

Myside bias: telling you what you want to hear in order to keep you from interrogating information too closely, because we all scrutinize information less when we like what it tells us

What it looks like: This type of misinformation tends to slip past our radar; sometimes, this takes the form of sharing an article without first reading it.

Example: “Study Finds That Messy People are Secretly the Most Creative” and similar headlines, which tap into the fact that nearly all of us wish we were less messy and like to think of ourselves as creative.


 

Pseudo-profound bullshit: sharing information designed to sound wise or profound that has no actual meaning behind it, generally to make oneself or one’s company look good

What it looks like: One test for whether information is genuinely profound or pseudo-profound bullshit is reversal: if the saying can be reversed and still sound profound, it is bullshit; if it becomes nonsensical when reversed, it is more likely a genuinely insightful observation.

Example: Memes such as “My past does not define me, my strength is an illusion, my calm hides a storm” are largely nonsensical in practice.


 

Transgressive advocacy: using an “ends justify the means” mentality to share negative misinformation about individuals who one believes deserve scorn, or positive misinformation in order to try to benefit disadvantaged individuals or groups

What it looks like: Often, blogs will share information without checking where it came from if the blogger believes it is important for readers to know.

Example: A widespread story on Facebook described the (false) accomplishments of a woman with autism, in an effort to spread awareness about autism and decrease stigma.


 

Social identity processes: giving positive information about groups or identities important to an individual, and/or negative information about groups that compete with one’s own group

What it looks like: Often, this misinformation will take the form of reassuring information that your own political party is more often correct or that the opposing political party is inherently hypocritical or misinformed.

Example: Editorials will sometimes posit “proof” that Iowa State University students are smarter, kinder, harder-working, or more trustworthy than University of Iowa students.



Confirmation bias: giving you additional information to support a position that you already suspect to be true, thereby bolstering stereotypes or increasing perceived plausibility

What it looks like: Often, misinformation will use specific paths (such as Google searches) to direct interest.

Example: Googling the phrase “proof that milk is good for you” will turn up pages explaining the health benefits of milk.

Many of the most popular fake news stories fit several of these mechanisms at once. For instance, consider a fake news story about several Ames residents coming together to help a man dig his car out of a snowbank after he got stuck driving his neighbor to the hospital and ended up trapped for two days. This story would tap the social identity process of identifying with the city of Ames, and confirmation bias if it fits our expectations of Ames. It would be glurge to the extent that it rewarded good behavior, and moral outrage to the extent that it presented the horrifying behavior of the drivers who passed without helping. It would involve myside bias in that many Ames residents would not scrutinize a positive Ames story, and rumor if it contained warnings about not ending up like the driver in the story. If it were posted on the page of an Ames snow-clearing service, it would also be manipulative advertising.
