Misinformation Topics

 Murphy, Charles


One issue I found with misinformation in the media is hive-mind mentality around certain issues. Many people come to believe a single narrative in the news because of how addictive their social media algorithms are. This happened with QAnon, where right-wingers got riled up by those posts and took every one of them as true when they should have been more skeptical, and it happened again with misinformation about COVID-19 and its vaccine. Some ways to mitigate herd mentality on social media are to take regular breaks, verify with credible sources that a news item is true, and do your own research rather than letting algorithms and the number of likes and reposts inform you.


Shaikh, Elsa


One big problem with misinformation in the media is that opinions are often misrepresented as facts. A lot of the time, news stories or posts leave out important details or show only one side of a story, which can mislead audiences. When people see the same message repeated often, they may start to believe that it is true even if it's not.

To help reduce misinformation, it is important to learn how to separate facts from opinions and look for evidence that supports claims. Reading and researching information from different perspectives and holding off on sharing information until you check it first are also good practices. Taking these steps can help people better understand complex issues and make informed decisions. 

Jean Pierre, Olivier

The image in the tweet is blatant misinformation because the claims made in the post are not supported by scientific evidence anywhere. Numerous credible studies show that high protein intake does not damage the kidneys in healthy individuals, and the "480% more kidney damage" figure is not tied to any credible study and is likely a made-up number. The claim that a company is paying billions to hide the truth is nothing but a conspiracy theory.

Posts like this can be countered by checking credible sources such as medical organizations (Harvard Health, for example) or peer-reviewed studies instead of trusting viral internet claims, especially claims from random people. Supplements like creatine often face similar misinformation.

Barsha, Mehnaz


On social media platforms like TikTok, there’s a growing trend of people presenting themselves as “health experts” and sharing food advice that isn’t verified or backed by research. A common example is the claim that seed oils are harmful because they cause inflammation, despite the lack of support from medical professionals. Another widely discussed topic is the red dye Red40, found in snacks like Hot Cheetos and Doritos. Some argue it’s dangerous because it’s banned in certain countries, although it’s approved in places like the EU under the name Allura Red E129.


Spreading fear about food can unnecessarily stress people and negatively impact their eating habits. One way to address this is with regulations similar to China's, where only qualified individuals are allowed to speak publicly on topics in their area of expertise and must prove their credentials. Requiring proper credentials shows audiences that the person giving advice has the training and knowledge to do so. Doctors and food scientists with the right qualifications should be the ones sharing guidance on food safety and nutrition.


Gemini: What is the origin of snake oil salesmen?

The term "snake oil salesman" is one of those rare instances where the "oil" started as a legitimate medicine but ended up as the poster child for fraudulent marketing.

Here is the breakdown of how a traditional remedy became a synonym for a scam.

The Original: Chinese Water Snake Oil

In the mid-1800s, thousands of Chinese workers came to the United States to build the Transcontinental Railroad. They brought with them water snake oil, a traditional Chinese medicine used to treat joint pain and inflammation.

Scientifically speaking, they were onto something. Water snakes are rich in eicosapentaenoic acid (EPA), an omega-3 fatty acid that actually has anti-inflammatory properties. When the Chinese workers shared it with their American counterparts, it was genuinely effective.

The Mimic: Clark Stanley

Enter Clark Stanley, the self-proclaimed "Rattlesnake King." Seeing the success of the Chinese remedy, Stanley decided to market his own version. At the 1893 World’s Columbian Exposition in Chicago, he put on a gruesome show where he sliced open live rattlesnakes and squeezed them into bottles to prove the "purity" of his product.

He became incredibly wealthy, selling "Stanley's Snake Oil" across the country.

The Exposure: The 1917 Lab Test

The "snake oil" industry came crashing down thanks to the Pure Food and Drug Act of 1906. In 1917, the government finally seized a shipment of Stanley’s product and tested it in a lab.

The results were embarrassing:

  • It contained zero snake oil.

  • The ingredients were mostly mineral oil, beef fat, red pepper (to provide a "warming" sensation), and turpentine.

Stanley was fined $20, and the term "snake oil" was forever cemented in the American lexicon as a fake cure-all sold by a charismatic liar.


Qureshi, Zahra


One issue I find with misinformation is that there is a lot of disinformation and malinformation on social media that makes people angry and causes arguments. Many people react right away without checking if something is true, which leads to hurtful comments and negativity online. If people took a moment to think before commenting, conversations would be more respectful. Social media platforms should also make it easier to fact-check information, so false or harmful content does not spread so quickly.


Please let me know if you want me to change anything for future assignments. Thank you.


It's a heavy question, but you've hit on something that researchers and "doomscrollers" alike have noticed for years. If it feels like hate is louder, faster, and more prevalent on your feed than love, you aren't imagining it. But the "strength" of the emotion depends on how you measure it.

Let’s ask Gemini

IS HATE A STRONGER EMOTION THAN LOVE ON SOCIAL MEDIA?


On social media, hate often outperforms love in terms of engagement, but love usually carries more weight in our actual lives.


Why Hate Wins the "Algorithm" Game

Technically, hate isn't "stronger" in a human sense, but it is more virulent. Here is why it dominates the digital space:

  • Negativity Bias: Humans are evolutionarily wired to pay more attention to threats than rewards. A "hateful" post triggers our fight-or-flight response, making us more likely to click, comment, or share.

  • High Arousal: Emotions like anger, outrage, and indignation are "high arousal" states. They demand action. Love and contentment are "low arousal" states—they make us feel good, but they don't necessarily make us want to smash a keyboard.

  • The "Outgroup" Effect: Social media thrives on "us vs. them" dynamics. Moral outrage (a form of hate/anger) is the ultimate social glue for an "in-group," making those posts spread much faster than a simple "I love this" post.

The "Love" Counter-Argument

While hate spreads faster, love has more staying power.

  • Community Building: Long-term groups, fanbases, and support networks are built on shared love/interests. These spaces are often more stable than the fleeting "outrage of the week."

  • The Silent Majority: Most people browse social media to see photos of friends, hobbies, or pets. This "quiet love" doesn't generate the same metrics as a flame war, so it remains less visible to the algorithm.



Moutinho, Dylan


I was going through Twitter (X) and saw that Kai Cenat had launched a new clothing brand called "Vivet," which was strange because Kai is supposed to be taking a break right now and hadn't made any announcement on his social media pages. It turns out the website was not official at all, and thousands of people got scammed for $200+ (around $100,000 in total). The website was deleted shortly after the "clothing" sold out.


The best thing you can do to mitigate something like this is to check any and all official social accounts for announcements, and not share or post anything related to it unless you're absolutely sure it's real. Being skeptical is much better than losing hundreds of dollars.


Gemini


The rise of digital "imposters"—fraudsters who mimic trusted brands, celebrities, or government agencies to sell fake goods or steal data—has created a multi-trillion-dollar shadow economy. In 2026, this threat has been "supercharged" by AI, making it harder than ever to distinguish a fake storefront from a legitimate one.

The impact can be broken down into three main categories:

1. The Economic Toll: Beyond the "Bargain"

While a consumer might think they are just getting a cheap knock-off, the broader economic consequences are severe.

  • Direct Losses: In 2025 alone, Americans lost over $12.5 billion to fraud, with imposter scams being a leading category.

  • Business Erosion: Legitimate merchants lose an average of 3% of their total revenue to fraud. When factoring in indirect costs (shipping, chargeback fees, and customer support), businesses actually lose about $207 for every $100 stolen.

  • Funding Crime: The proceeds from counterfeit goods often fund more serious criminal enterprises, including human trafficking and drug smuggling.

2. The Trust Crisis: Brand & Consumer Psychology

The most lasting damage isn't the lost money, but the erosion of trust in the digital marketplace.

  • Brand Damage: When a customer receives a dangerous or low-quality counterfeit from a site mimicking a real brand (like Nike or Apple), they often blame the original brand, leading to "brand erosion" and lost future sales.

  • Psychological Distress: Victims report feelings of vulnerability, helplessness, and "financial anxiety." Research shows that even exposure to news about these scams can make people more hesitant to engage with government or retail services online.

  • The "AI-Phobia" Effect: As AI generates hyper-realistic deepfake ads and fake reviews, consumers are becoming increasingly skeptical of all digital interactions, which can slow the growth of legitimate e-commerce.

3. Physical Health and Safety Risks

Buying from an imposter isn't just a financial risk; it can be a physical one.

  • Toxic Materials: Counterfeit cosmetics often contain harmful levels of lead or mercury, while fake electronics lack safety certifications and are prone to catching fire or exploding.

  • Ineffective Medicine: Counterfeit pharmaceuticals are a global crisis. These "medicines" often contain zero active ingredients or, worse, toxic fillers like rat poison, leading to thousands of preventable deaths annually.


Common tactics and how they work:

  • Deepfake Endorsements: AI-generated videos of celebrities or influencers "promoting" a limited-time sale on a fake site.

  • Phantom Inventory: High-quality, AI-generated images of products that don't actually exist.

  • Search Poisoning: Scammers use SEO and paid ads to push their fake storefronts to the top of Google or social media feeds.

  • "Verified" Imposters: Using stolen data and bot networks to create social media profiles with fake verification badges and thousands of fake reviews.

Pro-Tip: If you see a deal that seems "too good to be true" on social media, don't click the ad. Instead, manually type the official brand's website into your browser to verify if the sale is real.


Socci, William


I've seen countless misinformation images, videos, comments, and even articles claiming to be credible. Almost everything we see online is filled with misinformation, even if it's on a microscopic level.

Some are simple and really won't harm anyone. For example, I've seen several Facebook posts saying "Brandon Aiyuk Trade Confirmed to Washington Commanders." I know this isn't true, although there was some speculation. Thankfully it isn't hurting anyone, but it's still false, so it takes me an extra second to look it up and verify it. Another kind that isn't necessarily hurting anyone, but has the power to, is blatant misinformation in AI videos, which portray a version of events that never happened. There are some really silly ones that make me laugh, like the Yeti or Bigfoot running through the forest finding booger sugar and saying stupid stuff. It's not real and it's entertaining, but it looks so real that there has to be at least one person out there who believes it actually happened. There are also more serious AI videos altered to push a different viewpoint, and it is getting more and more difficult to tell the difference between real and AI. For example, I provided a link to an Instagram video, generated by AI, of the power going out at a racetrack followed by a pile-up, along with an image of a news channel actually running a story on it, thinking it was real.


Now, when thinking about how to separate misinformation from real information online, the easiest solution is turning it off.

As long as humans are in charge of providing information in any form, there will always be misinformation. Newspapers are subject to it, and so is a basic conversation: someone spews out some nonsense, and it sticks with you until ten more people finally tell you that whoever told you that was wrong.

It's human nature to trust as much as it is to lie. AI and technology are just a more indirect way to do it, and an easier one, because you are instantly handed an audience for free.

There will never be a way to guarantee the complete truth, because technology only does what it is told; as long as a human is telling it what to do, there will be errors and wrong information.

It's hard to ban something like AI, too, because that would let people's guard down into thinking AI could never slip through the cracks of the internet, and the effect could be ten times worse. However, letting it run rampant doesn't work either, so maybe the answer is implementing modest laws, or having a large global team of professionals build software whose only job is to fact-check and detect AI, and then mass-distributing it.


Duperval, Jude


As technology continues to advance and our dependency on it grows, we're seeing a corresponding rise in misinformation across the media. With increased competition for attention, many outlets are incentivized to promote immediacy over legitimacy and accuracy. This often leads to attention-grabbing captions (even if it means oversimplifying the topic), emotionally charged content meant to distort and persuade the public, and content intended more to gather clicks than to build genuine understanding.

To help mitigate these issues, it's useful to stay alert to content that appears driven by agenda rather than information. Some practices include seeking primary sources when possible, cross-referencing multiple reputable resources, and analyzing opposing perspectives to gain more depth. Similarly, advocating for basic media literacy and acknowledging our own susceptibility to biases can significantly reduce the influence that misinformation in the media has on us.


Gemini: Clickbait


In simple terms, clickbait is content specifically designed to entice users to click on a link, often by using sensationalist, misleading, or exaggerated headlines.

Think of it as the "digital neon sign" of the internet. It promises a massive payoff—like a shocking secret or a life-changing tip—but the actual content rarely lives up to the hype.


Common Characteristics

You can usually spot clickbait by these recurring tropes:

  • The "Curiosity Gap": Using headlines that withhold just enough information to make you itch to find out. (e.g., "He opened the box, and you won't believe what was inside!")

  • Emotional Manipulation: Exploiting feelings of anger, fear, or extreme joy.

  • Hyperbole: Using "extreme" words like shocking, mind-blowing, devastating, or miraculous.

  • Numbered Lists: These are often "listicles" spread across 20 different pages to maximize ad revenue.

Why Does It Exist?

It all boils down to ad revenue. Most websites make money based on how many people visit their page. Even if you leave the site feeling disappointed 10 seconds later, the site owner still gets "credit" for your visit, which they can sell to advertisers.

The Downside

While it works in the short term, clickbait has a few major drawbacks:

  1. Trust Erosion: People eventually stop clicking on links from sources they know are "spammy."

  2. Algorithm Penalties: Modern search engines and social media platforms (like Google and Facebook) now actively downrank content that has high click-through rates but very low "dwell time" (meaning people leave immediately).
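As a rough illustration of that signal (this is not Google's or Facebook's actual formula; the numbers and the 60-second "expected dwell" baseline are invented for this sketch), a ranker could discount raw click-through rate by how long readers actually stay:

```python
# Toy ranking adjustment, NOT any platform's published formula.
# Assumption: we penalize items that attract lots of clicks (high CTR) but hold
# attention only briefly (low dwell time), the classic clickbait signature.

def quality_score(ctr: float, avg_dwell_seconds: float, expected_dwell_seconds: float = 60) -> float:
    """Scale raw click-through rate by the share of the expected read time that is kept."""
    dwell_ratio = min(avg_dwell_seconds / expected_dwell_seconds, 1.0)
    return ctr * dwell_ratio

# Clickbait: huge CTR, near-instant bounce.
print(quality_score(ctr=0.20, avg_dwell_seconds=8))   # ~0.027
# Solid article: modest CTR, readers stay.
print(quality_score(ctr=0.05, avg_dwell_seconds=90))  # 0.05
```

Under this kind of scoring, the clickbait item's click-through advantage disappears once its near-instant bounce rate is taken into account.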

The Golden Rule: If a headline sounds too good to be true, or if it makes a massive claim but refuses to give you the context upfront, it's almost certainly clickbait.


Pruitt, Dante



Some issues I have found online about misinformation relate to rampant AI videos that are getting increasingly realistic. An example I saw recently involved a town in Russia that received a large amount of snow. When I tried to find accurate videos of how the town looked, there were videos showing snow slopes the size of buildings, which was not the amount of snow the town actually received.



Rodrigues, Filipe


One issue I notice a lot with misinformation is on social media platforms like X and TikTok. Short clips, screenshots, and posts are often taken out of context or made to go viral without any fact checking. Because the content is quick and emotional, people share it before verifying if it is true.


Misinformation also comes from people in positions of power. For example, when government officials publicly label individuals as domestic terrorists without clear evidence, it spreads fear and false narratives very quickly. Statements like these are repeated online and treated as facts even when they are later questioned or corrected.


Some best practices to reduce misinformation include checking multiple credible sources, avoiding reposting unverified claims, and being skeptical of content designed to provoke strong reactions. Platforms should also do more to slow the spread of false information and promote corrections.


Yaucan, Marlon


One major issue I find with misinformation is the rapid spread of unverified or misleading content meant to create division. Exaggerated headlines and emotionally charged content create fear, anger, and division. This is especially apparent on social media apps, where information spreads faster than most people can verify it, and many believe it without bothering to fact-check.

To mitigate the issue, I would verify claims through multiple credible sources, avoid biased news networks and influencers, and be cautious of content that is designed to provoke strong emotional reactions.


In the last few years, and particularly reaching a peak in early 2025, major social media companies have significantly shifted their approach to fact-checking. This trend marks a move away from centralized, third-party expert moderation toward decentralized, "crowdsourced" models.

Dias, Kevin


Misinformation is extremely common on Instagram and X because of how loosely the platforms are moderated. Both platforms have systems in place to mitigate these issues, but they're often underutilized or not enough to solve the problem. For example, X has a Community Notes system, where vetted users can mark a post as misleading, with a short explanation of how the post is not truthful. Instagram has a similar system, but it is extremely underutilized; I believe it relies on AI detection, which hasn't really been working, in my opinion, judging from how rarely I've seen the feature appear while using the app. I think these mitigations are good in theory, but they should have more staff on them, with fewer of the verifications being done by regular users. It would be better if, for example, the X Community Notes system were managed mostly or solely by people at X. The only issue there would be conflicts of interest on certain topics, but I'm not sure it would matter either way.

Gemini

The primary steps taken by these companies include:

1. Ending Third-Party Fact-Checking Programs

The most significant shift occurred in January 2025, when Meta (Facebook and Instagram) announced it would end its long-standing third-party fact-checking program in the United States.

  • The Change: Previously, Meta partnered with independent organizations (like Reuters and PolitiFact) to review content. Now, they are phasing out these partnerships.

  • The Rationale: CEO Mark Zuckerberg stated the system had become "too restrictive" and prone to "over-enforcement," arguing that a program meant to inform had inadvertently become a tool for censorship.

2. Adoption of "Community Notes" Models

Following the path set by X (formerly Twitter), other platforms are transitioning to user-led moderation.

  • X’s Community Notes: This program relies on users to add "context" to misleading posts. A note only becomes public if it is rated as "helpful" by a diverse group of contributors with different historical rating patterns (see the toy sketch after this list).

  • Meta’s Pivot: Meta is currently implementing a similar "Community Notes" system to replace its professional fact-checkers, essentially "crowdsourcing" the labor of identifying misinformation to its user base.
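To make that "bridging" requirement concrete, here is a minimal sketch; this is not X's published algorithm, and the two-cluster model, thresholds, and function name below are invented purely for illustration. The idea is that a note only goes public when raters who usually disagree with each other both find it helpful.

```python
# Toy illustration of "bridging" consensus, NOT X's real Community Notes algorithm.
# Assumption: each rater belongs to one of two viewpoint clusters ("A" or "B"),
# and a note goes public only if both clusters independently rate it helpful.

from collections import defaultdict

def note_goes_public(ratings, threshold=0.6, min_raters_per_cluster=2):
    """ratings: list of (cluster, is_helpful) pairs for a single note."""
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, is_helpful in ratings:
        total[cluster] += 1
        helpful[cluster] += int(is_helpful)

    # Require enough raters AND a "helpful" majority in *every* cluster,
    # so a note cannot go public on one side's support alone.
    return all(
        total[c] >= min_raters_per_cluster
        and helpful[c] / total[c] >= threshold
        for c in ("A", "B")
    )

# One-sided support fails; cross-cluster support succeeds.
print(note_goes_public([("A", True), ("A", True), ("A", True), ("B", False), ("B", False)]))  # False
print(note_goes_public([("A", True), ("A", True), ("B", True), ("B", True)]))                 # True
```

The design choice worth noticing is that agreement across clusters, not a simple majority, is what unlocks visibility, so a note cannot be pushed live by one side brigading the ratings.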

3. Policy Rollbacks on Specific Topics

Companies have softened their stance on several previously strictly moderated subjects:

  • Political Speech: Meta and X have loosened restrictions on "civic content," treating political posts from followed accounts more like standard social content rather than demoting them by default.

  • Specific Issues: Restrictions on topics like immigration and gender identity—which were previously subject to high levels of automated and human scrutiny—have been eased to allow for broader "political discourse."

  • Election Misinformation: Both X and YouTube previously rolled back policies specifically targeting "The Big Lie" or claims that the 2020 US election was fraudulent.

4. Reduction in Moderation Friction

To improve user experience and "free expression," companies have reduced the visual cues that discourage sharing:

  • Less Obtrusive Labels: Meta is moving away from "full-screen interstitial warnings" (screens you must click through to see a post) in favor of smaller, less intrusive labels.

  • Ending Demotions: Meta announced it would stop "deboosting" (reducing the reach of) content simply because it was flagged by a fact-checker, allowing such posts to spread as naturally as any other content.

5. Staffing and Structural Changes

  • Trust and Safety Layoffs: Since 2023, Meta, X, and Google have collectively laid off thousands of employees in "Trust and Safety" and "Ethical AI" departments.

  • Geographic Shifts: In 2025, Meta began moving many of its remaining content moderation teams from California to Texas, a move intended to "reduce the bias" of the moderation staff.


Comparison of Fact-Checking Approaches

  • Authority: professional journalists and fact-checkers in the traditional model (pre-2025); platform users (crowdsourced) in the new "loosened" model.

  • Enforcement: reach reduced and labels applied (traditional); "helpful" notes added with no reach reduction (new).

  • Speed: slow, requiring professional review (traditional); variable, requiring community consensus (new).

  • Philosophy: protection against misinformation (traditional); prioritization of "free expression" (new).


Matthew Fletcher 


I think one example of misinformation on the internet is AI deepfakes. These videos are made to look like people such as doctors or politicians, and they spread information that is sometimes based on half-truths and blown out of proportion, which can sometimes lead to people being harmed.


Mergal Souza, Ettory

Misinformation in the media is a common issue today, especially online. It often shows up as exaggerated headlines, biased reporting, or information shared without being fact-checked. Social media makes it easy for false or misleading stories to spread quickly.

To help reduce misinformation, it’s important to check facts before sharing, rely on trusted news sources, and compare information from multiple outlets. Being aware of bias and thinking critically about what we read can also make a big difference.

These practices can help people stay better informed and avoid spreading false information.

Fabricio Miranda

I will be focusing on echo chambers. An echo chamber, in regard to misinformation, is an environment where you are basically exposed only to ideas that are similar to your own. This is very dangerous because you keep being fed the information you are already looking for, so the "facts" you believe get validated and feel true even when they may not be true at all. These echo chambers are formed by the algorithms behind many of the big social media platforms used today. Some ways to combat this are diversifying where you receive your news, drawing from a multitude of sources across different sides of the internet. You can also try to put yourself in the other side's shoes and view the argument from their perspective, which can help reduce bias toward one side. Lastly, you should always fact-check; you can never fully trust something posted on the internet or by news outlets, so always be on the lookout.

Viktor Hreskiv


For this assignment, I wanted to share my perspective as a student on misinformation online, especially on social media and in digital news.


From my point of view, misinformation is very common and often difficult to recognize. For example, I frequently see edited headlines on social media that change the meaning of a news story, even though the article itself says something different. I have also seen short video clips taken out of context that make people appear to say or do things they never actually meant.


AI-generated images and videos add another layer to this problem. Some images look realistic, such as fake photos of public figures or events that never happened. Others are clearly silly or meant as jokes, but they still get shared widely and can confuse people who do not realize they are AI-generated.


I think this is dangerous because repeated exposure to false or misleading content can influence opinions without people noticing. As a student, I realize we may trust what we see quickly instead of checking sources, especially when information spreads fast and looks believable.


This shows why media literacy and critical thinking are extremely important for not only students but everyone today.


Elvis Chen

This week’s issue related to misinformation is the increasing difficulty of identifying false or misleading information in digital spaces. Misinformation can spread through social media posts, misleading headlines, edited images, and more recently, AI-generated content. Because these forms of misinformation often appear credible and are shared quickly, they can influence public understanding before accurate information becomes widely available.

I think the best practice for addressing this issue is encouraging careful evaluation of information sources. This includes checking the credibility of authors and outlets, comparing claims across multiple reliable sources, and being cautious of content designed to provoke strong emotional reactions. Developing media literacy skills can help individuals recognize misinformation, whether it comes from traditional online sources or newer tools such as AI.

Overall, reducing misinformation requires consistent critical thinking and responsible information-sharing practices across all digital platforms.

Brown, Nadia


Misinformation in the media (news outlets, digital newspapers, and magazines) largely stems from websites that impersonate genuine news sources. For example, ABCNews.com.co, owned by Paul Horner, posed in the past as ABC News, which is owned by Disney. This led to false narratives being posted and shared on social media.


Some techniques for mitigating the spread of misinformation are verifying that the information comes from a reputable source and author. Is the author reputable? Are they an expert? Is their reporting transparent and accurate? Another consideration is determining whether bias is present in the article.


The News Literacy Project has a great process for this which I attached. 


From Gemini: An echo chamber is a digital environment where a person only encounters information or opinions that reflect and reinforce their own. Much like a physical room where your own voice bounces back to you, a social media echo chamber ensures you rarely hear "outside" perspectives.

Here is a high-level breakdown of how they form and why they matter.


How They are Built

Echo chambers aren't usually created on purpose by the user; they are the result of three converging forces:

  • Algorithmic Curation: Platforms (like TikTok, X, or Instagram) want to keep you engaged. To do this, they show you content you’ve liked or interacted with before. If you engage with a specific political or social view, the algorithm serves you more of it (see the toy simulation after this list).

  • Confirmation Bias: As humans, we naturally prefer information that confirms what we already believe. We are more likely to "Like" a post we agree with and "Block" or scroll past something that makes us uncomfortable.

  • The "Follow" Mechanism: We choose who we follow. Over time, we weed out the friends or accounts that annoy us, leaving a curated feed of people who think exactly like we do.

The "Filter Bubble" Effect

While "echo chamber" refers to the beliefs being reinforced, the filter bubble is the technical infrastructure causing it.

  • Primary Driver: human psychology and social choice (echo chamber); algorithmic "black box" sorting (filter bubble).

  • Result: rejection of outside opinions (echo chamber); lack of exposure to outside opinions (filter bubble).

  • User Awareness: often conscious, since we choose who to follow (echo chamber).



