Protecting Children

Jhonatan Jimenez


A primary issue regarding protecting children on social media is exposure to inappropriate content.  Privacy risks are also critical, since minors may unknowingly share personal information that can be exploited by people with bad intentions. Furthermore, algorithm-driven content recommendations can sometimes amplify harmful or age-inappropriate material.

One way to help mitigate these issues is for platforms to implement stronger age verification systems and enforce stricter content moderation policies. Parental controls and default privacy settings for minors can help limit exposure to harmful interactions. Finally, increased transparency and accountability from social media companies can help create safer online environments for young users.


Mehnaz Barsha


One of the biggest threats to children online today is the ease with which predators can reach them through voice and chat features on social media and gaming platforms. Unlike text messages that can be reviewed later, live voice conversations disappear the moment they end, giving parents almost no visibility into who their child is speaking with or what is being said. Predators exploit this by pretending to be other kids and acting friendly to gradually build trust before pushing boundaries.


While some platforms like Roblox have made changes to better protect younger users, they do not go far enough. Most platforms were built around keeping users engaged, not keeping them safe, and updating a few features does nothing to change that problem.


To help address this, children need to be taught what manipulation actually looks like in practice, things like an adult who seems overly interested in them, pushes to keep conversations private, or tries to move the conversation to another app. Teaching kids to recognize grooming early can go a long way in keeping them safe. The Kids Online Safety Act is a step in the right direction because it holds tech companies accountable rather than leaving the responsibility on parents and children to figure it out on their own.


Dylan Moutinho


A major concern in protecting children on social media is their exposure to cyberbullying, inappropriate content, online predators, and privacy risks. Many children also share personal information without understanding the long-term consequences, which can impact their safety and mental health. This has especially been a problem for Roblox, which in recent years has been exposed for having hundreds of predators on its platform, with little to no safety features to stop them.


To reduce these risks, parents should monitor online activity and encourage open conversations about digital safety. Social media platforms should strengthen age verification, privacy settings, and content moderation. Schools can also support digital literacy education to help children recognize and respond to online threats.



Nadia Brown


The largest issue affecting children's social media usage is cyberbullying. According to a national survey of 3,466 students ages 13-17 conducted by the Cyberbullying Research Center, 58% reported experiencing cyberbullying. Cyberbullying can happen over text, email, instant messaging, and social media; however, social media is the largest contributor, with Instagram the single biggest platform. Unlike traditional forms of bullying, cyberbullying can happen anywhere.

The most common forms of cyberbullying reported by adolescents include:

  • 56% hurtful comments

  • 53% exclusion

  • 53% online rumors  

  • 50% embarrassment or humiliation

  • 42% repeated unwanted contact via text or online

  • 38% direct threats through text message or direct messages

 

Cyberbullying is an international public health concern because of its impact on the mental health of adolescents. Targets of cyberbullying report increased depressive thoughts, anxiety, loneliness, and suicidal behavior. Perpetrators of cyberbullying are more likely to report increased substance abuse, aggression, and delinquent behavior.

 

To reduce cyberbullying, social media companies can implement AI to detect bullying, harassment, and offensive content, or add new features to reduce unwanted attention or comments. For example, Facebook uses AI to moderate content: if a post, comment, or story goes against its Community Standards, it is removed from Facebook. Content that does not violate the Community Standards but is still questionable goes to a team of human reviewers.
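The two-tier flow described above (automatic removal for clear violations, human review for borderline content) can be sketched as a simple threshold rule. The scores and cutoffs below are hypothetical illustrations, not Facebook's actual values:

```python
# Illustrative sketch of a tiered moderation flow: clearly violating
# content is removed automatically, borderline content is escalated
# to human reviewers. Thresholds here are hypothetical.

REMOVE_THRESHOLD = 0.9  # near-certain Community Standards violation
REVIEW_THRESHOLD = 0.5  # questionable: route to human reviewers

def triage(violation_score: float) -> str:
    """Map a classifier's violation score (0.0-1.0) to an action."""
    if violation_score >= REMOVE_THRESHOLD:
        return "remove"
    if violation_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(triage(0.95))  # remove
print(triage(0.60))  # human_review
print(triage(0.10))  # allow
```

In practice the classifier score would come from a trained model, and the thresholds would be tuned to balance false removals against reviewer workload.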

 

 

Another example is Instagram's Comment Warning feature. When a user tries to post a potentially offensive comment, they are reminded of Instagram's community guidelines. This helps because it lets the user know that their comment may be removed or hidden if they proceed. Additionally, Hidden Words lets users create a custom word list to reduce the chance of unwanted comments on their page. If those features are not enough, users can restrict someone, limit unwanted interactions from another user for a period of time, or block them entirely.
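A Hidden-Words-style feature amounts to filtering comments against a user-defined word list. A minimal sketch, assuming simple token matching (the function name and word list are illustrative, not Instagram's actual implementation):

```python
# Illustrative sketch of a Hidden-Words-style comment filter.
# The matching rules and word list are hypothetical.

def hide_comment(comment: str, hidden_words: set[str]) -> bool:
    """Return True if the comment contains any user-defined hidden word."""
    tokens = comment.lower().split()
    # Strip trailing punctuation so "loser!" still matches "loser".
    return any(token.strip(".,!?") in hidden_words for token in tokens)

hidden = {"loser", "ugly"}
print(hide_comment("You are such a loser!", hidden))  # True
print(hide_comment("Great photo!", hidden))           # False
```

A real implementation would also have to handle misspellings, emoji, and evasive spellings, which is why platforms pair word lists like this with machine-learned classifiers.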


Charles Murphy


A few issues to consider regarding protecting children on social media are inadequate age verification protocols, no mandated parental control infrastructure, and the lack of a technical roadmap for enforcement and oversight. These safeguards simply don't exist in the social media algorithm, leaving it inherently suited to your likes, habits, and rituals without boundaries or regulation. The algorithm does not handle the age of the user well and will autonomously suggest anything that may get a positive reaction. Within this 'normal' content, an occasional inappropriate item may be shown to a minor user. Furthermore, without sufficient device-level solutions such as age verification in place, a minor can easily lie about their age. All of this, of course, presents itself as a problem.

 

Some proposed solutions suggest there should be child-specific versions of social media that host an array of child-friendly features, and an algorithm tailored to children. A device level solution would be to have a child-mode smartphone with mandatory parental control dashboards, activity / content monitoring, and emergency lockdown features. Some of these solutions can then be encapsulated into laws to govern the moral imperative of protecting children online, which in turn changes the way we think about and utilize social media.


Jun Li Lin


Child protection online has always been a huge topic, and the most recent well-known example I know of is the Roblox Schlep controversy, in which the YouTuber Schlep, whose channel is known for conducting sting operations against child predators on the platform, was sent a cease-and-desist letter threatening legal action if he continued. After the controversy, Roblox began implementing age verification on the platform, which does little to protect children, since they can simply take their parents' ID cards and use them to gain access to Roblox chat. The problem with age verification is that on day one of its rollout there were already sellers on eBay offering pre-verified accounts. Combined with the fact that anyone can fake their age on the platform, this made the problem worse rather than better: kids who faked their age are now placed with people who are 18+, and predators who buy fake verified accounts are now placed with kids.


Child protection online is quite difficult to tackle. The main ways I can think of to protect kids are teaching online safety in schools and adding chat moderators to kids' games. Every method implemented has a way for bad actors to get around it, so teaching both parents and their children awareness is the best way to protect them online.


Elsa Shaikh

One big issue regarding protecting children on social media is their exposure to harmful content, cyberbullying, and addictive platform features that can negatively impact their mental health. The algorithms used in social media apps tend to prioritize engagement over safety, which further increases these risks.

To mitigate these issues, platforms should improve content moderation as well as age verification. Parents can also use parental controls and limit screen time. Lastly, stronger policies focused on child safety are also essential.

Soo Hee Min


 One of the biggest challenges in protecting children from social media is that excessive use can lead to decreased academic engagement and negative mental health outcomes. Indeed, a large-scale study of Finnish adolescents found that students who used the internet excessively were at a higher risk of school absence, demonstrating that online use can impact academic engagement.


https://www.theguardian.com/education/2024/apr/16/teenagers-who-use-internet-to-excess-more-likely-to-skip-school


    To address this issue, I believe realistic adjustments to the usage environment are needed, rather than relying solely on individual self-control. For example, some American schools require students to keep their phones in locked pouches or classroom lockers during class hours to improve concentration. Social media platforms could also implement practical features, such as automatic pauses after a certain amount of continuous use or reduced notification frequency, for minors' accounts.
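The "automatic pause" idea for minors' accounts can be sketched as a simple rule on continuous session time. The thresholds below are hypothetical values a platform would tune, not any platform's real policy:

```python
# Illustrative sketch of an "automatic pause" rule for a minor's
# account. Thresholds are hypothetical and would need tuning.

CONTINUOUS_LIMIT_MIN = 45  # pause after 45 minutes of continuous use
BREAK_RESET_MIN = 10       # a break of 10+ minutes resets the counter

def should_pause(session_minutes: int, minutes_since_break: int) -> bool:
    """Trigger a pause screen once continuous use exceeds the limit."""
    if minutes_since_break >= BREAK_RESET_MIN:
        return False  # a sufficiently long break resets the session
    return session_minutes >= CONTINUOUS_LIMIT_MIN

print(should_pause(50, 0))   # True: 50 minutes straight, no break
print(should_pause(50, 15))  # False: a long break was just taken
print(should_pause(30, 0))   # False: still under the limit
```

The same rule could drive reduced notification frequency instead of a hard pause, which is the gentler variant suggested above.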


    I believe that a combination of school-level mobile phone management and improved platform design is a more realistic solution than relying solely on individuals.



David Lin


The most important difference between an adult and a child consuming content online is that children are often exposed to the internet at a young age, when their brains are still undergoing a period of critical development. A question I thought of while researching was whether there are any hidden benefits of screen time we can leverage for children's development. It turns out there aren't many pros to giving children access to the internet. According to an OSF Healthcare article, pediatricians recommend zero screen time for those under 2, no more than 1 hour for ages 2-5, and no more than 2 hours for ages 5-17. It shocked me to find the recommendation so low, because we know the reality is that most children's exposure to the internet is much greater.

I would argue the benefits of being online act as a double-edged sword. Between the ages of 9 and 18, when children develop a longing for social interaction and belonging, social media fills this void in the worst ways. As we discussed last week, algorithms are designed to be addictive, and the first topic we explored taught us that misinformation and echo chambers are all too common in the internet age.

Nowadays, many teenagers and even some younger children face a bigger threat than dangerous people online looking to manipulate minors, or even the harmful effects of too much screen time: A.I.-related suicides, a topic that has become increasingly prominent for people of all ages but especially children. It may seem unusual for those of us who use A.I. as a tool for work to see responses that encourage suicide, but as Sanford states in a Stanford Medicine article, "One key difference is that the large language models that form the backbone of these companions tend to be sycophantic, giving users their preferred answers. The chatbot learns more about the user’s preferences with each interaction and responds accordingly. This, of course, is because companies have a profit motive to see that you return again and again to their AI companions. The chatbots are designed to be really good at forming a bond with the user".

The growth in the amount of context a chatbot can hold in memory has led to an increase in the number of people who seek chatbots for comfort and companionship. As Sanford suggests, chatbots are designed to tell you what you want to hear, whether that is a correct answer to a homework question or something to soothe you during a rough time. This is dangerous for children who are still developing their prefrontal cortex, with certain mental health issues exacerbated by being chronically online to begin with. In terms of solutions to these problems, in a Stanford Medicine article titled "Screen time: The good, the healthy and the mind-numbing," Armitage says:

    "We recently published a study that followed kids from as young as 7 to as old as 15 as they received their first phone. We found, on average, getting a phone at a younger age was no better or worse than getting it at an older age, in terms of depressive symptoms, sleep and school grades. (...) I recommend parents wait to give their child a phone until they are mature enough to regulate their own use and not allow it to distract them from sleep, homework, family time, and playing and socializing with other kids in the real world"

At the end of the day, the maturity of the child, as well as the content being consumed, creates a unique case for each child that needs to be addressed individually. Although parents may not hit the mark perfectly every time, a good rule of thumb is to decrease screen time overall, which will benefit children in the long run.



Samuel Emile


After reviewing the readings on protecting children online, one major issue that stood out to me is how social media use is linked to mental health problems among young people. Research shows that heavy use of these platforms is associated with higher rates of anxiety, depression, and even self-harm, especially among younger teens. One concern is that algorithms often push emotionally intense or harmful content because it increases engagement, even if it negatively affects users. Another issue is that current laws and protections are inconsistent. Some states and countries require parental consent or age limits, but enforcement is often weak because platforms do not have strong verification systems. This makes it easy for younger children to access content that may not be appropriate or safe.

To help address these problems, I think several best practices could help. First, platforms should be required to design youth-specific versions with safer features, such as time limits, better reporting tools, and reduced algorithmic recommendations. Second, stronger age verification methods combined with parental authorization could prevent younger children from creating accounts without supervision. Finally, device-level parental controls, like monitoring dashboards and usage limits, could give parents more practical tools to guide their children’s online activity.

Overall, protecting children on social media will require cooperation between lawmakers, technology companies, and families. Clear regulations combined with safer design practices can help reduce risks while still allowing young people to benefit from digital communication.


Zahra Qureshi


Social media platforms present serious risks to children, including exposure to harmful or inappropriate content, cyberbullying, online predators, and addictive algorithm-driven feeds that can negatively impact mental health. Many platforms rely on weak age verification systems, allowing underage users to bypass restrictions easily. Additionally, children often lack awareness of privacy risks, leading them to overshare personal information that can be exploited. These factors collectively create an online environment that can be unsafe and psychologically damaging for young users.


To mitigate these issues, stronger age verification systems and safer default privacy settings should be implemented by platforms. Parents and guardians should use available parental controls and maintain open communication with children about their online experiences. Schools can also promote digital literacy education to teach children how to recognize harmful content, protect their privacy, and report suspicious behavior. A combined effort from platforms, policymakers, educators, and families is essential to ensure a safer digital environment for children.


Kevin Dias


Protecting children is a difficult thing to do on social media, but I don't think the ways we are going about it currently are good for the long term. Requiring everyone to provide identification and facial confirmation when signing up for social media is a major privacy violation, and it actually makes things worse for children and people in general because these third-party confirmation services typically get hacked. For example, the company that handled Discord's ID verification was recently hacked, and photos of people and their IDs were leaked onto the internet for anyone to download. This is not the way to do things. I don't even think children should be on social media anyway; I think it's something that could seriously stunt a child's social development.


Viktor Hreskiv


I believe protecting children on social media is very important. I am concerned about cyberbullying, harmful content, online predators, mental health problems, and gambling-like features in some video games. I think many platforms do not do enough to check ages or reduce addictive features.


In my opinion, companies should use stronger age verification, better content moderation, and stricter privacy settings for minors. I also believe there should be clearer rules about gambling-style game features in games that are primarily played by young people. I think parents and schools should teach children more about online safety and monitor their activity.


I believe protecting children online requires companies, parents, schools, and policymakers to work together in order to combat this.


Ahmed Abdulghany


After reviewing the materials, I realized how complicated the issue of protecting children on social media really is. There are clear mental health concerns. The Columbia Undergraduate Law Review article discusses research showing a connection between heavy social media use and increased depression and anxiety among teens. That trend is concerning, especially given how much time young people spend online today.

At the same time, broad state bans such as those passed in Utah and Florida raise constitutional concerns. Cases like Brown v. Entertainment Merchants Association and Pierce v. Society of Sisters suggest that minors do have First Amendment protections and that parents, not the government, generally have the primary role in directing their children’s upbringing. Completely restricting access may conflict with those principles.

Another issue is that many laws focus more on limiting access than on improving platform safety. Connecticut’s SB00003 strengthens privacy protections for minors and addresses online safety, but it does not fully solve practical enforcement issues such as reliable age verification or consistent parental control systems across platforms.

In my view, a more realistic solution would focus on safety measures rather than outright bans. Social media platforms could develop youth-specific versions with time limits, simplified content feeds, and stronger reporting tools. Stronger age verification combined with clear parental consent systems could also improve accountability. In addition, device-level parental controls, like those available through Google Play, give parents practical tools to manage content without removing access entirely.

Overall, protecting children online requires cooperation between lawmakers, technology companies, and families. A balanced approach that strengthens safeguards while respecting parental authority and constitutional rights seems more effective than broad prohibitions.


Fabricio Miranda


The biggest issue I see regarding children's safety is online harassment and cyberbullying. I see it very often with my niece when she plays Roblox. Some time ago she would consistently come up to me and tell me that a person on Roblox was being mean to her and saying mean things; sometimes the words would be hashtagged so you couldn't see the actual message, but from context you could tell how bad it was. Luckily she wouldn't talk back; she would just ignore it. This can get increasingly bad if the person being cyberbullied doesn't know how to block or remove someone, since the harasser can keep joining the games you are in and continue the harassment, creating stress in a place where you should be having fun playing with friends or by yourself. This is also prevalent on any other social media platform a child may have access to. A child can innocently comment on a short they watched, and people can reply with hateful things. The best practice I have found is a simple one that has been around for a while: simply block the person harassing you. I showed my niece how to do so, and now she doesn't ask me to do it for her; instead she tells me that she herself blocked someone who was being mean to her. It is simple yet effective.


Matthew Fletcher


I believe there are several major concerns when it comes to protecting children online. One of the most serious issues is protecting children from predators. A big part of the problem is that many children are not fully taught how unsafe the internet can be. In many cases, parents may not completely understand how online predators operate. People often talk about teaching kids to avoid the “creepy” stranger on the internet, and while that advice is important, it oversimplifies the issue. Most online predators do not appear creepy at first. Instead, they present themselves as kind, supportive, and trustworthy individuals in order to build relationships with children. This process, known as grooming, allows them to gain a child’s trust before manipulating them.

Another major concern is the impact of the internet on children’s mental health. There has been a noticeable rise in mental health struggles among young people, and social media plays a significant role. Many children experience cyberbullying simply because they are young or vulnerable. This constant exposure to negativity and comparison can contribute to increased rates of depression and anxiety among children and teenagers.

A third issue involves the information that children share online and, more importantly, what parents share about their children. Some parents post pictures or videos of their children for entertainment, attention, or views, without fully considering the long-term consequences. In doing so, they may unintentionally contribute to the risks children face online. A major example of this is the rise of “family vloggers” on platforms like YouTube. These parents record and post detailed videos about their children’s daily lives. For example, a video titled something like “My Daughter Had Her First Kiss” may be partially staged for views, yet it still exposes deeply personal moments. Content like this can attract the wrong kind of attention and create material that predators may misuse.

In my opinion, better protection for children online requires stronger education and possibly new regulations. There should be laws or programs in place that focus on teaching both parents and children about internet safety, with particular emphasis on educating parents. Many of the problems mentioned stem from a lack of awareness and responsible decision-making by adults. I also believe there should be stronger age verification systems online. However, implementing such systems may be difficult because many people are concerned about increased data tracking and privacy issues. Finally, platforms such as YouTube and Facebook should implement more advanced screening processes for videos that feature children. These systems should ensure that no inappropriate or exploitative content involving children is being posted.

Overall, protecting children online requires awareness, responsibility, and cooperation between parents, platforms, and policymakers.


From Gemini


1. Key Threats & Challenges

The contributors identified several recurring risks that children face in digital environments:

  • Predatory Behavior & Grooming: Multiple students (Barsha, Fletcher, Moutinho) highlighted how predators use voice chat and "friendly" personas to build trust and bypass parental supervision.

  • Mental Health & Cyberbullying: Cyberbullying is a dominant concern, with Brown citing that 58% of students have experienced it. Other issues include anxiety, depression, and even AI-related self-harm (David Lin).

  • Algorithmic Risks: Algorithms prioritize engagement over safety, often pushing minors toward inappropriate, addictive, or harmful content (Jimenez, Murphy, Emile).

  • Privacy & Data Risks: Children often unknowingly share personal information. Conversely, some users (Dias) warned that strict ID verification for age checks can lead to massive data breaches.

  • Developmental Impact: Excessive screen time is linked to decreased academic engagement and stunted social development (Min, David Lin).


2. Proposed Solutions

The responses suggest a "layered" defense involving different stakeholders:

Technical & Platform Changes

  • Youth-Specific Versions: Creation of "child-mode" platforms with tailored algorithms and restricted features (Murphy, Emile).

  • Enhanced Moderation: Using AI to detect bullying and offensive content before it reaches the user (Brown).

  • Usage Friction: Implementing "automatic pauses" after continuous use and reducing notification frequency for minors (Min).

Education & Parental Involvement

  • Digital Literacy: Teaching children to recognize the specific signs of "grooming" and manipulation, rather than just avoiding "creepy" strangers (Barsha, Fletcher).

  • Active Monitoring: Parents using locked pouches for phones at school or device-level dashboards to track activity (Min, Abdulghany).

  • Empowerment: Teaching kids how to use basic tools like the "block" and "report" buttons to handle harassment independently (Miranda).

Policy & Legislation

  • The Kids Online Safety Act (KOSA): Cited as a necessary step to hold tech companies legally accountable for child safety (Barsha).

  • Balanced Regulation: A warning that while laws are needed, they must respect constitutional rights (First Amendment) and avoid total bans that overrule parental authority (Abdulghany).


3. Critical Perspectives

Some contributors offered cautionary viewpoints on popular solutions:

  • Age Verification Flaws: Jun Li Lin noted that age verification is easily bypassed via fake accounts or using parents' IDs, often inadvertently placing kids in adult-only spaces.

  • The "Sharenting" Problem: Fletcher pointed out that parents themselves often compromise their children's privacy by posting "family vlogs" for views, creating material that predators can exploit.

Summary Table: Stakeholder Responsibilities

| Stakeholder | Primary Action |
| :--- | :--- |
| Platforms | Implement "Safety by Design," stronger AI moderation, and youth-specific algorithms. |
| Parents | Monitor usage, limit screen time, and delay smartphone access until a child is mature. |
| Schools | Integrate digital literacy and "grooming" recognition into the curriculum. |
| Government | Pass laws (like KOSA) that mandate transparency and accountability from tech giants. |

Based on the diverse insights and concerns shared by the students, here is a consolidated Digital Safety & Best Practices Guide for parents. This guide moves beyond the standard "don't talk to strangers" advice to address modern algorithmic and social threats.


🛡️ Parent’s Guide to Digital Safety

1. Shift the Conversation on Predators

Modern grooming is rarely "creepy" at the start. It is often friendly, supportive, and persistent.

  • Teach "Manipulation Awareness": Explain that predators often pretend to be peers or mentors. Teach your child to flag anyone who:

    • Tries to move a conversation from a public game (like Roblox) to a private app (like Discord or Snapchat).

    • Asks them to keep their friendship a secret from parents.

    • Shows excessive interest in their private life or problems.

  • Monitor "Disappearing" Audio: Voice and chat features in gaming are high-risk because they leave no trail. If your child is gaming, have them play in a common area where you can hear the tone of the conversation.

2. Focus on "Safety by Design"

Individual self-control is often no match for an algorithm designed to be addictive.

  • Implement "Child-Mode" Hardware: Use device-level controls (like Apple Screen Time or Google Family Link) to set hard limits on usage and monitor which apps are being downloaded.

  • Enable "Hidden Words" & Filters: On platforms like Instagram, use "Hidden Words" to create a custom list of terms that will be automatically filtered out of your child's comments and DMs.

  • Audit "Sharenting": Be mindful of what you post. Avoid sharing "firsts" or vulnerable moments (like a first kiss or a doctor’s visit) that could be exploited by others or cause your child future embarrassment.

3. Build Digital Resilience & Literacy

The goal is to move from monitoring to mentorship.

  • The "Block First" Rule: Empower your child to use the block and report buttons immediately at the first sign of meanness, rather than engaging. Show them that they have the power to control their digital environment.

  • Delay the Device: Consider waiting to provide a personal smartphone until the child shows the maturity to regulate their own sleep, homework, and real-world social interactions.

  • Address Mental Health Directly: Discuss the "Highlight Reel" effect—remind them that what they see on social media isn't reality and that algorithms often push intense content just to keep them clicking.

4. Create Physical Boundaries

Technology works best when it has a "home" that isn't the bedroom.

  • No Screens in Bedrooms: To protect sleep and mental health, keep charging stations in a central location like the kitchen.

  • Locked Pouches/Lockers: Follow the lead of many schools; encourage "phone-free" hours during homework or family meals to improve concentration.


| Feature | Action |
| :--- | :--- |
| Age Verification | Don't help your child bypass age gates with your ID; it places them in 18+ pools. |
| Privacy Settings | Set all accounts to "Private" by default. |
| Open Dialogue | Ask "What was the weirdest thing you saw online today?" rather than "What were you doing?" |

 
