by Bec May
If we continue to develop our technology without wisdom or prudence, our servant may prove to be our executioner. - Omar Bradley
Educational technology is now at the heart of most classroom experiences. Combine this with the pervasive reach of social media, and 21st-century students have a digital footprint comparable to their physical ones.
The potential of technology in the educational space is nothing short of remarkable: it facilitates personalized, mastery-based learning, saves teachers valuable time, and equips students with the skills they will need to enter a tech-first workforce. Yet, as we all know, life online has a darker underbelly, one that exposes children to cyberbullying, harmful content, and threats to their privacy and safety.
This conflict creates significant challenges for educators, pastoral care teams and safeguarding staff as they seek to promote engagement with the digital world while mitigating potential online risks.
By looking to established frameworks and thoughtfully leveraging safeguarding technologies, schools can address the online risks associated with the 4 Cs of online safety: Content, Contact, Conduct, and Commerce.
The children in our classrooms are firmly digital natives, with most having had access to technology or 'screen time' since toddlerhood. These tech-savvy humans have a grasp of technology well before they have the emotional and intellectual maturity to understand the real-world risks associated with their iPads. This makes for a dangerous online playing field, where children are often out of their league.
In an online ecosystem where social media, messaging apps, and gaming platforms converge, the lines often blur between opportunities and risks.
And that's even before our kids have hit the digital books, with educational platforms being used daily by 63% of US students. While these tools offer incredible opportunities for learning, connection, and creativity, they also expose children and young people to a plethora of risks, such as harmful content, cyberbullying, and, in extreme cases, online predators. While schools can and should turn to technology solutions and the good old 'look over the shoulder' approach to monitor online interactions, any educator who has been in the game for a while knows that trying to keep track of every threat crossing students' desks is like trying to plug holes in a sinking boat with only your fingers—no matter how hard you try, you're swimming home.
This is where the four Cs of online safety come into play, equipping educators with a framework to identify, address, and mitigate online risks.
Global standards for keeping students safe online vary widely, reflecting different legal frameworks, educational priorities, and technological approaches. In Australia and Canada, there is no single mandated framework that schools must adhere to. By contrast, in the United States, laws such as the Children’s Internet Protection Act (CIPA) and the Family Educational Rights and Privacy Act (FERPA) shape online safety practice by mandating content filtering, student privacy protections, and data security in federally funded schools.
Looking across the pond, the UK has taken a more structured approach to online safety, with the Keeping Children Safe in Education (KCSIE) guidelines standing out as one of the most comprehensive safeguarding frameworks. While KCSIE is only a statutory requirement in the UK, its structured approach serves as a valuable resource for other nations striving to improve online safety in schools.
KCSIE groups online safety risks into four areas: Content, Contact, Conduct, and Commerce. These categories address exposure to harmful material, interactions with potential predators, inappropriate online behavior, and risks linked to marketing and financial exploitation. They provide educators with a framework for identifying and addressing these online safety risks with appropriate safeguarding strategies.
‘Content: being exposed to illegal, inappropriate or harmful content, for example, pornography, fake news, racism, misogyny, self-harm, suicide, anti-Semitism, radicalization and extremism’ (KCSIE 2021).
Content risks are relatively self-explanatory, referring to the vast array of information students encounter online every day, including text, images, and videos. There's no doubt that the globalized flow of information from the World Wide Web has brought with it a treasure trove of resources, one that children who grew up on Encyclopedia Britannica could only dream of. However, it also exposes our youth to harmful content, including pornography, online gambling, fake news, racism, misogyny, self-harm propaganda, and extremist materials.
Misogynistic content, for example, has become an increasing concern in schools worldwide. This content tends to spread like wildfire due to its fervent supporters, who amplify it through social media, forums, and video platforms. This rapid dissemination infiltrates classrooms and playgrounds, influencing students' behaviors, language, and attitudes toward peers and educators alike.
Educators need to be able to recognize the online and real-world signs of exposure to such content, such as changes in language, attitude, social interaction, and group dynamics, in order to address the fallout as early as possible. Open conversations with students that promote critical thinking and inclusivity are essential to help counter these narratives before behavior escalates.
‘Contact: being subjected to harmful online interaction with other users; for example: ... adults posing as children or young adults with the intention to groom or exploit them for sexual, criminal, financial or other purposes’ (KCSIE 2021).
Contact risks refer to harmful interactions students may have with other online users, including on social media, online forums, messaging, and chat apps, as well as inappropriate advertising. These interactions can expose young people to a range of dangers, including exploitation, manipulation, and inappropriate communication, all with the click of a button.
Research shows that children are more likely to experience harmful contact with peers and adults online than in the real world. While offline risks are constrained by physical boundaries like schools, neighborhoods, and social circles, the online world has no such limits, creating a vast and unregulated space where harmful interactions can persist unchecked.
Among the most concerning of these risks is sexual content. Nearly half of children between the ages of 9 and 16 report experiencing or witnessing regular instances of unwanted sexual content, including being pressured to send explicit images or receiving unsolicited nude photographs. Often driven by peer-to-peer pressure or orchestrated by online predators posing as peers, exposure to contact risks involving sexual exploitation is alarmingly common and frequently goes unreported due to shame, guilt, and fear of punishment.
Teaching students how to recognize suspicious behavior, confidently say no to inappropriate requests, and report incidents without fear of judgment or repercussion helps set students up for success when difficult situations arise.
‘Conduct: personal online behavior that increases the likelihood of, or causes, harm; for example, making, sending, and receiving explicit images (e.g., consensual and non-consensual sharing of nudes and semi-nudes and/or pornography), sharing other explicit images, and online bullying’ (KCSIE 2021).
Conduct refers to how students behave online, encompassing their online etiquette, how they interact with others in the online space, and the lasting impact of their digital footprint. Unlike contact, which is more about avoiding harm, conduct is about promoting responsible, ethical behavior and appropriate use of online platforms in the digital landscape.
We know that children are highly suggestible and can easily fall into unhealthy and even harmful activities and behaviors if left to their own devices (pun intended).
Conduct risks fall into the following categories:
Creating and sharing explicit content, including the consensual and non-consensual creation, sharing, and receiving of explicit images, such as nudes, semi-nudes, and pornography.
Online bullying or cyberbullying is of huge concern here, with almost 27% of US teens aged 13-17 reporting having experienced some form of online bullying within a 30-day period. And it's not necessarily via the channels you would suspect. Notably, 79% of children on YouTube have reported being cyberbullied, followed by 69% on Snapchat, 64% on TikTok, and a comparatively low 49% on Facebook.
Risky online challenges and dangerous trends have become a major part of youth culture, spreading at breakneck speed via social media. While some are a bit of harmless fun, others encourage reckless, illegal, and even life-threatening behaviors.
Conduct risks are not just about making mistakes online. These risks can significantly shape a student’s digital footprint, social interactions, and future opportunities. The challenge for schools here is to get students to understand the risks, take ownership of what they are putting out into the digital world, and navigate the world wide web safely and ethically.
‘Commerce – risks such as online gambling, inappropriate advertising, phishing and or financial scams’ (KCSIE 2021).
Commerce risks are often overlooked in online safety discussions, but they present significant threats. Many online platforms, including games, social media, and educational apps, incorporate monetized features exposing young people to financial scams, gambling mechanics, and aggressive advertising.
These apps often deliberately blur the lines between entertainment and financial investment, encouraging students to spend money irresponsibly and opening the door to fraudulent schemes.
Many online games and apps include gambling-like mechanics—think loot boxes, in-app currency, and pay-to-win systems. While these features may seem relatively benign on the surface, dive a bit deeper and the sharks start to circle. These features:
Encourage unhealthy spending habits and behaviors that can quickly become addictive.
Mimic gambling mechanics, exposing young users to forms of 'betting' before they are of legal age. For example, FIFA Ultimate Team packs and Fortnite V-Bucks operate like gambling loot boxes, where players spend real money for a chance at rare in-game rewards. However, these purchases offer no actual guarantee of a return.
Lead students to spend large amounts of money unknowingly.
Young people are often prime targets of phishing scams, with hackers preying on their lack of experience in the hopes they won't recognize cyber security risks such as fraudulent links, fake emails, or impersonation attempts. These scams:
Trick students into giving away personal and financial information online
Often involve shiny, fake competitions and free giveaways that actually install malware
Encourage students to connect external payment methods to suspicious apps that tie them into ongoing payments.
Malicious actors target students specifically, using fake scholarship emails and social media giveaway scams.
Students use a range of ad-supported apps and websites that expose them to targeted advertising, misleading promotions, and adult content, including:
Unregulated ads promoting get-rich-quick schemes, crypto trading, or online casinos
Social media influencers from TikTok and Instagram pushing paid products disguised as 'organic recommendations', leading users to trust and purchase low-quality, overpriced products.
'Dark patterns'—deceptive design practices that trick young people into making unintended in-app purchases. Examples include auto-renewing subscriptions.
While most schools now have some form of online safety policy, how this policy translates into the real world is another thing entirely. Your online safety policy needs to be so much more than a manual that sits on a shelf gathering dust. It should be a living framework that determines how well your school can prevent, identify, and respond to online risk.
A well-defined policy sets clear expectations for students' digital conduct. By outlining not only acceptable use but also the potential consequences of misuse, students can begin to develop considered online habits that extend beyond the classroom, reducing their exposure to online safety risks.
By integrating online safety into the classroom curriculum, schools help foster digital literacy, imbuing students with the knowledge to create, manage, communicate, and investigate data, information, and ideas. By helping students to identify credible sources, understand privacy settings, and recognize cyber threats such as phishing scams, they become equipped to make informed decisions and navigate the online world safely. Studies have shown that digital literacy positively affects self-directed learning, which in turn positively affects academic achievement.
Not only is an online safety policy best practice, but in many regions, it's mandated. By establishing and maintaining these policies, schools remain compliant with legal requirements, which is often also a necessity for cyber insurance.
It's clear that online policies, strong cyber safety education programs, and collaboration with parents are essential when it comes to evaluating and addressing online child safety risks. However, digital risks also need technical interventions: tools and technologies that work to protect students in real-time, providing a solid foundation for proactive safeguarding.
From firewalls that block harmful content to monitoring tools that track digital activity, technical solutions help educators respond to risks as they arise. As well as providing insight into what's happening in your school network, they also help institutions to align with statutory guidance and relevant legislation.
Firewalls and web filtering software are the first line of defense, blocking harmful websites, unauthorized access attempts, and inappropriate materials. These tools can help prevent exposure to dangerous content, such as explicit images and fake news, and to online predators looking to bypass network restrictions.
Advanced monitoring tools like Fastvue offer real-time insights into students' online activity, flagging risky behavior and identifying patterns that could indicate cyberbullying or exposure to harmful content.
Creating a safe space for students to voice their concerns is crucial. Anonymous online reporting platforms such as Stymie allow students to report bullying, harassment, and hate speech without fear of retaliation. Offering a discreet way to raise the alarm, these tools play a vital role in early intervention and support.
When safeguarding technologies flag an alert or a report raises a concern related to online safety issues, the question isn't just 'What's the risk?' but also, 'What happens next?' This is often where strategies come unstuck, resulting in inconsistent or delayed responses that can escalate potentially minor concerns into major issues.
Does your school have a framework that categorizes risks into low, medium, and high levels? For example:
Low-risk alerts: These alerts might involve occurrences such as a student accidentally accessing restricted content or a one-off instance of prolonged social media use during school hours.
Response strategy: a quick conversation with a teacher may be enough to resolve the issue, with no further escalation required.
Medium risk alerts: These may involve things like repeat visits to a gambling site, actively seeking out how to bypass school security using a VPN, or engaging in persistent inappropriate online communication.
Response strategy: The pastoral care team or designated safeguarding lead (DSL) should be notified, with further action determined by the nature and persistence of the behavior.
High-risk alerts: A student searches for self-harm methods, extremist content, or how to inflict violence.
Response strategy: Immediate intervention is required as time is of the essence. DSLs, pastoral care teams, and possibly mental health professionals and law enforcement should be involved.
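As an illustration only, the tiered framework above can be expressed as a simple triage table. The alert categories, tier assignments, and response wording here are hypothetical examples, not part of KCSIE or any particular monitoring product; a real policy would define its own categories and escalation paths.

```python
# Hypothetical sketch of a tiered online-safety alert triage.
# Categories, tiers, and responses are illustrative examples only.
ALERT_TIERS = {
    "accidental_restricted_access": "low",
    "prolonged_social_media_use": "low",
    "repeat_gambling_site_visits": "medium",
    "vpn_bypass_attempt": "medium",
    "persistent_inappropriate_contact": "medium",
    "self_harm_search": "high",
    "extremist_content": "high",
    "violence_search": "high",
}

RESPONSES = {
    "low": "Teacher has a quick conversation with the student; no escalation.",
    "medium": "Notify the pastoral care team / designated safeguarding lead (DSL).",
    "high": "Immediate intervention: DSL, pastoral care, and possibly "
            "mental health professionals and law enforcement.",
}

def triage(alert_category: str) -> tuple:
    """Return (tier, recommended response) for a flagged alert category."""
    # Unrecognized categories default to medium so they are never silently ignored.
    tier = ALERT_TIERS.get(alert_category, "medium")
    return tier, RESPONSES[tier]
```

The defensive default matters: an alert type your framework has never seen should escalate to a human, not disappear.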
A major gap we regularly see in the implementation of online safety policies is parental involvement. While most parents are aware that their school is using online monitoring systems, do parents know when and how they will be informed of safeguarding concerns? Schools can ask themselves the following questions:
When should parents be contacted immediately?
When is a parent-teacher conference warranted?
How do we balance privacy with appropriate intervention strategies, particularly with students who are young adults?
A parental notification framework ensures transparency, consistency, and clarity for everyone involved. Schools should define clear policies on when an issue requires parental involvement and how the conversation will be framed to support the student.
Schools should also communicate from the outset, ideally at enrollment, how monitoring tools like Fastvue are used so parents understand their purpose and limitations.
Recent reports from the eSafety Commissioner highlight that major tech companies are not adequately addressing issues such as child sexual exploitation material, sexual extortion, and the live streaming of abuse.
Given these shortcomings, educators need to proactively assess the risk profile of apps and online platforms that are being used within the school environment.
Strategies for evaluating potential risks
Stay informed: Apps can and do change quickly. One day, a tween is on a sweet little app called Musical.ly, and the next day, they're playing with the big fish in the influencer pond that is TikTok. There are some great sites that can keep you informed on what's happening in the world of apps, particularly those aimed at children. Common Sense Media, eSafety Commissioner, and ThinkUKnow all provide up-to-date insights on child-targeted apps.
Assess app features: Not all apps designed for children are built with the 4 Cs at the forefront of designers' minds. Key questions to consider:
Does the app allow anonymous messaging?
Is location tracking enabled, making users visible to strangers?
Are there live-streaming capabilities that could expose students to potential predators?
Does the app gamify user engagement with streaks, leaderboards, or other potentially addictive design features?
Are there in-app purchases or ads targeting students?
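One way to make a checklist like this repeatable across an app catalog is to turn each question into a weighted flag. The feature names and weights below are entirely hypothetical, a sketch of the idea rather than an established scoring standard:

```python
# Hypothetical app-risk checklist; feature flags and weights are illustrative.
RISK_FEATURES = {
    "anonymous_messaging": 3,
    "location_tracking": 3,
    "live_streaming": 2,
    "addictive_gamification": 1,
    "targeted_ads_or_iap": 1,
}

def app_risk_score(features: set) -> int:
    """Sum the illustrative weights for each risky feature an app exhibits.

    Unknown feature names contribute nothing, so the checklist can grow
    without breaking older assessments.
    """
    return sum(RISK_FEATURES.get(f, 0) for f in features)
```

A review team could then set its own threshold, say, requiring a safeguarding sign-off for any app scoring above a chosen cutoff before classroom approval.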
Observe how students are actually using the app: While some apps may seem harmless on paper, their real-world use can present risks that are not immediately apparent. Snapchat's disappearing messages, for example, which were originally designed for quick, safe communication, quickly became a primary tool for cyberbullying and sexting amongst teens.
Set up School-Level controls: While schools can't always control what apps students use outside the school gates, they can manage risk within the school environment by:
Using firewalls to block apps known for anonymous messaging, location tracking, or high-risk content.
Enabling privacy settings and safety filters on permitted apps
Educating children on app safety, privacy, and reporting mechanisms
Reviewing and approving new apps before they're introduced into the classroom
Monitor app usage to detect concerning patterns: Use internet monitoring and reporting tools to track how students are engaging online and identify early warning signs by:
Monitoring for searches of high-risk chat apps, which are often used for unregulated interactions, cyberbullying, or potential grooming
Tracking repeated attempts to access VPN services, an indicator that students are trying to bypass school firewalls and access restricted content
Identifying engagement with high-risk digital content and platforms, such as dark web access tools, encrypted messaging services, and forums linked to extremist content
Detecting patterns of concerning search behavior, such as self-harm queries, regular visits to extremist forums, or engagement with gambling sites.
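To make the pattern-detection idea concrete, here is a minimal, purely illustrative sketch of scanning logged search queries against keyword watchlists. The log format, watchlist contents, and category names are assumptions; a real deployment would rely on a dedicated monitoring product such as Fastvue, which uses far richer signals than ad-hoc keyword matching.

```python
import re

# Illustrative watchlists only; real tools combine many more signals.
PATTERNS = {
    "vpn_bypass": re.compile(r"\b(vpn|proxy|unblock)\b", re.IGNORECASE),
    "self_harm": re.compile(r"\bself[- ]harm\b", re.IGNORECASE),
    "gambling": re.compile(r"\b(casino|betting|loot box)\b", re.IGNORECASE),
}

def flag_queries(queries):
    """Return (query, concern) pairs for any logged query matching a watchlist."""
    flagged = []
    for query in queries:
        for concern, pattern in PATTERNS.items():
            if pattern.search(query):
                flagged.append((query, concern))
    return flagged
```

Even a toy matcher like this illustrates the key design point: detection is only the input to the response framework, so every flagged pair should feed into a triage and escalation process rather than an automatic sanction.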
Keeping kids safe online isn't about wrapping them in cotton wool, blocking every slightly suspicious app, and putting parental controls on their phones. It's about empowering students with the knowledge and agency to make responsible decisions while monitoring for inappropriate content and maintaining general internet safety. The 4 Cs provide schools with a clear framework to identify risks and educate students, while real-time monitoring and clear action plans help to address risks when they do arise.
The goal shouldn't be to scare students into compliance but rather to teach them critical thinking, online responsibility, and where and how to ask for help, creating digitally aware, self-reliant students who can thrive in the online environment.
This article appears as part of Fastvue’s Safer Internet Day 2025 initiative. To help schools strengthen their online safety, we’ve also created a downloadable checklist. Use this resource to assess risks, implement best practices, and create a safer digital environment for your students. Download your checklist today.
We love nerding-out about this stuff. So get in touch with us, and let’s see how we can help.