

5 minute read | June.06.2024

More than 350 leaders from child protection NGOs, victim advocacy groups, research organisations, technology providers, domestic and international police forces, and advisors convened in London last month to tackle the global challenge of online child sexual abuse at this year’s Policing Institute for the Eastern Region (PIER) annual conference, sponsored by Anglia Ruskin University. The debate was moving, ambitious, and thought-provoking. Orrick was delighted to take part in this critical summit in the global effort to improve online safety.

Startling Data

A substantial amount of research focused on the use of online platforms, social media sites, online gaming, and chat rooms by individuals to obtain access to and exploit children.

  • Last year, reports to the National Center for Missing and Exploited Children's CyberTipline for suspected Child Sexual Abuse Material (CSAM) increased by 12%.
  • The Internet Watch Foundation reported that self-generated CSAM almost doubled, with close to 20% of US children reporting having shared self-generated CSAM.
  • Organized sextortion efforts increased by almost 600%, with the highest impact on adolescent boys.

Although research is still emerging, generative AI CSAM, which is almost indistinguishable from non-AI CSAM, is a looming threat, given its ability to generate content at speed and scale.

Regulatory Overdrive

New legislation and rulemaking were a key discussion topic. The last 18 months have seen the introduction of the UK's Online Safety Act (covering user-to-user and search services) and the EU's Digital Services Act (covering "intermediary services" that display or process third-party content, including online search engines and hosting services). On the U.S. side, state and federal lawmakers have been highly active, with the passage of the federal PROTECT Act (May 2023), more than 24 new state online safety laws coming into force by January 1, 2025, and the possibility that Congress will pass the Kids Online Safety Act and the Children's Online Privacy Protection Act in the fall of 2024.

The discussion highlighted key challenges in regulating tech companies, how those companies can develop effective trust and safety programs, and the significant effort and resources being committed to building compliance programs, particularly by social media and ed tech companies. What’s clear is that companies not yet on this compliance journey will find themselves behind the curve as regulatory enforcement and legislation continue to heat up.

An important issue in the context of legislation, and one which prompted thoughtful discussion from the audience, was how “tech can help tech”; i.e., how technology companies will be a critical part of the compliance and risk management solution for platforms that are in the crosshairs of new legislative reforms. Key will be the use of AI and personal data/biometrics to help social media platforms identify relevant risks, comply with risk assessment requirements, and implement effective age assurance, all of which are crucial under both UK and EU regulation. Some notable solutions:

  • Project Arachnid in Canada uses a web crawler to detect CSAM at global scale and issue notices to providers worldwide; to date, it has sent over 40 million notices to service providers.
  • Yoti provides facial scanning for age estimation and assurance, along with liveness detection. Yoti has conducted over one billion age checks to date.
  • Praesidio ran prominent social media campaigns featuring influencers in Ghana and Benin to educate young people about ways to protect themselves against online sexual abuse and child sacrifice. The campaign's success makes it likely to be repeated in other jurisdictions.
  • Dragon Shield, developed by researchers at Swansea University, is a training portal for child safeguarding practitioners, providing four phases of training, detailing the sub-tactics used by groomers, and including live conversation simulators. The researchers have also developed Dragon Spotter, a linguistics- and AI-based chat classifier that helps law enforcement triage chat logs on devices seized from suspected offenders and assess the likelihood that they contain online grooming conversations.
  • Securium Safeguard provides conversation analysis for chatroom and livestreaming platforms, combining psychology with AI to flag predatory behavior and grooming. Using a threat scoring system, the technology also accurately predicted when chat within a livestream was moving towards predatory behavior, enabling real-time protections. Securium Discover provides content analysis, searching text and images at speed with a 99.8% accuracy rate in detecting victims; it can find copies of documents on the web or on a device and can search 2,000 websites per hour.

The Tension Between Privacy & Online Safety

Finally, a significant theme throughout the conference was the tension between enhanced privacy features and the ability to protect children online. On one hand, keynote speaker Baroness Beeban Kidron highlighted how high privacy settings from commercial actors also protect children. The UK's Age Appropriate Design Code—architected by Baroness Kidron—cemented the age of an adult as 18, moved responsibility onto companies to protect the best interests of children, and connected the topics of privacy and safety in the minds of legislators. The Code also calls for limiting the collection of location data and restricting access to children's friend networks.

On the other hand, the use of end-to-end encryption was largely decried for its propensity to be exploited by abusers. However, several speakers acknowledged that the argument against end-to-end encryption was being lost. The UN Human Rights Commissioner supports encryption: “End-to-end encrypted tools and services keep all of us safe from crime, surveillance, and other threats. Governments should promote their use rather than imposing client-side scanning and other measures undermining encryption.” Moreover, the opportunity presented by AI-driven technologies can only be realized by providing large language models with more data, such as conversations, chats, and interactions.

In the context of age assurance, statistics shared by Yoti showed that while users were willing to engage with technology, particularly facial scanning, they chose lower-friction approaches and did not want to provide document-based evidence of age. Yoti has been able to offer companies a service that achieves online safety while also protecting privacy, with facial images deleted almost instantaneously after liveness detection.

A Valuable Conversation

Despite the startling data and harrowing stories of abuse, the drive for change was palpable, and the value of the dialogue immense. No doubt there will be more examples of solutions to this global threat at next year’s event, which has been rebranded as the International Policing and Public Protection Research Institute (IPPPRI).