5 minute read | June 6, 2024
More than 350 leaders from child protection NGOs, victim advocacy groups, research organisations, technology providers, and domestic and international police forces, along with advisors, convened in London last month to tackle the global challenge of sexual abuse of children online at this year’s Policing Institute for the Eastern Region (PIER) annual conference, sponsored by Anglia Ruskin University. The debate was moving, ambitious, and thought-provoking. Orrick was delighted to be part of this event, a critical summit in the global effort to improve online safety.
A substantial amount of research focused on the use of online platforms, social media sites, online gaming, and chat rooms by individuals to obtain access to and exploit children.
Although research is still emerging, child sexual abuse material (CSAM) generated by AI, which is almost indistinguishable from non-AI CSAM, is a looming threat, given the ability of generative tools to produce content at speed and scale.
New legislation and rulemaking were a key discussion topic. The last 18 months have seen the introduction of the UK's Online Safety Act (covering user-to-user and search services) and the EU's Digital Services Act (covering "intermediary services" that display or process third-party content, including online search engines and hosting services). On the U.S. side, state and federal lawmakers have been incredibly active, with the passage of the federal PROTECT Act (May 2023), more than 24 new online safety state laws coming into force by January 1, 2025, and the possibility that Congress will pass the Kids Online Safety Act and Children's Online Privacy Protection Act in the fall of 2024.
The discussion highlighted key challenges in regulating tech companies, how they can develop effective trust and safety programs, and the significant effort and resources being committed to building compliance programs, particularly for social media and ed tech companies. What’s clear is that anyone who is not on this compliance journey is going to find themselves behind the curve as regulatory enforcement and legislation continue to heat up.
An important issue in the context of legislation, and one which prompted thoughtful discussion from the audience, was how “tech can help tech”; i.e., how technology companies will be a critical part of the compliance and risk management solution for platforms that are in the crosshairs of new legislative reforms. Key will be the use of AI and personal data/biometrics to assist social media platforms in identifying the relevant risks, complying with the risk assessment requirements, and effectively implementing age assurance, all of which are crucial in both UK and EU regulation. Several notable solutions were discussed.
Finally, a significant theme throughout the conference was the tension between enhanced privacy features and the ability to protect children online. On one hand, keynote speaker Baroness Beeban Kidron highlighted how high privacy settings also protect children from commercial actors. The UK's Age Appropriate Design Code—architected by Baroness Kidron—cemented 18 as the age of adulthood, moved responsibility onto companies to protect the best interests of children, and connected the topics of privacy and safety in the minds of legislators. The Code argues for limiting the collection of location data and access to children's friend networks.
On the other hand, the use of end-to-end encryption was largely decried for its propensity to be exploited by abusers. However, several speakers acknowledged that the argument against end-to-end encryption was being lost. The UN Human Rights Commissioner supports encryption: “End-to-end encrypted tools and services keep all of us safe from crime, surveillance, and other threats. Governments should promote their use rather than imposing client-side scanning and other measures undermining encryption.” Moreover, the opportunity that AI-driven technologies present can only be realized by providing AI large language models with more data, such as conversations, chats, and interactions.
In the context of age assurance, statistics shared by Yoti demonstrated that while users were engaging with the technology, particularly facial scanning, they were choosing the lower-friction approaches and did not want to provide document-based evidence of age. Yoti has been able to offer companies a service that achieves online safety while also protecting privacy, with facial images deleted almost instantaneously after liveness detection.
Despite the startling data and harrowing stories of abuse, the drive for change was palpable, and the value of the dialogue immense. No doubt there will be more examples of solutions to this global threat at next year’s event, which has been rebranded as the International Policing and Public Protection Research Institute (IPPPRI).