Google, Meta, OpenAI And Other Big Tech Unite To Fight Deepfake Child Abuse Content Online: Report

TW: mention of child sexual abuse, pxrn, and pxrnxgraphy

Artificial Intelligence is no longer a ticking time bomb that can be defused. It’s a nightmare of a landmine that has already exploded, with thousands of casualties in jobs, lives, and trauma. One of the most pertinent concerns is the use of deepfakes in misinformation and sexual abuse. While celebrities are easy prey, the easiest targets are those without a voice – children! Child Sexual Abuse Material (CSAM) has historically been tracked by various agencies through a blend of technology and social engineering to curb the menace, but Artificial Intelligence-Generated Child Sexual Abuse Material (AIG-CSAM) is a different beast altogether. The New York Times article A.I.-Generated Child Sexual Abuse Material May Overwhelm Tip Line, dated April 22, 2024, delves into an alarming new report by Stanford University’s Internet Observatory that warns of a deluge of child sexual abuse material created by artificial intelligence. Reuters reports that the US receives thousands of reports of AI-generated child sexual abuse material online. There have been few prosecutions related to AIG-CSAM in the States, as cited in this government report.

To tackle the menace, Big Tech, AI leaders, and agencies like Thorn have united to prevent the creation and proliferation of AIG-CSAM. The companies that have pledged their support to make the virtual world safer for children are Amazon, Meta, Google, Microsoft, OpenAI, Mistral, Stability AI, Civitai, Anthropic, Metaphysic, Teleperformance, All Tech Is Human, and Thorn. These firms are publicly committing to “Safety by Design” principles to safeguard children from AIG-CSAM and other associated harms.

https://twitter.com/thorn/status/1782757125053198733

The Safety by Design principles incorporate the following:

Develop: Developing, building, and training AI models that proactively address child safety risks by a) responsibly sourcing training datasets; b) integrating feedback loop mechanisms; and c) employing content provenance, with adversarial misuse treated as a forethought rather than an afterthought.

Deploy: Releasing AI models only after they have been properly screened and evaluated for child safety by a) safeguarding AI products from abusive content and conduct; b) hosting AI models responsibly; and c) encouraging developer ownership of safety by design.

Maintain: Maintaining the safety of AI models and platforms by relentlessly responding to child safety risks by a) preventing their services from scaling access to harmful tools; b) investing in R&D for future technology solutions to combat CSAM online; and c) fighting CSAM, AIG-CSAM, and CSEM by preventing their platforms from being used to create, store, solicit, or distribute such materials.

Read more here, here, and here

Applause is due to Big Tech for showing solidarity, but the rot runs deeper, and stringent regulations are required. Hopefully, the numerous lawsuits against companies providing services like Midjourney will stifle the accelerating AIG-CSAM and other AI-generated filth.

See Also: Naked Photo Of Sick Child Lands Parent In Trouble As Google’s AI Flags It As Potential Child Abuse Content

See Also: PhD Scholar Publishes Paper Masturbating To Comics Of ‘Young Boys’; University Investigates After Outrage

See Also: 16-Year-Old Girl Sexually Assaulted By Several Men In Metaverse; Police Investigates First Case Of Virtual Rape

See Also: Mark Zuckerberg Apologizes To Families Affected By Online Child Sexual Exploitation: ‘No One Should Go Through…’

See Also: Elon Musk’s X Suspends Over 2 Lakh Accounts In April For Child Exploitation Content: Report
