TW: mention of child sexual abuse, pxrn, and pxrnxgraphy
Artificial Intelligence is no longer a ticking time bomb that can be defused. It's a landmine that has already exploded, and the casualties are jobs, lives, and trauma. One of the most pressing concerns is the use of deepfakes in misinformation and sexual abuse. While celebrities are easy prey, the easier targets are those without a voice – children. Child Sexual Abuse Material (CSAM) has historically been tracked by various agencies through a blend of technology and social engineering to curb the menace, but Artificial Intelligence-Generated Child Sexual Abuse Material (AIG-CSAM) is a different beast altogether. The following screenshots are from the New York Times article A.I.-Generated Child Sexual Abuse Material May Overwhelm Tip Line, dated April 22, 2024. The article delves into an alarming new report by Stanford University's Internet Observatory warning of a deluge of child sexual abuse material created by artificial intelligence. Reuters reports that the US receives thousands of reports of AI-generated child sexual abuse material online. There have been few prosecutions related to AIG-CSAM in the US so far, as cited in this government report.
To tackle the menace, Big Tech, AI leaders, and agencies like Thorn have united to prevent the creation and proliferation of AIG-CSAM. The companies that have pledged their support to make the virtual world safer for children are Amazon, Meta, Google, Microsoft, OpenAI, Mistral, Stability AI, CivitAI, Anthropic, Metaphysic, Teleperformance, All Tech Is Human, and Thorn. This alliance of firms is publicly committing to "Safety by Design" principles to safeguard children from AIG-CSAM and other associated harms.
Today marks a huge step in defending children from sexual abuse in the age of #genAI. Thorn and @AllTechIsHuman lead an unprecedented alliance between @amazon @AnthropicAI @HelloCivitai @Google @Meta @Metaphysic_ai @Microsoft @MistralAI @OpenAI & @StabilityAI.
— Thorn (@thorn) April 23, 2024
Together, we’re setting industry standards with new generative AI principles. Dive into the @WSJ exclusive, featuring insights from VP of Data Science, Dr. Rebecca Portnoff, on how these commitments are a major stride to defend children from sexual abuse. https://t.co/zKgTzOZPGI
— Thorn (@thorn) April 23, 2024
We’re proud to support @Know2Protect in their fight to prevent and combat the online sexual exploitation and abuse of children.
Join us in promoting and advocating for a safer digital environment for our youth!#ChildSafety #OnlineSafety https://t.co/4WOOVlnjZA
— Thorn (@thorn) April 18, 2024
Join us in building a world where every child is free to simply be a kid.https://t.co/xIwRXDmaza pic.twitter.com/19rtgyXHCE
— Thorn (@thorn) April 16, 2024
We commit to @thorn and @AllTechIsHuman‘s Safety by Design principles to ensure child safety is prioritized in the development and deployment of AI tools. pic.twitter.com/bnWSt13tfj
— OpenAI (@OpenAI) April 23, 2024
We’ve teamed up with @thorn, @AllTechIsHuman and other leading tech companies to commit to implementing child safety principles into our technologies and products to guard against the creation and spread of AI-generated child sexual abuse material.
You can learn more about this… pic.twitter.com/AtkDlM7S08
— Stability AI (@StabilityAI) April 23, 2024
Stability AI has partnered with @thorn, @AllTechIsHuman, and other leading technology companies, committing to implement child safety principles into our technologies and products to prevent the creation and spread of AI-generated child sexual abuse content.
Learn more about this initiative here.… pic.twitter.com/oJARMLx4Tv
— Stability AI Japan (@StabilityAI_JP) April 24, 2024
We’re proud today to join @thorn, @AllTechIsHuman and other major AI companies in announcing our support for new principles to address the risk that AI is misused for child sexual abuse harms https://t.co/JbWXD27mbz
— Safer Online by MSFT (@Safer_Online) April 23, 2024
Protecting children is paramount, and we’re proud to join @thorn and @AllTechIsHuman in preventing child sexual abuse.
As a safety-focused company, we’ve made it a priority to implement rigorous policies, conduct extensive red teaming, and collaborate with external experts. https://t.co/2Ufo55K3hE
— Anthropic (@AnthropicAI) April 23, 2024
The Safety by Design principles incorporate the following:
Develop: Developing, building, and training AI models that address child safety risks proactively by a) responsibly sourcing training datasets; b) integrating feedback loop mechanisms; and c) employing content provenance with adversarial misuse in mind from the outset, not as an afterthought.
Deploy: Releasing AI models only after properly screening and evaluating them for child safety by a) safeguarding AI products from abusive content and conduct; b) hosting AI models responsibly; and c) encouraging developer ownership of safety by design.
Maintain: Maintaining the safety of AI models and platforms by relentlessly responding to child safety risks by a) preventing services from providing access to harmful tools; b) investing in R&D for future technology solutions to combat CSAM online; and c) fighting CSAM, AIG-CSAM, and CSEM by preventing platforms from creating, storing, soliciting, or distributing such materials.
Read more here, here, and here.
Big Tech deserves applause for showing solidarity, but the rot runs deeper and stringent regulations are required. Hopefully, the numerous lawsuits against companies providing services like Midjourney will help stifle the accelerating spread of AIG-CSAM and other AI-generated filth.
See Also: Naked Photo Of Sick Child Lands Parent In Trouble As Google’s AI Flags It As Potential Child Abuse Content
See Also: PhD Scholar Publishes Paper Masturbating To Comics Of ‘Young Boys’; University Investigates After Outrage
See Also: 16-Year-Old Girl Sexually Assaulted By Several Men In Metaverse; Police Investigates First Case Of Virtual Rape
See Also: Mark Zuckerberg Apologizes To Families Affected By Online Child Sexual Exploitation: ‘No One Should Go Through…’
See Also: Elon Musk’s X Suspends Over 2 Lakh Accounts In April For Child Exploitation Content: Report