Meta joins the “Safety by Design” program, pledging to combat the misuse of generative AI tools for child exploitation by adopting AI development principles. The initiative, led by Thorn and All Tech is Human, focuses on responsibly sourcing training datasets, rigorous stress testing, and investing in research to enhance safety measures.
With an increasing stream of generative AI images flowing across the web, Meta has today announced that it’s signing up for a new set of AI development principles designed to prevent the misuse of generative AI tools to perpetrate child exploitation.
The “Safety by Design” program, initiated by the anti-human trafficking organization Thorn and the responsible development group All Tech is Human, outlines several key measures that platforms can pledge to adopt as part of their generative AI development.
Those measures relate primarily to:
- Responsibly sourcing AI training datasets to ensure they are free of child sexual abuse material
- Committing to stringent stress testing of generative AI products and services to detect and mitigate harmful outputs
- Investing in research and future technology solutions to improve such systems
As explained by Thorn: