
Nudification Stress Test: Generative AI as a Liability Risk
Regulators are reframing deepfake nudification as a product liability failure, not a moderation flaw, forcing generative AI companies to rethink duty of care and governance models.

This practical guide explains how to run a capability hazard review, helping AI teams identify predictable misuse, reduce risk, and ship high-risk AI features responsibly.

The AI Systems Reliability Diagnostic helps organisations assess whether their AI systems can be trusted in real business use — and what must change before scaling.

Discover why real-world AI evaluations like GDPval and Inspect are replacing static benchmarks, and how teams can run practical capability tests for smarter model selection.