The Mercor breach is today's sharpest compliance trigger. The $10 billion AI training data supplier, a vendor to Anthropic, OpenAI, and Meta, has disclosed exposure of biometric data (facial and voice samples) and identity documents via a supply chain compromise of the LiteLLM library. Meta has suspended its engagement pending investigation. The practical question for AI procurement teams is vendor-chain liability: if a training data supplier's breach enables downstream deepfake fraud affecting enterprise or government clients, who bears exposure under applicable data protection and security laws? Security analysts suggest Mercor may be one target in a broader extortion campaign through the LiteLLM vector, which warrants an immediate supply chain audit by any organization using that library.
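As a first triage step, teams can sweep their repositories for dependency manifests that pull in LiteLLM. The sketch below is a minimal, assumed approach (the manifest filenames and the simple text match are illustrative, not an official audit tool); a full audit should also cover lockfiles, container images, and transitive dependencies.

```python
import re
from pathlib import Path

# Illustrative manifest names to scan; extend for your stack (lockfiles,
# Dockerfiles, conda environments, etc.).
MANIFESTS = ("requirements.txt", "pyproject.toml", "Pipfile", "setup.py")
PATTERN = re.compile(r"\blitellm\b", re.IGNORECASE)

def find_litellm_refs(root: str) -> list[str]:
    """Return paths of dependency manifests under `root` that mention litellm."""
    hits = []
    for name in MANIFESTS:
        for path in Path(root).rglob(name):
            try:
                if PATTERN.search(path.read_text(errors="ignore")):
                    hits.append(str(path))
            except OSError:
                continue  # unreadable file: skip it rather than abort the sweep
    return sorted(hits)
```

A text match only flags direct, declared dependencies; LiteLLM could still enter an environment transitively, so pairing a sweep like this with an SBOM or `pip freeze` comparison across deployed environments is the more defensible posture.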
European DPA enforcement continues at pace, with three distinct doctrinal signals worth tracking. Finland's Data Protection Ombudsman has reprimanded a credit agency for treating a self-service portal as sufficient fulfillment of access requests and for charging €9.90 for repeat requests within 12 months, both inconsistent with GDPR Article 12. Spain's AEPD imposed a €30,000 fine on an energy supplier for processing unverified customer data that triggered a supplier switch for the wrong party, grounding the violation in the Article 5(1)(d) accuracy principle rather than security obligations. Denmark's Datatilsynet, by contrast, permitted a municipality to rely on unlawfully obtained recordings in child welfare proceedings, holding that Article 5(1) fairness requires case-by-case balancing where a child's welfare is at stake, a notable departure from per se exclusionary reasoning. Taken together, these decisions suggest European DPAs are increasingly willing to frame enforcement around foundational data quality and lawful-basis obligations, not just security failures.
The EU e-Privacy derogation lapse, covered yesterday, has generated new analysis from civil society confirming that platforms including Google, Meta, Microsoft, and Snap face materially elevated legal risk if they continue voluntary CSAM scanning without an alternative legal basis. No enforcement action has been announced, but the gap between platform conduct and legal cover is now explicit. The practical question is whether platforms will pause scanning pending new legislation or accept enforcement risk — a decision with significant reputational and regulatory stakes on both sides.
Child online safety regulation is converging across jurisdictions. Australia's eSafety Commissioner has published its first enforcement report under the Social Media Minimum Age Act (enacted December 2025), documenting 4.7 million age-restricted account removals while flagging active investigations into Facebook, Instagram, Snapchat, TikTok, and YouTube for potential non-compliance. Separately, Australia's ambassador to the EU has stated publicly that unilateral enforcement is insufficient and is seeking regulatory convergence, a signal that cross-border coordination on age-restriction frameworks may accelerate. Greece has introduced its own social media age-restriction bill, adding to the France-Australia pattern. For global platforms, the policy direction across these jurisdictions is consistent enough to warrant preparation even where no single law is yet enforceable against a given entity.
On AI governance infrastructure, the European Commission has opened a targeted consultation, closing May 15, on measuring the energy consumption and emissions of general-purpose AI models, directly feeding into GPAI compliance methodology under the AI Act. Separately, China's CAC has published draft rules on AI virtual humans requiring explicit consent for biometric likeness use, mandatory labeling, and prohibitions on authentication bypass, with public comment closing May 6. CNIL has published its 2026 work program, committing to guidance on both GDPR and AI Act compliance. Germany's Health Ministry has drafted a framework integrating the EUDI Wallet into the national patient record system by 2028, with AI governance provisions bringing clinical decision tools within the scope of permissible medical research processing. These developments collectively suggest the EU's AI Act implementation machinery is gaining operational specificity; organizations with GPAI obligations should treat the energy consultation as a participation opportunity, not a background item.
Watch level: ACT NOW (AI procurement and security teams — Mercor breach is active; LiteLLM supply chain compromise is ongoing; assess vendor exposure and downstream biometric data handling now)
Watch level: ACT NOW (platforms conducting voluntary CSAM scanning in the EU — e-Privacy derogation has lapsed; no current legal basis exists; legal teams should assess scanning continuity decisions immediately)
Watch level: PREPARE (credit bureaus, data brokers, and access-intensive controllers — Finnish DPA ruling on repeat access request fees and portal-only fulfillment is issued; review Article 12 compliance posture)
Watch level: PREPARE (organizations subject to GPAI obligations under the EU AI Act — European Commission energy consumption consultation closes May 15; participation shapes compliance methodology)
Watch level: MONITOR (global social media platforms — Australia SMMA enforcement is live with active investigations; Greece bill introduced; Australia-EU coordination signals; trajectory toward stricter age-restriction enforcement is consistent across jurisdictions)
Policy Signal · policysignalhq.com · Major privacy + AI governance moves, distilled.