The European Parliament's rejection of CSAM scanning rules is today's most consequential development. By a 311-vote margin, the Parliament blocked a temporary extension that would have allowed platforms to continue voluntarily scanning for child sexual abuse material; without it, that voluntary scanning authority expires in early April, effectively rendering such activity unlawful under EU law. Tech platforms operating in the EU with active CSAM detection programs face an immediate compliance question: whether to suspend scanning or risk operating without legal cover. Legal and trust-and-safety teams at major platforms should be working through that question now.
The EU's digital omnibus package, now advancing to Council negotiations, extends the AI Act's high-risk compliance deadline from August to December 2027, a direct response to the Commission's failure to issue high-risk guidance on schedule. This is a meaningful reprieve for developers of biometric, critical infrastructure, and employment-sector AI, but it is not final law until the Council reaches agreement. Practitioners should note that the deadline shift does not alter the General-Purpose AI provisions already in force, and that the omnibus also includes a ban on non-consensual nudification apps. The practical question is whether teams that began compliance mapping for August should pause, continue, or narrow scope; the prudent course is to continue at reduced intensity while monitoring the Council negotiations.
The European Commission's DSA enforcement push dominated today's EU picture. Preliminary breach findings against Pornhub, Stripchat, XNXX, and XVideos — each designated as a very large online platform — signal that self-declaration of age no longer satisfies Commission standards. The formal Snapchat investigation, conducted jointly with the Dutch Digital Services Coordinator, covers five distinct compliance areas including grooming risk and illegal goods exposure. Together, these actions suggest the Commission is systematically working through its designated platform list on child safety; platforms not yet named should treat this as a leading indicator rather than a final roster. Watch level: ACT NOW (designated VLOP legal and compliance teams, EU-facing platform operators)
Apple's UK rollout of mandatory age verification via iOS 26.4 — with an EU rollout planned — is the most significant industry-led compliance move today. Users who decline verification are defaulted into web content filters, with the verification burden placed on adults rather than minors. Ofcom has welcomed the measure, but privacy advocates and industry specialists have each raised objections: one questioning whether device-level signals satisfy statutory obligations, the other flagging the coercive opt-out structure. The design choice raises a data minimization question under UK GDPR that has not yet been tested by the ICO. Separately, the ICO's Reddit ruling — finding unlawful processing of under-13 data for failure to obtain parental consent — confirms that age-appropriate data practice enforcement is active, not merely pending.
Two US items warrant attention beyond state-level children's privacy. California's data broker registry identifies 33 entities selling resident data to foreign buyers, providing a concrete compliance checklist for organizations with data broker vendor relationships. And the Fargo facial recognition case, in which police acknowledged that an erroneous Clearview AI match was used as probable cause yet the dismissed charges remain open, illustrates the accountability gap in jurisdictions without binding biometric use standards. The White House's AI legislative recommendations signal a shift toward statutory engagement, but Congressional response timelines remain unclear. Today's briefing is weighted toward EU and UK developments; US coverage is largely state-level or agency-level.
Policy Signal · policysignalhq.com · Major privacy + AI governance moves, distilled.