Virginia has moved most decisively in the past 24 hours toward institutionalizing third-party AI oversight, with two companion bills — SB384 (99-0) and HB797 (84-14) — both advancing frameworks for independent AI verification organizations. The near-unanimous and strongly bipartisan margins signal that structured third-party auditing is consolidating as the preferred regulatory mechanism at the state level, providing an alternative to direct government rulemaking. California's SB813, which similarly establishes independent verification bodies under a proposed AI Standards and Safety Commission, remains held at the Assembly desk, suggesting California is watching the Virginia model closely. Ohio's HB628, proposing a licensing regime for AI risk mitigation organizations, extends the pattern further. Taken together, these four bills across three states represent an emerging architectural consensus: rather than prescribing AI behavior directly, legislatures are building credentialing and audit infrastructure around AI deployment — a development that will carry significant compliance obligations for both AI developers and the verification entities themselves.
AI governance in health care settings is attracting concentrated legislative attention across multiple states simultaneously. Colorado's HB1139 has cleared its House Second Reading with amendments, directly regulating AI use in health care, while Vermont's H0814 goes further by establishing neurological rights protections — placing cognitive liberty explicitly within the AI regulatory perimeter. These two bills, advancing in parallel, signal that sector-specific AI regulation in health is accelerating faster than comprehensive horizontal frameworks. Florida's HB0527, requiring human review of insurance claim denials, extends the same logic into health insurance administration. Compliance teams operating AI-enabled clinical decision support, care management, or claims processing systems face a patchwork of sector-specific obligations that is thickening rapidly, without a federal floor to rationalize it.
Transparency obligations targeting AI training data and consumer-facing content are advancing on multiple fronts in New York and Washington. New York's Senate Bill S06955 and Assembly Bill A06578 — companion versions of the Artificial Intelligence Training Data Transparency Act — have both reached third reading, imposing affirmative public disclosure requirements on generative AI developers regarding their training data. If enacted, New York would become the first U.S. state to mandate supply-chain transparency at the data layer, with national compliance implications for firms that cannot easily segment their model deployment by jurisdiction. Separately, Washington's HB1170, requiring disclosure to users when content has been AI-generated or modified, has been delivered to the Governor. Washington's HB2157, a broader high-risk AI systems bill, has been placed in the Rules 'X' file, reflecting a familiar pattern: disclosure and transparency requirements advance where comprehensive risk-tiered regulation stalls.
Beyond the dominant U.S. state-level activity — which is unusually concentrated today across roughly a dozen jurisdictions — practitioners should note two additional data points. Utah's HB0450 privacy amendments have been enrolled and await gubernatorial action, meaning modifications to the Utah Consumer Privacy Act are imminent; the enrolled text warrants close review for changes to controller obligations or consumer rights mechanisms. Oregon's passage of SB1546 governing AI companion products breaks new statutory ground for a product category — relational AI systems — that has operated without dedicated oversight, and its compliance implications for disclosure, safety, and design will depend heavily on the enrolled text. CDT Europe's fifth brief on EU AI Act GPAI obligations, while analytical rather than regulatory, signals that civil society scrutiny of general-purpose model requirements is intensifying ahead of enforcement, a dynamic that typically presages more assertive regulatory interpretation.
Several near-term developments warrant close monitoring. Washington's SB6284 AI consumer protection bill has cleared a Ways and Means hearing, meaning a committee vote is the next inflection point. Utah's HB0320, amending the state's Office of Artificial Intelligence Policy, is at the Governor's desk alongside HB0450, making Utah a jurisdiction where the regulatory infrastructure itself may be reshaped within days. New York's AI training data disclosure bills, now at third reading in both chambers, are positioned for floor votes; their outcome will be a significant indicator of whether state-level AI supply-chain transparency becomes enforceable law in 2026. Compliance teams should also track Wyoming's enacted HB0102 on AI-generated deepfakes targeting minors and Oregon's SB1546 for implementing guidance, as both statutes are now law and enforcement clocks have begun.
Policy Signal · policysignalhq.com · Major privacy + AI governance moves, distilled.