Earlier this year I had the pleasure of co-authoring the paper Responsible AI: Balancing Regulation, Ethics, and the Future with Irene Liu and Shella Neba on behalf of the organization Women Defining AI and in partnership with Berkeley Law Executive Education. We released the paper in three parts.

Notwithstanding how quickly the AI landscape can morph, the paper has held up. Its focus on real-world developments and pragmatic thinking, rather than the unknown “what ifs” of future AI policy, should continue to give its perspectives staying power.

Big Picture

Hyped expectations around AI regulation still don’t match reality. Other than the EU AI Act (which, not insignificantly, covers the European marketplace), few comprehensive AI regulations are actually in effect.

Consider the United States:

  • The U.S. abounds with talk of voluntary industry commitments, bipartisan working groups, largely symbolic executive orders, and participation in AI safety summits – important signals of how legislation could take shape if the political climate weren’t so divided and unpredictable. But the gridlock in Congress isn’t going to evaporate after Election Day. Moreover, the hurdles aren’t just political: as a matter of substantive policy, AI is hardly a straightforward subject, and little consensus exists about what to regulate, how, and by whom. So it may be a while before any uniform national AI legislation passes, if ever.
  • Stepping into the void are individual states, which are enacting a patchwork of AI-related laws, and agencies like the FTC, which are relying on their existing enforcement authority to issue policy positions and bring regulatory actions.

Looking Ahead

AI developers and deployers (i.e., those integrating third-party AI systems into their own applications) will need to be prepared to synthesize different, sometimes competing, regulatory frameworks into a coherent go-to-market (GTM) risk and compliance strategy.

  • The EU AI Act officially took effect on August 1, 2024, triggering the graduated deadlines discussed in Part 1 of our paper. Multinational companies engaged in European markets or considering a covered product launch in the EU should prioritize a risk analysis under the AI Act.
    • But that’s not all. Remember that pre-existing laws such as the GDPR and the Digital Markets Act still apply, adding to the bloc’s regulatory complexity.
    • Citing such complexity, Meta and Apple – routine targets of regulatory action given their size, innovation, and market share – have announced that they will not be releasing certain new AI products in the EU. Don’t get too caught up in this: they’re playing a different game of chess. Not every company will feel a similar need to avoid the EU market (or have that luxury), nor will most startups have that kind of enforcement target on their backs.
  • Comprehensive federal AI regulation in the U.S. isn’t imminent, even though several bills have been introduced. Continue instead to watch the states.
    • IAPP’s State AI Governance Tracker is a great tool for businesses because it “spotlights legislation directly impacting private sector organizations, excluding government-only bills.”
    • So far, only Utah and Colorado have passed such laws (each rooted in consumer protection). But California’s pending and controversial Safe and Secure Innovation for Frontier Artificial Intelligence Models Act is on everyone’s watch list: most of today’s large-scale AI models are developed by Silicon Valley-based organizations, so any statewide legislation could shape developers’ business practices worldwide. Governor Newsom has until September 30, 2024 to sign.