Legislative and Regulatory Landscape for AI/ML in the US and Europe

Speakers:
Marty Zoltick and Magali Feys

Webinar Date: October 4th, 2023

Legislatures, regulators, and policymakers in the US, Europe, and throughout the world are keenly aware of the proliferation of artificial intelligence (AI) and the disruptive power of AI-enabled technologies that employ ever-evolving and increasingly capable machine learning (ML) algorithms, neural network architectures, natural language processing tools, large language models, and more. These AI-enabled technologies are ushering in a new era for enterprises, businesses, and the public at large, bringing AI into everyday life, from the content we read to the artwork we admire, the music we enjoy, the routes we travel, where and what we eat, what we buy, and when and how we wake up and wind down. The need for a regulatory framework for AI is clear. The challenge, of course, is to design a framework that addresses the risks of harm from AI while striking the proper balance between supporting innovation and the need for safety and security, data privacy, trustworthiness, accountability, and transparency. During this session, Marty Zoltick and Magali Feys will provide an overview of the legislative and regulatory landscape for AI/ML in the US and Europe.

About the Webinar:

US: With no comprehensive federal legislation currently in place in the US to govern the use of AI, Congress, the White House, and numerous government agencies are hard at work crafting legislation, regulations, policies, and pledges for regulating AI. Marty Zoltick will explore with you the details of the congressional effort, spearheaded by Senate Majority Leader Chuck Schumer, to craft legislation regulating AI; the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, published in October 2022 by the White House Office of Science and Technology Policy (OSTP); the National Institute of Standards and Technology’s (NIST) recently released Artificial Intelligence Risk Management Framework 1.0; and other initiatives underway at the federal level. No US state has yet enacted comprehensive AI legislation, but many states have enacted privacy laws or are considering legislation regulating the use of AI in, for example, automated decision-making, profiling, employment decisions, providing mental health services, determining insurance coverage, and determining credit or financial assistance. Marty will address how currently enacted state privacy laws may be used to regulate AI, which states are currently considering proposed AI-related legislation, and the interplay between the anticipated federal and state laws.

Europe: Europe is a different story. In April 2021, the European Commission proposed the first regulatory framework for AI. This framework builds upon the Ethics Guidelines for Trustworthy AI issued by the High-Level Expert Group on AI and upon the European Commission’s White Paper on AI: A European Approach to Excellence and Trust. The main goal of the AI Act is to establish rules that promote the uptake of human-centric and trustworthy AI and protect health, safety, fundamental rights, and democracy from its potentially harmful effects. The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being. On June 14, 2023, the European Parliament adopted its negotiating position on the AI Act, and negotiations with the Council are currently taking place to establish the final form of the law. The aim is to reach an agreement by the end of 2023. During the presentation, Magali Feys will first guide you through the ethical principles on which the AI Act is built and their implications for the development and use of AI systems within the European Union. Magali will then elaborate on the risk-based approach adopted by the EU legislator in the AI Act, the different “types” of AI systems distinguished under the Act, and how this distinction is reflected in the obligations imposed throughout the AI Act. The goal of the presentation is to give a clear overview of the AI Act and the European Union’s stance toward creating a legal framework for trustworthy AI.