The rapid evolution of Artificial Intelligence (AI), and of Generative AI in particular, has opened up new opportunities for development and inclusive growth. But in the wrong hands, AI can enable fraud, discrimination, and disinformation, stifle healthy competition, disenfranchise workers, and even threaten national security. The United States of America, the European Union, and the United Kingdom have taken the first steps toward regulating the development of AI, with a strong focus on data privacy, transparency, accountability, security, and ethics.
Here is a quick overview of the key regulations being implemented in these three regions and the main points to note.
The European Parliament adopted the Artificial Intelligence Act in March 2024; it will become fully applicable two years after its entry into force. One objective of the Act is to establish a uniform, technology-neutral definition of AI that can be applied to future systems. The Act also aims to ensure that AI systems used within the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly, and that they are overseen by people rather than by automation. The law takes a risk-based approach, imposing different requirements depending on the level of risk.
Risk levels: The Act sets out obligations for providers and users based on the level of risk an AI system poses. The two most heavily regulated categories are:
Unacceptable Risk AI Systems - These systems are considered harmful to people and will be banned:
There are some exceptions and rules established for law enforcement agencies.
High Risk AI Systems - AI systems that can negatively impact the fundamental rights and/or safety of people:
AI systems used in products covered by the EU’s product safety legislation, such as toys, aviation devices and systems, cars, medical devices and elevators.
AI systems in specific areas that have to be registered with an EU database:
High-risk AI systems must be assessed before they can reach the market and will continue to be assessed throughout their lifecycle. EU residents can file complaints with the relevant national authorities.
Transparency requirements – While the Act does not classify Generative AI as high risk, it mandates transparency requirements and compliance with EU copyright laws:
Supporting Innovation – The Act aims to help startups and small to medium businesses leverage AI by giving them opportunities to develop and train AI algorithms before public release. National authorities must provide companies with testing environments that simulate real-world conditions.
In February 2024, the UK Government announced its response to the 2023 white paper consultation on AI regulation. Its pro-innovation stance on AI follows an outcome-based approach focused on two key characteristics – adaptivity and autonomy – that will guide domain-specific interpretation.
It provides preliminary definitions for three categories of powerful AI systems that are integrated into downstream AI systems:
It sets out five cross-sectoral principles for regulators to use when driving responsible AI design, development, and application:
The principles are to be implemented on the basis of three foundational pillars:
The White House Office of Science and Technology Policy has formulated the Blueprint for an AI Bill of Rights, with five principles to guide the design, use, and deployment of AI systems. These include:
Safe and Effective Systems
Algorithmic Discrimination Protections
Data Privacy
Notice and Explanation
Human Alternatives, Consideration, and Fallback
In addition to these federal guidelines, several states are formulating their own regulations. Seventeen states (California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia, and Washington) have enacted 29 bills on AI regulation over the last five years.
AI technologies are here to stay, and the world must learn to use them safely for the betterment of humanity. Regulations governing AI development and use are critical to protect people from bias, discrimination, and breaches of privacy. Because AI technologies are evolving at an unprecedented pace, regulators across the world are responding with rapid updates and new frameworks. Organizations need automated compliance platforms to keep pace with this rapidly changing regulatory landscape.
MetricStream’s Compliance Management can simplify and fortify enterprise compliance initiatives amidst a rapidly changing regulatory landscape. Gain greater visibility into control effectiveness and remediate issues quickly with streamlined:
Even as compliance management is simplified and streamlined, it is important to have a mechanism in place to track rapidly evolving regulations. MetricStream’s Regulatory Change Management platform provides a centralized framework that helps organizations capture, curate, identify, extract, consolidate, and manage regulatory changes and updates sourced from diverse providers.
Find out more. Request a personalized demo today!