Technology has always advanced faster than the laws we rely on to protect ourselves from its misuse. Fears of a superintelligent artificial intelligence arising that manipulates machines and humans to do its will sound like science fiction, but the incredible speed of AI advances makes them anything but.
The more immediate threats that AI poses to individuals and businesses are more down-to-earth and far less apocalyptic. A recent survey of chief risk officers (CROs) by the World Economic Forum identified malicious use of AI as the greatest risk posed by the technology. The CROs point out that it has become easy for people to apply AI to spread misinformation, launch cyberattacks, and access sensitive personal data.
The European Union has responded to the dangers of AI by enacting regulations requiring AI developers and deployers to implement safeguards against misuse of the technology. The EU's AI Act matters to individuals and businesses around the world because it is the first step toward a comprehensive set of protections against AI-related loss of rights and assets.
What Is the Purpose of the EU AI Act?
The AI Act is intended to build trust in AI among residents of the EU. The lack of transparency in AI systems makes it difficult to confirm that they don't treat anyone unfairly or put them at an unfair disadvantage. Two examples are the use of AI in making hiring decisions and in qualifying people for public benefits.
Regulations in the AI Act strengthen protections already in place and introduce rules that address issues related specifically to artificial intelligence:
- The act outlaws AI practices that pose unacceptable risks, which are at the top of a four-level Regulatory Framework. The three lower risk levels are minimal, limited, and high.
- It identifies high-risk applications and the requirements for their safe use, including the specific obligations of the parties deploying and providing AI applications.
- It mandates a conformity assessment prior to an AI system’s release to the public, and it stipulates an enforcement mechanism for testing AI products for conformity with the law after their release.
The EU’s Regulatory Framework is intended to help consumers and businesses understand the nature of the different risks posed by AI technology, as well as to establish requirements for AI applications it identifies as unacceptable or high risk.
- Unacceptable risk is defined as one that poses “a clear threat to the safety, livelihoods, and rights of people.” These systems are banned from use in the EU.
- High-risk AI applications are those used in critical infrastructure and systems that could jeopardize the health and safety of citizens, as well as those used in education and vocational training, employee management, and provision of essential public services, such as credit scoring to qualify for a loan. Other high-risk areas are law enforcement as it relates to fundamental rights, migration and border control, and the administration of justice and political processes.
- Limited risk relates to the transparency of AI applications. It requires that people be informed whenever they interact with AI systems such as chatbots, and that they have the opportunity to opt out of using them. Providers are responsible for making their use of AI easy to identify, including AI-generated material intended to inform or influence matters of public interest, such as deepfakes.
- Minimal or no risk applies to use of AI in video games, spam filters, and other situations that pose no inherent threat to the public.
Under the EU AI Act, high-risk AI applications must meet specific criteria before being made available in EU countries. In addition to having adequate risk assessment and mitigation features, providers and deployers must confirm that the systems are based on high-quality datasets that are as free of bias as possible. Activity must be logged and traceable, and the system documentation must be thorough enough to confirm compliance with the regulations.
Other requirements are that deployers receive clear and adequate information about the products from providers, that AI systems receive the appropriate level of human oversight, and that they are robust, secure, and accurate.
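To make the logging and traceability requirement more concrete, the hypothetical Python sketch below shows one way a deployer might record each automated decision as an auditable entry. The record fields, file format, and helper function are illustrative assumptions, not anything the act prescribes.

```python
# Illustrative only: a minimal sketch of the kind of per-decision audit trail the
# AI Act's traceability requirement points toward. Field names and structure are
# assumptions, not prescribed by the regulation.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One traceable record per automated decision made by a high-risk system."""
    timestamp: str        # when the decision was made (UTC, ISO 8601)
    model_version: str    # which version of the system produced the output
    input_digest: str     # hash of the input, so the case can be matched later without storing raw data
    output: str           # the prediction, score, or decision produced
    human_reviewer: str   # who exercised human oversight, if anyone


def log_decision(model_version: str, raw_input: str, output: str,
                 human_reviewer: str = "", logfile: str = "decision_log.jsonl") -> None:
    """Append a single decision record as one JSON line, building an auditable trail."""
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_digest=hashlib.sha256(raw_input.encode("utf-8")).hexdigest(),
        output=output,
        human_reviewer=human_reviewer,
    )
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example: logging a credit-scoring decision that was reviewed by a person.
log_decision("credit-model-2.3", "applicant-4821 income=52000", "approved", "j.smith")
```

Hashing the input rather than storing it verbatim is one possible design choice: it keeps the trail reviewable without retaining raw personal data in the log itself.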
Who Will Be Affected by the EU AI Act?
The EU AI Act applies to providers, deployers, importers, distributors, and others involved in the production and dissemination of AI products. The three groups affected the most by the law are providers, deployers, and importers:
- Providers are the entities that develop AI systems and general-purpose AI (GPAI) models, or hire others to develop the systems for them. The products feature the provider’s name or trademark, and are either placed on the market directly or made available for use as a component in a service. Providers can be based inside or outside the EU: those located outside the EU must designate authorized representatives inside the EU to confirm their compliance.
- Deployers are the people and organizations, inside or outside the EU, that use AI systems under their authority in the course of their professional activities in the EU market, such as a business that operates a chatbot to respond to customers' support requests.
- Importers are parties located in the EU that bring AI systems originating outside the EU into the market.
The AI Act defines an AI system broadly as one that processes input with some level of autonomy and generates outputs such as predictions, recommendations, decisions, or content that can affect physical or virtual environments. GPAI includes any AI model that can be applied to a range of topics and is capable of performing a variety of tasks. GPAI models can also be integrated into a variety of downstream AI systems and applications.
When Will the Act Have an Impact?
Even though the EU AI Act took effect on August 1, 2024, most of its requirements, including those for high-risk systems, apply at the end of a two-year transitional period; some rules take effect after six months and others after one year. In addition, the rules for AI systems embedded in regulated products won't apply for three years.
To help organizations prepare for implementation of the AI Act's regulations, the EU has created the AI Pact to promote voluntary commitments to compliance. The first of the pact's two pillars serves as a forum where organizations exchange information and best practices for implementation; the second is intended to help providers and deployers lay the groundwork for their compliance efforts.
How Will the EU AI Act Benefit Consumers and Businesses?
The EU AI Act can be seen as a product safety law aimed broadly at technology providers. However, the act broadens its protections beyond traditional product health and safety to cover risks to the fundamental rights of citizens, including fair elections and transparency in government and business. For example, the AI Act requires that people receive a meaningful explanation of the reasoning applied by the AI system in making its decisions.
The key to realizing the protections promised by the act is its enforcement provisions, which will be carried out by national market surveillance authorities under Regulation (EU) 2019/1020. Market surveillance authorities are empowered to intervene at the site of the compliance violation. This differs from the enforcement provisions of the EU's General Data Protection Regulation (GDPR), which apply enforcement where the provider is located. The change reduces the likelihood of one EU jurisdiction being overwhelmed by infringement cases.
Basing AI protections on product safety laws introduces some inherent flaws in the regulatory structure, however. For example, unlike most products, AI systems are dynamic and designed to change over time. In addition, their value and purpose can't always be predicted or measured accurately, and the unpredictability of an AI system's use cases makes it nearly impossible to identify and quantify all the potential risks it poses. Some analysts argue that it may be more effective to regulate the internal processes of the private and public entities involved in AI production than to test the resulting products once they're ready for release.
What AI Regulations Are in Place or Pending in the U.S.?
In October 2023, the White House issued an executive order that requires government agencies to develop standards for AI safety and security. The order calls for developers of the most powerful AI systems to share safety test results with federal agencies, and it establishes a partnership between the Department of Justice and federal civil rights offices to identify and prosecute AI-related violations. The order was followed in May 2024 by new White House guidance focused on protecting workers from risks posed by their employers' use of AI.
California is one of several states to propose laws relating to AI safety. Senate Bill 1047 would require AI developers to assess the likelihood of their products causing harm and would allow the California attorney general to seek court injunctions against developers that don't comply with the bill's safety measures.
The challenge for U.S. lawmakers is to protect against the misuse of AI without stifling development of the technology. Three bills introduced recently in the U.S. Senate illustrate the struggle to find the appropriate balance.
- The Promoting United States Leadership in Standards Act of 2024 calls for the National Institute of Standards and Technology (NIST) to survey existing AI standards in the U.S. and other countries to enhance the existing voluntary approach to AI best practices.
- The Future of Artificial Intelligence Innovation Act of 2024 would direct NIST's new U.S. AI Safety Institute to develop voluntary standards in collaboration with other federal agencies. The standards would include metrics, benchmarks, and evaluation frameworks for measuring AI safety in various use cases.
- The Artificial Intelligence Research, Innovation, and Accountability Act of 2023 would create a tiered evaluation process for AI systems categorized as "critical impact" or as less sensitive "high impact."
The dynamic nature of AI regulation is evident in the U.S. government's response to the use of AI by market competitors to fix prices. When the U.S. Justice Department sued RealPage, alleging that the company's algorithmic pricing software reduced competition among landlords, it applied federal antitrust laws. However, potential loopholes in those laws led U.S. Senator Amy Klobuchar (D-MN) to introduce legislation that would outlaw the use of algorithms to fix prices.
The Preventing Algorithmic Collusion Act is intended to block direct competitors from sharing “competitively sensitive information” to which a pricing algorithm is applied to determine the highest price consumers would be willing to pay. This type of price fixing may fall through a loophole in current consumer law, according to the bill’s sponsors.
AI shows tremendous promise for improving the lives and livelihoods of people around the world, but only if the technology can be used safely and fairly. Consumers and businesses alike want to avoid a patchwork of AI regulations that turns enforcement into a whack-a-mole process. The EU AI Act is a step toward establishing a single comprehensive set of protections that apply broadly and promote rather than hinder the technology's possibilities.