EU’s Proposed AI Act is at the Forefront of International AI Regulation

By voting to adopt the latest version of the AI Act, EU lawmakers have approved strict rules governing how companies may use the most recent advances in generative artificial intelligence (AI) models. Considered stricter than the AI regulations of the United States and the United Kingdom, the latest version of the EU’s AI Act also broadens the scope of the restrictions it imposes on the “high-risk AI systems” described in Title III of the proposed Act.

First drafted two years ago, the AI Act is slated to become the new benchmark for global tech leaders, who would do well to abide by the stricter requirements of the Act’s latest version, including its application to generative AI tools such as ChatGPT. The Act includes a broad definition of AI, allowing it to cover future techniques yet to be developed.

This latest iteration of the Act establishes categories of AI models in anticipation of eventually registering them in a publicly accessible, easy-to-navigate EU database. The Act requires that AI models be assigned levels of risk and provides that “low-” and “minimal-risk” AI tools are exempt from regulation, while “limited-risk” AI tools must meet transparency requirements.

The AI Act allows the use of “high-risk AI systems”; however, the Act would require users to implement a robust risk-management system and perform a “fundamental rights impact assessment” to evaluate whether the use of the high-risk AI system would violate “fundamental rights” recognized in the EU. Further, the AI Act would impose additional obligations, including testing, documentation, and human-oversight requirements.[1]

Title II of the proposed Act prohibits certain kinds of AI that present “unacceptable risk,” including systems that use subliminal techniques to manipulate individuals or exploit vulnerable groups such as children or disabled individuals. Examples of banned systems include predictive policing tools like those seen in the United States and social scoring systems such as those used in China. The penalties for noncompliance include fines of up to $32 million or 6% of global annual turnover.

The Act also reaches “developers, service providers, and businesses” located outside the EU when they use AI output produced by systems within the EU or by systems that collect and process the personal data of individuals located within the EU.

While the US and other countries have taken a more conservative approach to AI regulation despite increased public concern, the EU AI Act may serve as a benchmark and have a global impact on policymaking. The European Parliament has adopted its negotiating position on the proposed legislation, but the draft still must clear final review by additional lawmakers and the EU member states. Enactment is projected by the end of 2023.

As generative AI becomes progressively more sophisticated and suitable for use within the workplace, thoughtful planning is necessary to ensure that this new technology is implemented safely and effectively.


[1] Proposal for an Artificial Intelligence Act, European Commission, at 4, available at https://artificialintelligenceact.eu/the-act/.

  • Karen Painter Randall
    Partner

    Karen Painter Randall, formerly Certified by the Supreme Court of New Jersey as a Civil Trial Attorney, is a partner at Connell Foley LLP, where she chairs the Cybersecurity, Data Privacy, and Incident Response Group. With extensive ...
