The past year marked a significant milestone in AI development, with Generative AI hailed as the most substantial breakthrough since Steve Jobs introduced the iPhone at Macworld in 2007. Tools like DALL-E, Midjourney, ChatGPT, and Google Bard have transformed the way we think about technology, eliciting both admiration and fear due to their immense potential.
These developments underscored the need for legislative bodies to proactively establish a regulatory framework for AI. Once again, the EU has demonstrated global leadership by introducing the Regulation laying down harmonized rules on artificial intelligence, known as the AI Act (AIA). The final draft of this legislation, now public, is anticipated to be approved on February 2nd, 2024.
Global Impact
The EU AI Act will affect not only AI providers and developers within the EU but also those situated in other jurisdictions, including the Western Balkans, if their AI systems impact individuals residing in the EU. This extraterritorial reach has prompted comparisons between the AI Act and the General Data Protection Regulation (GDPR).
AI Defined
The Act defines AI systems as machine-based systems designed to operate with varying levels of autonomy. Such systems may adapt after deployment and, for explicit or implicit objectives, infer from the input they receive how to generate outputs (such as content, predictions, recommendations, or decisions) that can influence physical or virtual environments.
Risk-based Categorization
AI systems are categorized as prohibited, high-risk, and low-risk.
AI practices forbidden under the AI Act include subliminal manipulation, exploitation of vulnerabilities, biometric categorization to infer sensitive attributes, social scoring based on individuals' social behavior, 'real-time' remote biometric identification for law enforcement purposes, assessing criminal risk through profiling, building facial recognition databases through untargeted scraping, and inferring emotions in workplaces or educational institutions, except for medical or safety reasons.
High-risk AI systems are classified based on their potential for harm and are subject to documentation, registration, continuous risk assessment, and mandatory human oversight. Notably, even systems initially listed as high-risk will not be classified as such if they do not pose a significant risk of harm, provided certain criteria are met.
Testing of high-risk AI systems in real-world conditions is allowed, subject to stringent ethical and consent guidelines. Providers of high-risk models must have agreements with third-party suppliers as per AI Office standards. Employers using high-risk AI in the workplace must inform their workers, complying with EU and national laws.
Irrespective of risk level, however, all AI systems must meet certain minimum obligations. These include assessment documentation and basic transparency requirements: users must be informed whenever they are interacting with an AI system.
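For organizations building internal compliance checklists, the tiered structure above lends itself to a simple lookup. The following Python sketch is purely illustrative: the tier names and obligation labels are our own shorthand for the requirements summarized above, not defined terms from the Act.

```python
# Illustrative encoding of the Act's risk tiers as a compliance checklist.
# Tier names and obligation labels are our own shorthand, not defined terms
# from the AI Act itself.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LOW_RISK = "low-risk"

BASELINE_OBLIGATIONS = {
    RiskTier.PROHIBITED: ["may not be placed on the EU market"],
    RiskTier.HIGH_RISK: [
        "documentation",
        "registration",
        "continuous risk assessment",
        "mandatory human oversight",
        "inform users they are interacting with an AI system",
    ],
    RiskTier.LOW_RISK: ["inform users they are interacting with an AI system"],
}

def checklist(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation checklist for a given risk tier."""
    return BASELINE_OBLIGATIONS[tier]

print(checklist(RiskTier.HIGH_RISK))
```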
General Purpose AI Models
The AI Act also introduces a special category of General Purpose AI (GPAI) models, defined as models capable of competently performing a wide range of distinct tasks (such as ChatGPT, Google Bard, LLaMA, etc.). For GPAI models posing systemic risk, providers must meet specific obligations, including standardized evaluations, risk assessments, incident tracking, and cybersecurity measures, with compliance demonstrated through Codes of Practice or European harmonized standards. Providers established outside the EU must appoint a representative responsible for compliance, with obligations such as verifying technical documentation and cooperating with authorities.
Additionally, providers of GPAI models must maintain and update technical documentation and share it with AI system providers looking to integrate the model. The documentation should include development details, activities, and estimated energy consumption. Importantly, providers must also adhere to EU copyright law and publicly release a detailed summary of the training data.
AI Office and AI Board
The newly established AI Office will play a crucial role in crafting Codes of Practice for implementing the AI Act, addressing issues such as information maintenance, EU-level systemic risks, and effective risk management, in collaboration with AI model providers and other stakeholders. The AI Board, on the other hand, will conduct research, provide opinions, and play an advisory role in AI regulation.
Deep Fakes
Deployers of AI systems that generate or manipulate image, audio, or video content using deep fake techniques must openly disclose that the content has been artificially generated or manipulated. This disclosure, or "watermarking", is mandatory, subject to limited exceptions (e.g., criminal prosecution).
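The Act does not prescribe a particular marking technique, and machine-readable disclosure can take many forms (file metadata, visible labels, cryptographic watermarks, or provenance standards such as C2PA). As a minimal sketch only, the following Python example uses the Pillow library to attach a disclosure label to a PNG file's metadata; the "ai-disclosure" key is a hypothetical label of our own choosing, not a scheme mandated by the Act.

```python
# Minimal illustration of a machine-readable AI-disclosure label stored as a
# PNG text chunk. Requires Pillow (pip install Pillow). The "ai-disclosure"
# key is our own hypothetical label, not a marking scheme from the AI Act.
from PIL import Image, PngImagePlugin

DISCLOSURE = "This content has been artificially generated or manipulated."

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image as PNG with an AI-disclosure metadata entry attached."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-disclosure", DISCLOSURE)
    img.save(dst_path, "PNG", pnginfo=meta)

def read_disclosure(path: str) -> str | None:
    """Return the disclosure label from a PNG file, if one is present."""
    return Image.open(path).text.get("ai-disclosure")
```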
Data Protection
In terms of data governance, providers of AI systems must now take into account data collection processes and data origin and, particularly where personal data is concerned, adhere to the original purpose of data collection, in line with GDPR principles.
Penalties for Non-Compliance
Non-compliance with the prohibitions may result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. Failure to comply with high-risk system requirements could lead to fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher. Supplying incorrect information to authorities may result in fines of up to EUR 7.5 million or 1% of global annual turnover, with lower caps for SMEs and start-ups as specified in the AI Act.
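The "whichever is higher" mechanics are worth illustrating with concrete numbers. The Python sketch below applies the caps listed above to a hypothetical company's turnover; it is an arithmetic illustration only, not legal advice.

```python
# Worked illustration of the "whichever is higher" fine ceilings described
# above. The turnover figure is hypothetical; this is arithmetic only, not
# legal advice, and the lower SME/start-up caps are not modeled here.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # EUR 35M or 7% of turnover
    "high_risk_requirements": (15_000_000, 0.03),  # EUR 15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.01),    # EUR 7.5M or 1% of turnover
}

def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a company: the higher of the two caps."""
    flat_cap, pct = FINE_TIERS[tier]
    return max(flat_cap, pct * global_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70M, which exceeds the
# EUR 35M flat cap, so the ceiling is EUR 70M.
print(max_fine_eur("prohibited_practices", 1_000_000_000))  # 70000000.0
```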
Deadlines for Implementation
Upon entry into force, the AI Act will apply to prohibited systems after six months, to General Purpose AI after 12 months, to high-risk AI systems (e.g., AI used in education, law enforcement, etc.) after 24 months, and after 36 months where the system is a safety component of regulated products. Codes of Practice must be ready within nine months of the AI Act entering into force.
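The phase-in periods above can be turned into concrete calendar dates once the entry-into-force date is known. The following Python sketch assumes a hypothetical entry-into-force date purely for illustration; the actual date depends on publication in the Official Journal of the EU.

```python
# Illustrative calculator for the phase-in periods listed above. Requires
# python-dateutil (pip install python-dateutil). The entry-into-force date
# below is a hypothetical placeholder.
from datetime import date
from dateutil.relativedelta import relativedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # hypothetical placeholder date

PHASE_IN_MONTHS = {
    "prohibited practices": 6,
    "Codes of Practice ready": 9,
    "General Purpose AI": 12,
    "high-risk AI systems": 24,
    "safety components of regulated products": 36,
}

for milestone, months in PHASE_IN_MONTHS.items():
    applicable_from = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{milestone}: applicable from {applicable_from}")
```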
***
For additional information, please contact Mr. Matija Markovic, attorney-at-law, or Mr. Slobodan Doklestic, Managing Partner at Doklestic Repic & Gajin.