In August 2024, the EU AI Act came into force, introducing new compliance requirements for businesses using AI within the EU. The regulation aims to ensure responsible AI development and protect public rights.
The goal is to create a secure AI ecosystem in the EU.
The key question is: what does this mean for businesses, and what actions are required to comply with the new rules?
Which Businesses Are Affected by the AI Act?
The AI Act applies to companies that develop, sell, or use AI systems within the EU, regardless of size. It also affects businesses outside the EU that provide AI services to the EU market. The primary impacted stakeholders include:
- Developers and providers of AI systems that sell or create AI solutions, especially high-risk AI.
- Distributors and resellers of AI solutions within the EU, even if they do not develop the technology themselves.
- End users of high-risk AI systems in sectors such as healthcare, finance, and HR.
- Companies handling data for AI training, which must meet data quality and GDPR compliance requirements.
AI System Classification in the AI Act
AI systems are categorized into four risk levels:
- Unacceptable risk: Practices banned outright under the AI Act, such as social scoring by public authorities.
- High risk: Systems used in critical sectors such as healthcare, transportation, and justice. These must meet strict safety requirements and undergo continuous monitoring.
- Limited risk: Systems that may affect user rights, such as chatbots. Basic transparency is required, including informing users that they are interacting with AI.
- Minimal or no risk: AI systems with little impact on safety or rights, such as spam filters. These are not subject to specific AI Act requirements.
CE Marking and the AI Act
For high-risk AI systems, CE marking is mandatory, proving compliance with EU safety standards and allowing them to be sold within the EU. Examples of high-risk AI products requiring CE marking include:
- AI-powered medical devices
- Safety systems in vehicles
- AI solutions for education and recruitment
To obtain CE marking, these systems must undergo a conformity assessment, including rigorous review and risk evaluation.
Summary of the AI Act
The AI Act entered into force on August 1, 2024, with its obligations applying in stages over the following years. It sets requirements for businesses that develop, sell, or use AI systems within the EU, including foreign entities targeting the EU market. AI systems are classified by risk level, with high-risk AI requiring CE marking and continuous oversight.
The AI Act does not override existing EU legislation such as GDPR, meaning data protection requirements and rights remain unchanged for AI systems handling personal data.
Need Expert Guidance?
If you want to learn more about these regulations or need advice on how to ensure compliance, don’t hesitate to contact Certify & Comply for expert guidance.