Avanteam Blog · April 2026

The countdown has begun. Documented obligations, potential audits, and fines of up to €35 million: here's everything you need to prepare for.

On August 2, 2026, the European AI Act will come into full effect. For thousands of quality managers, this date is not some distant regulatory abstraction. It is a concrete deadline, with documented obligations, potential audits, and penalties of up to €35 million or 7% of global revenue.

Adopted on June 13, 2024, and in force since August 1, 2024, the AI Act (EU Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence. Its principle is simple yet fundamental: the greater the potential harm an AI system can cause, the greater the obligations placed on those who develop or use it.

Why quality managers specifically? Because the AI Act is built on a quality-driven approach: risk mapping, technical documentation, process control, human oversight, and continuous improvement. It is not a text intended solely for lawyers or IT specialists; it is a compliance management framework that quality departments are naturally equipped to oversee.

⚠️ Watch out for Shadow AI
In most organizations, AI tools have been deployed without formal validation: HR chatbots, scoring tools, scheduling algorithms. The first step under the AI Act is precisely to identify what you are actually using, not just what you have officially approved.

The AI Act is being implemented in phases. Here are the key milestones to keep in mind:

🔴 The critical point for 2026
Most organizations in the industrial, healthcare, agri-food, and service sectors use at least one system that can be classified as "high-risk": recruitment tools, predictive quality scoring systems, preventive maintenance algorithms, or AI for automated quality control. These systems must be documented, validated, and registered by August 2, 2026.

The AI Act does not apply only to companies that develop AI tools.
It applies to any organization that places AI systems on the market or uses them. The obligations vary depending on your role in the AI value chain.

⚡ The deployer may become a supplier (Art. 25)
The regulation provides for an automatic transfer of the supplier's obligations if you put your name on the AI system, make a substantial modification to it, or change its intended purpose. This point is critical for organizations that integrate AI components (such as the ChatGPT API, Copilot, etc.) into their own business processes.

The AI Act classifies all AI systems into four categories. This classification determines the full scope of your obligations:

An algorithm that automatically detects packaging defects and can trigger a batch rejection is classified as high-risk (critical infrastructure + health impact). A complete technical dossier, validation, documented human oversight, and European registration are required.

An inventory forecasting tool is, in principle, in the minimal- or limited-risk category. Caution is advised, however: if the tool influences food safety decisions (such as contamination detection), its classification may be elevated to high-risk.

For each AI system classified as high-risk, here is what your organization needs to implement:

ℹ️ The AI Act is a shared responsibility
If your organization deploys an AI system developed by a third party (SaaS provider, integrator), you have specific obligations as the deployer: verifying the supplier's compliance, respecting the terms of use, performing human oversight, and reporting incidents.

Identify all tools that use AI, including those built into off-the-shelf software. Don't forget about Shadow AI. A spreadsheet isn't enough; you need a structured, traceable, and up-to-date inventory.

For each identified system, determine its risk level based on the criteria set forth in the AI Act: industry sector, use case, type of data processed, and potential impact.
When in doubt, err on the side of caution and choose the higher risk level.

For each high-risk system, assess the gap between your current situation and the requirements of the AI Act. Is there existing technical documentation? Qualified data? A formalized human oversight mechanism?

Compile or update technical documentation, formalize human oversight procedures, document risk management, and set up traceability logs. This step is the most time-consuming, so start planning for it now.

Register each high-risk system in the European database of high-risk AI systems. Depending on the nature of the system, a CE marking process may be required prior to marketing or deployment.

AI Act compliance is not a one-time project; it is an ongoing process. Establish post-deployment monitoring mechanisms, an AI incident reporting system, and a periodic review of classifications.

🚨 Deadline approaching
If you haven't started your AI mapping yet, only a few months remain before the August 2, 2026 deadline. Every week of delay shrinks your window for making the necessary corrections.

Compliance with the AI Act rests on fundamentals that quality teams already know: risk mapping and scoring, action plan tracking, document management, decision traceability, and validation workflows. Avanteam Risk Manager and Avanteam Quality Manager are designed specifically to centralize and streamline these processes.

Using Avanteam Risk Manager, the quality manager was able to create a centralized AI registry in less than a day, generate the technical dossier required by the AI Act from existing quality records, formalize the human oversight procedure in an auditable workflow, and set up automatic alerts for the annual renewal of assessments.

After completing the risk mapping in Avanteam Risk Manager, the team found that 2 of its 8 systems were classified as high-risk.
The technical documentation and human oversight workflows were implemented within three weeks, directly integrating existing HACCP data and quality records.

The AI Act comes at just the right time. At a moment when artificial intelligence is finding its way into every business process, sometimes without management being fully aware of it, this regulation forces organizations to take a hard look at how they actually use AI.

For quality managers, this is a unique opportunity to strengthen their strategic position: by overseeing AI mapping, organizing compliance documentation, and implementing robust AI governance, they can demonstrate in concrete terms that quality is not merely an administrative burden but the foundation of trust in critical systems.

Don't just comply with the AI Act: take the lead. With the right tools and approach, you can achieve compliance well before the August 2026 deadline.

Richard Garcia, Director of Operations

What is the AI Act, and why are quality managers on the front lines?
Implementation timeline: what’s already in effect, what’s coming up
| Due date | Step | What this means |
| --- | --- | --- |
| Feb. 2025 | Bans take effect | AI systems posing an unacceptable risk (social scoring, manipulation) are banned. Fines of up to €35 million or 7% of global revenue. |
| Aug. 2025 | GPAI & governance | Requirements for general-purpose AI models (GPT-like). Establishment of the European AI Office. |
| Aug. 2026 | High-risk AI | Mandatory CE marking, technical documentation, human oversight, and risk management for all high-risk systems. |
| Aug. 2027 | Sector-specific extension | Full integration for certain high-risk systems embedded in regulated products (medical devices, industrial equipment). |

What is your role regarding the AI Act?
The 5 roles defined by the regulation
| Role | Definition (Article 3, EU Regulation 2024/1689) | A real-life example from your organization |
| --- | --- | --- |
| 🏭 Supplier (provider) | Develops or commissions the development of an AI system and markets it under its own name, whether for a fee or free of charge. | A software publisher specializing in AI, an IT services company developing an AI tool, an internal IT department developing a proprietary algorithm. |
| 💼 Deployer | Uses an AI system under its own authority in a professional setting. This is the most common role in user organizations. | A company using an AI-powered recruitment tool, a quality scoring system, an HR chatbot, or a predictive maintenance algorithm purchased from a third-party vendor. |
| 📋 Authorized representative | A person established in the EU who has been authorized in writing by a supplier established outside the EU to act on its behalf. | A European subsidiary representing an American or Asian AI provider on the EU market. |
| 🚢 Importer | A person established in the EU who places on the market an AI system bearing the name or trademark of a party established outside the EU. | A distributor marketing in Europe an AI tool developed in the United States or Asia. |
| 🛒 Distributor | A supply chain entity that makes an AI system available on the EU market without being the supplier or the importer. | An integrator or reseller that sells third-party AI solutions without making substantial modifications. |

The 4 risk levels: Where do your AI systems fall?
| Level | Examples | Obligations | Maximum penalty |
| --- | --- | --- | --- |
| 🚫 Unacceptable | Social scoring AI, behavioral manipulation, real-time biometric identification in public spaces | PROHIBITED: must be stopped immediately | €35 million / 7% of revenue |
| 🔴 High risk | AI in recruitment, healthcare, education, justice, lending, and critical infrastructure | CE marking, technical documentation, human oversight, EU registration | €15 million / 3% of revenue |
| 🟡 Limited risk | Chatbots, generative AI, emotion analysis systems, deepfakes | Transparency requirement: informing users that they are interacting with AI | €7.5 million / 1.5% of revenue |
| 🟢 Minimal risk | Spam filters, recommendations, non-critical decision support | No specific requirements; voluntary codes of conduct encouraged | None |

Specific examples by industry
Automated quality control algorithm
AI for inventory forecasting
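The four-level triage described above can be sketched as a simple classification helper. This is a minimal illustration, not a legal test: the keyword sets and the `classify` function are assumptions for this sketch, and a real assessment must follow Annex III of the AI Act and the "when in doubt, choose the higher level" principle.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets drawn from the risk-level table; not exhaustive.
PROHIBITED_USES = {"social scoring", "behavioral manipulation"}
HIGH_RISK_DOMAINS = {"recruitment", "healthcare", "education", "justice",
                     "lending", "critical infrastructure"}
TRANSPARENCY_USES = {"chatbot", "generative ai", "emotion analysis", "deepfake"}

def classify(use_case: str) -> RiskLevel:
    """Map a declared use case to a provisional AI Act risk level."""
    uc = use_case.lower()
    if any(p in uc for p in PROHIBITED_USES):
        return RiskLevel.UNACCEPTABLE
    if any(d in uc for d in HIGH_RISK_DOMAINS):
        return RiskLevel.HIGH
    if any(t in uc for t in TRANSPARENCY_USES):
        return RiskLevel.LIMITED
    # Defaulting to minimal is only safe if the declared use case is complete;
    # a borderline system (e.g. inventory AI affecting food safety) goes higher.
    return RiskLevel.MINIMAL
```

In practice such a helper is only a first pass to prioritize the review queue; each provisional label should then be confirmed by a documented human assessment.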
The 5 Specific Requirements for High-Risk Systems
How to Ensure Compliance with the AI Act in 6 Steps
1. Comprehensive mapping of your AI systems
2. Classification by risk level
3. Audit of high-risk systems
4. Document compliance
5. Registration and CE marking
6. Continuous monitoring and incident management
How Avanteam Risk Manager and Quality Manager Help You Comply with the AI Act
Use case: industrial testing laboratory (ISO 17025)
Use case: food and beverage company
Conclusion: The AI Act—a challenge that reveals the maturity of your organization’s quality standards
⏱ Critical deadline: August 2, 2026
Pharmaceutical industry
Agri-food
- Complete technical documentation: system architecture, training data, measured performance, known limitations, and testing and validation procedures.
- Risk management system: identification, assessment, and mitigation of risks associated with the use of the AI system, updated throughout its lifecycle.
- Data governance: the quality, relevance, and absence of bias of the data used to train or operate the system.
- Formalized human oversight: procedures that ensure a qualified person can monitor, correct, or interrupt the AI system at any time.
- Recording & traceability: automatic logs of operations; registration in the European database of high-risk AI systems.
- AI usage mapping: catalog all your AI systems in a centralized registry, including risk level, purpose, and compliance status.
- Action plans & prevention: implementation of preventive measures and monitoring of action plans.
- AI incident management: incident reporting, traceability, and tracking with an integrated CAPA process.
- Document management: comprehensive technical files for each high-risk system, including specifications, test results, validations, and audit records.
- Human oversight workflows: formalized processes that ensure human supervision of critical AI decisions prior to execution.
- Post-deployment monitoring: continuous monitoring of performance and model drift to anticipate risks.
- AI Act compliance dashboards: real-time visibility into the status of each AI system, regulatory deadlines, and required actions.
