
For months, many companies have felt reassured by the timetable of the European Union’s Artificial Intelligence Regulation, the AI Act.
‘This will take years.’ ‘We’ll see in 2027.’ ‘It doesn’t affect us yet.’
Nothing could be further from the truth.
If 2025 has been the year of prohibitions and 2027 will bring full application of the Regulation, 2026 is the decisive year: the moment when artificial intelligence stops being an experimental matter and becomes a question of regulation, strategy and genuine business risk.
At Certus Legal Firm, we are seeing this clearly. Many companies use AI on a daily basis — in recruitment, scoring systems, customer service, data analysis, marketing or process automation — without yet having a clear understanding of what type of AI they are using, which risk category it falls into, and what specific obligations will apply from August 2026 onwards.
This article is not intended to alarm. Its purpose is to bring order, clarity and practical insight to the AI Act: what is coming, who it affects, and what companies that do not want to be chasing the regulator in two years’ time should already be doing.
The AI Act in plain terms, without unnecessary jargon
The EU Artificial Intelligence Act introduces a paradigm shift in how technology is regulated in Europe. Rather than focusing on the tool itself, it focuses on the risk generated by its use.
The approach is reminiscent of the General Data Protection Regulation (GDPR), but with a crucial difference: here, the regulation is less about data and more about the potential impact of AI systems on individuals, organisations and the market itself.
The starting point of the AI Act is therefore very clear: not all artificial intelligence is the same, and consequently not all AI should be treated the same from a legal perspective.
On this basis, the Regulation classifies AI systems into four main risk categories:
- Prohibited AI, banned outright due to its unacceptable impact on fundamental rights
- High-risk AI, permitted but subject to strict controls
- Limited-risk AI, primarily subject to transparency obligations
- Minimal-risk AI, freely usable, although subject to voluntary codes of conduct
The key point — and where many companies misjudge the situation — is that the AI Act does not remain at the level of abstract principles or ethical statements. It introduces very specific technical, organisational and documentary obligations, backed by a sanctions regime that can reach tens of millions of euros or significant percentages of global turnover.
In other words, this is not a best-practice guide. It is binding EU law, with direct implications for how companies design, purchase and use AI-based solutions.
The real timetable: why waiting until 2027 is a mistake
One of the most common errors we encounter is skim-reading the AI Act timeline. The Regulation does not become applicable all at once: its obligations take effect in stages.
The key milestones are as follows:
February 2025
Certain AI practices become fully prohibited, including:
- Generalised social scoring, in the style of the Chinese system
- Subliminal manipulation causing harm
- Large-scale scraping of biometric data
- Real-time remote biometric identification in public spaces (with very limited exceptions)
If detected, these practices may lead to immediate sanctions.
August 2025
Obligations for providers of general-purpose AI (GPAI), including foundation models, come into force. These include duties relating to:
- Transparency
- Training data documentation
- Risk management
- Enhanced obligations where the model is deemed to pose systemic risk
By this point, Member States must also have fully operational national authorities in place.
August 2026 – the critical moment
This is when the core obligations for high-risk AI systems begin to apply, particularly those listed in Annex III of the Regulation.
And this is where many companies will find themselves affected — often without realising it.
August 2027
Full application for high-risk systems embedded in regulated products (Annex I): machinery, medical devices, industrial products, and similar.
If you use high-risk AI in 2026, the full set of legal obligations already applies to you, even if you do not develop the technology yourself or consider your organisation to be ‘tech-focused’.
Who does the AI Act really affect?
A common misconception is that the AI Act is aimed solely at major technology companies or advanced model developers. This is not the case.
The Regulation follows a clear logic: AI is regulated where it is used and where it produces effects, not only where it is developed. As a result, its scope extends across the entire AI value chain, covering a wide range of business profiles.
In practice, the AI Act distinguishes between several legal roles, each with specific obligations:
- Providers, who develop an AI system and place it on the market or put it into service under their own name or brand
- Deployers, meaning organisations that use AI systems in their internal processes or services (for example, in HR, financial scoring, customer service, marketing or data analytics)
- Importers and distributors, who facilitate the entry and commercialisation of AI systems in the EU market
- Manufacturers, who integrate AI systems into regulated products or broader technological solutions
The direct consequence for many businesses is clear: using AI also creates legal obligations, even where the technology has not been developed in-house.
In addition, the AI Act adopts an extraterritorial approach, similar to that of the GDPR. It also applies to providers and users located outside the EU where:
- The AI system is used within EU territory, or
- Its outputs produce effects on individuals located in the EU
This so-called ‘Brussels effect’ turns the AI Act into a de facto global standard. For international companies, globally minded startups or non-EU technology providers, ignoring this dimension may represent a significant legal risk.
The real issue: high-risk AI
The true impact of the AI Act does not lie in generic chatbots, but in high-risk AI systems.
What qualifies as high-risk AI?
Primarily, systems listed in Annex III, used in areas such as:
- Biometrics
- Critical infrastructure
- Education and vocational training
- Employment and human resources
- Access to essential services (credit, insurance, housing)
- Justice and administrative processes
- Migration and border control
Many companies already use such tools without labelling them as ‘high-risk AI’, including:
- Automated CV screening
- Internal scoring systems
- Performance evaluation tools
- Customer or user prioritisation systems
- Fraud detection tools
From August 2026, these uses will no longer be experimental: they will be subject to binding legal obligations.
What does the AI Act require from 2026 onwards?
For high-risk AI providers
From August 2026, providers must have in place:
- A formal risk management system
- Guarantees on data quality and representativeness
- Traceable technical documentation
- Human oversight measures
- Cybersecurity and robustness safeguards
- Post-market monitoring
- In many cases, conformity assessment by a notified body
- CE marking as a high-risk AI system
- Registration in the EU AI database
These are legal obligations, not optional best practices.
For user companies (deployers)
This is the blind spot for many organisations.
From 2026, companies using high-risk AI must:
- Use the system in accordance with the provider’s instructions
- Ensure effective human oversight
- Guarantee the adequacy of input data
- Monitor system performance
- Suspend use where serious risks are detected
- Report incidents to the provider and the competent authority
In practice, this requires internal AI governance, not simply purchasing a tool and letting it run.
Spain as a regulatory testing ground: why the AI Act will be felt earlier here
Unlike some Member States, Spain has not adopted a wait-and-see approach to the AI Act. Its strategy has been clearly anticipatory, with direct consequences for businesses.
Since 2024, the Agencia Española de Supervisión de la Inteligencia Artificial (AESIA) has been fully operational. Its role goes beyond enforcement; at this early stage, the focus has been on identifying problematic practices, guiding operators and developing interpretative criteria ahead of full enforceability.
This has materialised, among other initiatives, in the launch of a national regulatory sandbox involving real-world projects in particularly sensitive AI Act areas such as employment, biometrics and financial services.
This is complemented by increasing coordination with other authorities, including the Agencia Española de Protección de Datos (AEPD) and the Comisión Nacional de los Mercados y la Competencia (CNMC), reinforcing a cross-cutting supervisory approach that links AI with data protection, competition, consumer law and fundamental rights.
In parallel, Spain’s forthcoming national AI legislation is expected to align fully with the AI Act’s sanctions regime, with fines of up to €35 million or 7% of global annual turnover for the most serious infringements.
The message is clear: Spain is positioning itself as one of the first environments where AI Act compliance will be closely scrutinised. For companies operating here — or whose AI systems impact the Spanish market — early action and specialised advice represent a strategic advantage, not a cost.
So… what should companies be doing now?
From a practical perspective, several lines of action should be addressed without delay.
First, companies need a clear overview of their actual AI uses, both internal and external. Many obligations arise from tools embedded in HR, data analysis, marketing or customer service processes that are often overlooked.
Second, these uses must be classified by risk level. Identifying which systems may qualify as high-risk under the AI Act is essential to understanding which obligations will apply from August 2026.
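By way of illustration only, the sketch below shows one way a compliance or technology team might record such an inventory internally. The field names, categories and example entry are hypothetical and are not prescribed by the AI Act; they simply mirror the classification logic described in this article, and any working classification should be validated with legal advice.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RiskLevel(Enum):
    """Working risk categories mirroring the AI Act's four-tier approach."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIUseRecord:
    """Hypothetical entry in an internal AI-use inventory (illustrative only)."""
    system_name: str              # vendor tool or internal project name
    business_area: str            # where it is used: HR, marketing, customer service...
    role_under_ai_act: str        # "provider", "deployer", "importer", "distributor"
    risk_level: RiskLevel         # working classification, to be confirmed legally
    annex_iii_area: Optional[str] # e.g. "Employment and human resources", if applicable
    oversight_owner: str          # person or team with authority to intervene or suspend


# Illustrative example: an automated CV-screening tool used by the HR department
cv_screening = AIUseRecord(
    system_name="CV screening tool (third-party SaaS)",
    business_area="Human resources",
    role_under_ai_act="deployer",
    risk_level=RiskLevel.HIGH,    # employment-related use points to Annex III
    annex_iii_area="Employment and human resources",
    oversight_owner="HR director",
)
```

Whatever form such a register takes, the point is that the classification is documented, kept up to date and attributable to someone with real authority over the system, rather than assessed informally once and forgotten.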
Next, relationships with AI providers should be reviewed. The Regulation introduces contractual implications relating to documentation access, usage instructions, allocation of responsibilities, incident notification mechanisms and system withdrawal in the event of serious risks.
Internal governance is another key pillar. The AI Act assumes that organisations have clearly designated oversight responsibilities, with real authority to intervene, suspend or correct AI systems. This connects directly with compliance, technology, HR and senior management.
Finally, AI use requires ongoing monitoring and response procedures. Compliance is not a one-off exercise, but a continuous process of detecting deviations, incidents or unforeseen impacts and responding with speed and traceability.
All of this leads to a central conclusion: the AI Act is not merely a technical regulation. It is a strategic decision about how a company integrates artificial intelligence into its business model without assuming unnecessary legal, reputational or economic risks.
