5 December 2025

Artificial intelligence ceased to be a futuristic concept years ago. It has become an everyday presence in the working lives of many professionals (and individuals).

However, while the technology advances at a speed that surprises even those of us who work in it, the ability of institutions to set reasonable limits is being tested every day. It is against this backdrop that the European Union decided last year to take a firm step forward and present what is perhaps the most ambitious and comprehensive regulation attempted to date: the AI Act, the new regulation on artificial intelligence that is setting, and will continue to set, the course for the coming years.

A framework for Artificial Intelligence born out of a need

The most interesting aspect of this regulation is that it does not arise as a purely bureaucratic exercise, but as a response to a tangible reality: artificial intelligence is making decisions that previously depended on human judgement. This makes it necessary to establish a framework that guarantees a technological transition that does not undermine the basic principles on which our democracies are built. Europe has sought to anticipate the problem rather than wait for the technology itself, or the companies that develop it, to determine these limits.

This approach explains why the regulation begins with something as straightforward as establishing which uses of artificial intelligence will simply have no place in the European market. This is not a question of demonising the technology, but of recognising that certain applications, such as mass biometric identification or systems designed to exploit people's vulnerabilities, can be deeply harmful to the fundamental rights of the population. And here, Europe leaves no room for doubt.

The middle ground of Artificial Intelligence, where most companies operate

Even so, the truly complex part of the regulation is not in the prohibited systems, but in those that it regulates with special attention: the so-called high-risk systems. This is where the conversation becomes particularly relevant for businesses. The regulation of artificial intelligence does not seek to prevent its use in sensitive areas (such as recruitment, credit assessment or critical infrastructure management), but it does require that it be used with real oversight, robust internal controls and the ability to explain why an algorithm makes a particular decision.

What this regulation essentially requires is that companies truly understand the technology they use. And we are not just talking about developers or large corporations: many companies use AI indirectly, through third-party solutions integrated into their internal processes. The biggest difference from previous stages is that they must now be able to demonstrate that they know what this technology does and with what guarantees they are using it.

It is a cultural and legal change. For years, AI has been a kind of ‘catch-all’ that solved problems and automated processes without too many questions. Today, Europe has decided that this is no longer enough.

Across the Atlantic: The Artificial Intelligence Debate

The publication of the AI Act coincides with a much less orderly global debate. The recent column by Certus CEO Jorge del Valle, focusing on the confrontation between Elon Musk and OpenAI, offers a fairly clear picture of how this conversation is being played out elsewhere. Whereas Europe is discussing oversight and transparency, in the United States the debate seems to revolve more around the power that those leading the development of artificial intelligence can accumulate.

The clash between Musk and OpenAI is not just another anecdote in the newspapers: it reflects a battle to define who controls the most influential technologies. And, even more so, what values will prevail when these tools are integrated into all sectors, from education to politics. For the European reader, this contrast is particularly revealing. While here we are trying to build a framework of guarantees, elsewhere internal conflicts and business interests largely determine the course of innovation.

An attempt to shape the future of AI before it is too late

The great contribution of this regulation, beyond its technical details, is that it brings an essential element back into the public conversation: the idea that technology cannot advance without a clear ethical framework. The regulation of artificial intelligence does not seek to stifle creativity or competitiveness; it seeks something deeper: to ensure that digital transformation does not undermine precisely what makes it valuable, namely trust.

In this sense, Europe is sending an unequivocal message. The development of AI is welcome and necessary, but it must take place within an ecosystem where citizens and businesses know what to expect and where the limits lie. And, above all, an ecosystem where no one can act without accountability.

While the rest of the world continues to debate who should control artificial intelligence, Europe has taken a step forward. It may not be the definitive answer, but it is an essential starting point.
