
On 29 October 2025, the Centre Pompidou Málaga became a meeting point for managers, legal experts and technology professionals during the event El Español Meeting – OPPLUS: The regulation of AI in companies, where the regulatory framework for Artificial Intelligence (AI) and its impact on organisations was discussed.
The meeting offered an essential opportunity to understand the role this framework now plays in the strategy and governance of European companies.
A space for dialogue on the AI legal framework
The meeting began with a welcome from Ángel Recio, Delegate of El Español in Málaga, and an opening session led by Pablo Benavides, CEO of OPPLUS, together with Patricia Navarro, Delegate of the Government of the Junta de Andalucía.
The central focus was the round table ‘Managing AI with rules’, which brought together Francisco López (Director of the Legal Area at OPPLUS), Rocío Ramírez (Senior Technology Product Manager at Wolters Kluwer Legal Software) and Jorge del Valle, Partner at Certus and specialist in technology law and digital governance.
The contribution of our partner, Jorge del Valle, provided a practical and strategic perspective on how companies should prepare for the new AI legal framework developed by the European Union.
The meeting concluded with a networking breakfast, which enabled participants to share experiences, challenges and strategies in response to the new regulatory environment.
The event made it clear that collaboration between the public sector, companies and technology-law experts will be essential to building ethical, safe and sustainable AI.
The Artificial Intelligence Act: timeframes and business challenges
During his presentation, Jorge del Valle recalled that the EU Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) will be fully applicable from 2 August 2026, but with earlier deadlines that require companies to act now:
- 2 February 2025: application of the chapters on general provisions and prohibited AI practices.
- 2 August 2025: application of the sections on notifying authorities, general-purpose AI models, governance, penalties and confidentiality.
- 2 August 2027: high-risk AI systems embedded in products covered by Annex I harmonisation legislation.
According to our partner, this schedule marks a turning point: the challenge is not only to comply with the regulation, but to learn how to manage AI responsibly and transparently.
Jorge del Valle emphasised that organisations must stay ahead of legal requirements if they wish to maintain the trust of customers, employees and regulatory authorities. The aim is not to limit innovation, but to make it sustainable and ethical.
Business risks in the adoption of generative AI
Throughout his presentation, Jorge del Valle detailed the main risks associated with generative AI according to business function, showing that digital transformation also requires transforming risk management across the following areas:
- Management and strategy: excessive technological dependence; decisions based on biased data; reputational risks linked to a lack of transparency.
- Legal and compliance: conflicts with the GDPR and the AI Act; difficulties in explaining automated decisions in litigation; risks of intellectual property infringement.
- Information technologies: vulnerabilities in third-party tools (22% of SMEs have suffered cyberattacks through external platforms) and a lack of traceability in the use of AI models.
- Human Resources: algorithmic biases in recruitment and assessment; only 26% of employees have received training in the responsible use of AI.
- Commercial and marketing: misuse of customer data; creation of deepfakes or misinformation; exposure of sensitive information on generative platforms.
These points demonstrated that AI governance is now a key element of competitiveness. AI adoption must be accompanied by internal policies, ethical controls and human review protocols that mitigate biases and ensure traceability.
Essential measures for responsible AI
In the final part of his speech, Jorge del Valle proposed a risk management plan, supported by our team at Certus, designed to offer companies a roadmap for implementing responsible AI:
- Risk diagnosis and assessment: inventory AI systems; analyse the data they process; identify high-risk uses before deployment.
- Governance and responsibility: create an AI committee connected to data-protection and cybersecurity areas.
- Continuous training: promote a culture of ethical AI through training programmes and meaningful human oversight.
- Transparency and clear notices: inform users when a decision or interaction involves an AI system.
- Rigorous selection of tools: choose providers that guarantee security, confidentiality and regulatory compliance.
- Technical security and audits: regularly review algorithms, data sources and licences, documenting each process.
- Risk coverage: take out adequate cyber-insurance policies to support both protection and incident response.
Our expert recalled that AI management must be integrated into the overall corporate compliance and sustainability strategy, and not treated as an isolated or experimental initiative. Generative AI must go hand in hand with generative responsibility: innovation is entirely compatible with sound self-regulation.
Towards a business culture of digital responsibility
The final message of the session was clear: AI regulation does not seek to curb innovation, but to ensure that its impact is positive and reliable.
For Jorge del Valle, the real challenge is to transform corporate culture by adopting technological governance models that inspire confidence in both the market and society.
