15 December 2025

The public dispute between Elon Musk and OpenAI, crystallised in cross-claims and accusations about commercial drift and alleged betrayal of the organisation's founding mission, has generated more headlines than certainties. However, beyond the media hype, the conflict highlights a fundamental debate about the configuration of the global Artificial Intelligence ecosystem and the type of governance needed to ensure that its development is safe, responsible and compatible with the general interest. As Jorge del Valle pointed out in his column in El Español, this is not just a personal or corporate disagreement, but rather the visible manifestation of a structural tension between the founding ideals of AI and the economic dynamics that, in practice, condition its evolution.

Anyone who has followed OpenAI's trajectory even casually knows that the entity was initially presented as a unique project in recent technological history: a laboratory focused on developing advanced systems without being immediately subject to the logic of financial return. This approach arose in an environment dominated by large technology corporations with clearly defined business objectives. However, the actual deployment of models such as GPT-4 or GPT-5 has demonstrated the extent to which cutting-edge artificial intelligence requires extremely costly material structures: distributed data centres, intensive energy consumption, access to specialised hardware, and a highly competitive global talent market.

For the non-specialist citizen, AI is often identified with specific applications (a conversational assistant, an automatic translator, a recommendation algorithm), but the infrastructural scale that supports these services is rarely perceived. We are talking about multi-million-euro investments, highly concentrated technology supply chains and a growing dependence on a small number of suppliers capable of training and maintaining large models. This context helps to explain why initiatives that were initially almost altruistic in nature have ended up adopting complex corporate structures aligned with private capital.

It is precisely at this point that the discussion between Musk and OpenAI takes on public relevance. The underlying question is not only whether internal commitments were betrayed, but whether it was inevitable to abandon the original idealism or whether there were alternatives, perhaps slower, but more in line with the founding mission. This dilemma raises an even greater question, clearly legal and political in nature: if the development of high-impact artificial intelligence is only viable within private economic frameworks, how do we prevent its effects from eroding fundamental rights or creating systemic risks for society?

From Foundational Idealism to Market Logic: A Structural Transition

OpenAI was born as a moral counterweight to the growing concentration of technological power in Silicon Valley. It brought together top-level technical profiles, investors concerned about the risks associated with AI, and public figures with the ability to influence the global conversation. Its initial proposal combined cutting-edge research, knowledge dissemination, and an explicit desire to become an ethical benchmark in a field where innovation was advancing with little external control.

The subsequent adoption of a hybrid structure, the capped-profit model, marked a turning point. It meant recognising that philanthropic funding and traditional contributions were not enough to sustain the race for frontier models. To remain relevant, it was necessary to open the door to private capital and compete in a fast-moving global market. The implicit message was clear: the mission was still valid, but it could no longer be sustained outside the logic of business.

It is this shift that Musk interprets as a broken promise. OpenAI, for its part, presents it as an inevitable adaptation to the conditions of its environment. Both readings capture part of the reality. On the one hand, it is undeniable that cutting-edge AI requires colossal resources; on the other, it is also true that an altruistic mission comes under strain when it enters the corporate game and begins to share time, language and priorities with venture capital.

From the perspective of technology policy analysis, this dispute reveals an uncomfortable reality: the development of advanced artificial intelligence is no longer in the hands of technological NGOs or academic laboratories. The frontiers of development are now decided in boardrooms and at negotiating tables involving large companies, investment funds and strategic players. University research and public centres remain essential, but they no longer set the direction of the ecosystem on their own.

This observation has very concrete consequences. Who controls frontier AI shapes the conditions under which personal data is processed, how content is moderated on digital platforms, and the criteria that determine what a user sees when searching for a job, credit or health information. Artificial Intelligence ceases to be a mere technical resource and becomes an infrastructure that can affect access to opportunities and rights.

Ethics and Artificial Intelligence: theoretical reflection and institutional architecture

One of the few positive effects of the conflict between Musk and OpenAI is that it has reignited a debate that is often overshadowed by technological enthusiasm: the tension between ethics and the development of artificial intelligence. It is no longer just a matter of identifying biases in specific models, but of analysing the power structures, information asymmetries and economic incentives surrounding the development of AI.

In academic and professional circles, the relevant questions are not limited to superficial criticism, but include issues such as:

  • What transparency obligations should be imposed on models trained with large volumes of data, especially when using personal, confidential or sensitive data?
  • How is accountability articulated when an AI system generates misinformation and causes economic, reputational or even physical damage?
  • What institutional structure is needed to ensure independent oversight with real technical capacity to audit models and demand corrections?
  • Is a highly centralised global AI infrastructure compatible with the principles of pluralism, effective competition and consumer protection?

Added to all these problems is the impact of AI on the labour market and the redistribution of the value it generates. The automation of cognitive tasks, the transformation of skilled professions and the emergence of new job profiles pose challenges that cannot be addressed solely from a business perspective. Without a coherent framework of AI legislation and public policies to accompany this transition, there is a risk that the benefits will be concentrated among a few players while the costs are spread across society as a whole.

Furthermore, the geopolitical dimension is becoming increasingly evident. While the United States relies heavily on market self-regulation and China integrates AI into its state strategy for control and economic development, the European Union is trying to articulate a middle ground through AI legislation that prioritises legal certainty, traceability and the protection of fundamental rights. This difference in approach is not merely technical: it defines the type of relationship established between public authorities, technology companies and citizens.

The AI Act: a regulatory framework for an ecosystem that is advancing faster than the law

In this context, the European AI Act is presented as the first attempt at comprehensive regulation of Artificial Intelligence in a large economic area. Its aim is not to stifle innovation, but to introduce a risk-based approach in which obligations are proportional to the potential impact of each system. To this end, the regulation classifies AI applications into categories according to the risk they pose (minimal, limited, high and unacceptable, the latter leading to outright prohibition) and assigns different obligations according to this classification.
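Purely as an illustrative sketch of this proportionality logic, and not as a statement of the legal text, the tiered approach can be pictured as a mapping from risk category to obligations. The tier names below mirror the regulation's categories, while the obligations listed are simplified, hypothetical examples rather than the actual legal requirements:

```python
from enum import Enum


class RiskTier(Enum):
    """Risk tiers loosely mirroring the AI Act's categories (illustrative only)."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"  # practices in this tier are prohibited


# Hypothetical, highly simplified mapping of tiers to example obligations;
# the real legal requirements are far more detailed and context-dependent.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskTier.HIGH: [
        "technical documentation and traceability",
        "data governance",
        "human oversight",
        "conformity assessment before deployment",
    ],
    RiskTier.UNACCEPTABLE: ["may not be placed on the market"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {obligations_for(tier) or ['no specific obligations']}")
```

The point of the sketch is simply that obligations scale with risk: minimal-risk uses face essentially no new duties, while high-risk uses carry documentation, oversight and assessment burdens, and unacceptable-risk practices are banned outright.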

From a legal and academic perspective, several key elements of the AI Act are worth highlighting:

  • The definition of high-risk systems in particularly sensitive areas, such as employment, education, access to essential services, migration, and the administration of justice.
  • The imposition of documentation, traceability, and data governance requirements that seek to make the design, training, and deployment processes of systems auditable.
  • The obligation to clearly inform users when they interact with generative AI systems or interfaces that may be misleading.
  • The strengthening of the role of national authorities as supervisory bodies and the creation of coordination mechanisms at European level.

The AI Act is not without its limitations, and its practical application will give rise to interpretative doubts and progressive adjustments. However, it introduces a key element that has been missing until now: a common frame of reference that requires ethics and risk management to be integrated into system design, not as a voluntary commitment, but as a regulatory requirement with legal consequences.

In contrast, episodes such as the dispute between Musk and OpenAI illustrate what happens in the absence of clear rules: AI governance is conditioned by the internal dynamics of companies with enormous market power, whose objectives do not always coincide with the interests of society as a whole.

AI regulations: the limits of self-regulation

Experience in the digital sphere has highlighted the limits of self-regulation. The tensions between Musk and OpenAI are yet another example of how initially ambitious ethical commitments can be diluted when they conflict with the need to grow, attract investment or respond to competitive pressure. In the field of artificial intelligence, these limits are amplified due to the potential impact of systems on rights, welfare and social structures.

Hence, AI regulations must go beyond corporate codes of conduct. To be effective, they require:

  • Independent oversight with sufficient technical and legal capabilities.
  • Verifiable standards to assess regulatory compliance.
  • Regular technical audits that also focus on social impact.
  • A clear regime of legal liability.
  • Proportionate mechanisms for intervention in high-risk uses.

Relying exclusively on the goodwill of private actors would mean replicating a model that has already proven insufficient in other areas of the digital economy. The Musk v. OpenAI case is not an isolated anomaly, but a reminder of the fragility of ethical commitments when there are no solid regulatory checks and balances.

A personal dispute with structural implications

From a broader perspective, the confrontation between Musk and OpenAI should not be interpreted solely as a conflict between former allies, but as a symptom of the transformation of AI into critical infrastructure for the contemporary economy and social life. In this scenario, the logic of ‘move fast and break things’ is no longer acceptable. What is at stake are no longer marginal applications, but systems that influence financial markets, democratic processes and information flows.

Once AI is understood as critical infrastructure, a central idea deserves emphasis: artificial intelligence no longer belongs to its creators. Once deployed on a large scale, it becomes integrated into value chains, regulatory frameworks and social expectations that go beyond any specific company. Its impact is cross-cutting and changes the power relations between states, companies and citizens.

The Musk v. OpenAI case reinforces the intuition that underpins the European response: the future of AI cannot depend exclusively on business decisions or personal disputes. It requires rules, institutions and principles capable of providing stability beyond technological cycles and media hype.

A fledgling conversation

It is likely that, with the passage of time, the dispute between Musk and OpenAI will be perceived as just another episode in the history of technology. Today, however, it serves the important educational purpose of forcing us to analyse Artificial Intelligence not only as a set of useful tools, but also as a social, economic and legal phenomenon of enormous scope.

The question is no longer whether we will use AI systems (something we already do on a daily basis), but under what conditions, with what guarantees and according to what rules of the game. This conversation cannot be reserved for engineers or executives. It requires the participation of legal, economic and educational experts, as well as an informed citizenry.

Legislation for AI, the various AI regulations and debates on ethics and Artificial Intelligence are not abstract issues. They determine how decisions are made that affect access to services, employment, information and, ultimately, social trust in technology.

Europe has taken a decisive step forward with the AI Act. Its success will depend on its proper implementation and on the collective ability to understand that regulation is not a brake on innovation, but a condition for it to be sustainable and legitimate. The alternative, AI without clear rules, increases risks and erodes trust.

Ultimately, the lesson is clear: beyond the names involved, the real challenge is to build Artificial Intelligence that serves society. AI that advances with ambition, but also with discernment; that innovates without losing sight of the rights and values that underpin our democracies.
