Opinion: Why the AI act is a huge triumph and how we can make it work

21 March 2024 | Last week, the European Parliament adopted the Artificial Intelligence Act, clearing the last major hurdle for this EU regulation to enter into force. It was a historic moment, making Europe the first continent to rein in the transformative and fast-developing technology.

Whether the attempt will succeed remains to be seen. But this is a big first step—not just for the EU, but for the world—towards a moral and ethical compass for future machine intelligence. The next step is to make sure the act passes its reality check.

Questions answered

The AI Act answers fundamental questions: what needs to be regulated and what doesn’t? Who is accountable? The EU has an obligation to protect its citizens, and with the AI Act, it has certainly erected some defences.

Other parts of the world are likely to follow suit, either via legislation or voluntary codes. Even if they do not, the act will have global influence: if abiding by the AI Act is your ticket to the single market, you’ll probably comply.

Another positive is that the regulation gives a lot of leeway to open-source AI, where code can be scrutinised and modified by all. It allows developers worldwide to collaborate and improve upon existing technologies, making them more democratic and accelerating innovation. This approach should smooth the act’s reception in the tech community and beyond.

Many researchers are pleased the act exempts AI developed for scientific research from some of its provisions. However, this requires careful delineation to ensure it does not become a loophole for evading compliance. Furthermore, the distinction between research and commercial use, especially in areas like foundation models, will need clarification.

Ideally, research eventually leads to commercialisation, so the provisions will start to apply at some point during the innovation process. Drawing a hard line between research and products on the market might therefore actually hamper innovation.

Losing competitiveness?

European AI companies have already expressed concerns that the regulation will blunt their competitive edge, and that Europe will become more reliant on technology developed elsewhere, jeopardising tech sovereignty. But what is the alternative? The need for regulation is a no-brainer; to mitigate the risks to competitiveness, it should come with increased efforts to foster innovation. The EU can support its tech industry by, for example, avoiding legal uncertainties and working collaboratively on regulatory sandboxes.

Dedicated support

Alongside this, the new rules need to come with dedicated support. The AI Act is the product of years of tough negotiations; if policy experts struggle to digest it, imagine how difficult it will be for researchers, startups and small and medium-sized enterprises.

The European Commission has established an AI Office, but support on the ground is also needed. Researchers and companies need clear guidelines and training on which categories their AI systems fall into, and on the corresponding obligations.

The AI Act will have to prove that it is fit for purpose when theory clashes with reality in universities, companies, households and the public sphere. The rapid pace of development will provide another stress test.

Responsive and adaptive regulation will require constant evaluation and frequent updating, especially in areas developing as rapidly as AI. The act was drafted before most people had heard of generative AI, let alone used ChatGPT. Alongside all their benefits, the potential risks of such large language models need to be anticipated.

Clear feedback loops and timely adjustments will be needed to make sure neither consumer protection nor innovation is hampered. More could be done to consult stakeholders from all spheres of life. Let’s make sure their feedback is heard.

Human-machine interface

Finally, a world where machines help humans take decisions, sometimes in ways beyond our understanding, needs educated, literate citizens who are critical thinkers. This education must start young and continue throughout life.

When maths, writing and even programming are done more and more by machines, we also need people able to handle AI at a professional level—data scientists, engineers, ethicists, regulators and lawyers. The human-machine interface will take centre stage. Europe needs frameworks and means that can make society as a whole fit for this new world.

The AI Act is a pioneering effort and the EU deserves credit for its creation. Yet it will take a collective effort to make sure the shoes fit this giant leap for the world.

Tatiana Panteli is head of the EuroTech Universities Alliance Brussels Office. Susan Hommerson is a policy officer at Eindhoven University of Technology. Carlo van de Weijer is general manager at the Eindhoven AI Systems Institute, and leads the EuroTech focus area AI for Engineering Systems.

This op-ed was published in Research Europe and on www.researchprofessionalnews.com on 21 March 2024.