
“There is a strong political will to act on the topic of generative AI”

25 July 2023 | The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. Member of the European Parliament Eva Maydell is one of the main actors in the legislative process. In this interview with EuroTech, she outlines Parliament’s approach, explains the role of research and education in the AI Act, and defends the proposal against “too little, too late” concerns.

Interview with Eva Maydell MEP

The EU Artificial Intelligence Act, like most European legislative initiatives, was drafted by the European Commission and is now being negotiated between the Council of the European Union and the European Parliament. On the Parliament's side, several committees are involved in the process, amongst them the Committee on Industry, Research and Energy (ITRE).

Eva Maydell MEP is the ITRE rapporteur for the EU AI Act and, as such, has been in charge of drafting a report on the European Commission's proposal, reflecting the opinion of the ITRE members. In drafting her report, Eva consulted with relevant experts and stakeholders. She has also been responsible for drafting compromise amendments.

On 14 June 2023, MEPs adopted Parliament's negotiating position on the AI Act. Talks with EU countries in the Council on the final form of the law will now begin. The aim is to reach an agreement by the end of this year.

Eva Maydell MEP, Rapporteur on the AI Act, voting in the European Parliament
© Eva Maydell, June 2023

Eva, the AI Act proposal of the European Commission basically does not mention research at all. Your amendments bring research a bit more into the picture, but its role remains marginal. Why is that?

When the Commission proposed the AI Act, many were surprised to see research not being mentioned. Presumably, the reason for that was that the Regulation applies to products upon placement on the market, not those in a research phase. To me, it was important that this “implicit” research exemption be made clear in the text. This is why in the ITRE Committee, where I am Rapporteur, we included clear language stating that the Regulation shall not affect research activities in the Union. We were the first to adopt this language back in June 2022 and I am glad to see the research exemption has stayed in the final European Parliament position that was adopted in June 2023. Seeing that the Council has made similar changes to their text, it is my hope that at the end of the trilogue phase, this research exemption will be maintained.

The AI Act differentiates between research on the one hand and marketable products on the other. However, there is rarely a clear line between research and application – fundamental research, in the best of cases, will eventually feed into real-world solutions. Does the AI Act take that sufficiently into account?

Indeed, in my conversations with many researchers from European universities, they stress the importance of collaborating with different actors – sometimes in the private sector – to carry out their research activities. No situation is perfectly black and white, but I do believe we took this situation sufficiently into account. Our adopted Parliament version states that the Regulation shall not apply to “research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and the applicable Union law.” We have heard positive feedback on this approach but remain open to further insight during the trilogue phase.

The proposal for the EU AI Act was presented by the European Commission in April 2021. Now, in July 2023, we know a lot more about AI than we did two years ago. Is the AI Act in its current form still fit for purpose?

When the initial AI Act was presented by the Commission back in 2021, we had not seen many of the breakthroughs that we are seeing today. With the AI race heating up, we have seen the debate around generative AI intensify. There is a strong political will to act on the topic of generative AI and in particular on powerful foundation models. In addressing these latest developments, my philosophy has always been that we should not play “whack-a-mole” or just aim to regulate the “next big thing.” Instead, we should focus on a set of underlying principles and mechanisms to address high-risk AI such as data checks, risk assessments, transparency measures, etc. On the European Parliament side, we presented an entirely new text on generative AI and foundation models. I believe we have struck this balance well, and we will aim to maintain this targeted, flexible approach during the trilogue negotiations.

An open letter published in March 2023 and signed by AI experts from all over the world, including tech leaders like Elon Musk and Steve Wozniak, urges labs to put the brakes on AI research for at least six months, to prevent the development of AI too powerful to control. Does this letter worry you?

While I can sympathize with some of the concerns stated in the letter, it is too alarmist for my personal views and I would not fully subscribe to it. For one, it is not realistic: How would it be enforced? By whom? More importantly, however, we should be focusing on risk mitigation measures rather than calling for complete pauses in technological development. It is possible to have technological progress in a safe and transparent manner. What I do think has merit in the letter, though, is the idea that we need to properly reflect on all that is happening around us and engage in more discussions about it. Such discussions should not happen in silos; they should happen between industry, tech players, academia and civil society.

What role does education play in the field of AI? What would you expect from universities in that regard?

Universities are the beating hearts of our societies – they help to shape the next generation. They push students to be curious, to learn, to engage in honest and thought-provoking debates. It is crucial that we continue to foster skills like critical thinking, teamwork and other soft skills that will become even more valuable as AI becomes more pervasive. Universities will be the ones to prepare people for the jobs of the future, but more than that, they should be a place that fosters debate about the type of world in which we want to live. The more powerful these systems become, the tougher the questions we will be facing. We are speaking about AI systems that could massively transform society as we know it. So, we need to already be engaging in discussions about the shared vision we have for the future and what this means for social trust, democracy and society more broadly.

Thank you very much for this interview!


Eva Maydell has been a Member of the European Parliament for Bulgaria since 2014. She is a member of the European People’s Party (EPP) group. Her parliamentary work focuses mainly on improving the quality of education for young people, expanding opportunities for entrepreneurs and promoting technology and digitalization.

On 25 May 2023, we organised a EuroTech Policy Deep Dive on the EU AI Act and its implications for researchers and innovators, with speakers from the European Commission and the EuroTech Universities sharing their views. You can find the recording on YouTube. For more information on the event, including the speakers, see the event page.