Artificial lack of transparency
AI is changing how information is analyzed and knowledge is created. Although it shows enormous potential, without clear rules there is a risk of even more misinformation and disinformation, as well as of restrictions on our civil liberties. The EU regulation on AI is the world’s first attempt at a legislative solution to steer developments in the right direction. A researcher from the ECONtribute Cluster of Excellence is exploring how ‘Brussels’ could transform from regulatory monster into export success.
Jan Voelkel
If you ask ChatGPT who Indra Spiecker is and what her field of research includes, you might get the impression that she is an IT expert or a computer scientist. According to the chatbot from the American software company OpenAI, Spiecker is a scholar who “primarily deals with topics relating to computer science and artificial intelligence. Her research often focuses on machine learning, data analysis and the development of algorithms. She is also interested in the ethical and social implications of AI technologies”.
Professor Spiecker is actually a lawyer by training. She holds the Chair of Digitalization Law and is the Director of the Institute for Digitalization at the University of Cologne. ChatGPT has not completely missed the mark, but the answer is misleading because it omits her actual academic field. Yet if you do an internet search yourself, her work as a lawyer is among the first things you’ll find. “This is a prime example of how impossible it is to understand how artificial intelligence arrives at its results,” Spiecker herself says.
For the researcher, whose work focuses on the regulation of digital developments, this is a core problem of artificial intelligence. The example shows that AI makes many hidden decisions about how it collects, processes and analyzes data. From the vast amount of data available, ChatGPT and other AI systems select the information the software considers relevant and include it in the result. AI is designed to recognize patterns and make connections. Yet it does not necessarily produce the ‘correct’ answer based on the underlying data, but rather the answer it considers most likely. “In the case of my personal bio, I obviously know whether it’s plausible or not. But if I don’t have that information, it becomes a problem,” says Spiecker. How does AI handle gaps, uncertainties or contradictory information? The researcher sees considerable hidden value judgements at work here.
AI doesn’t like to talk about itself
This is particularly obvious when you ask ChatGPT’s Chinese competitor DeepSeek about certain content. The chatbot is notably tight-lipped when it comes to political issues. When asked about the Chinese military’s violent suppression of the Tiananmen Square protests in 1989, DeepSeek says: “Let’s talk about something else instead.”
This phenomenon can be problematic in view of the potential areas of AI application, because its use at all levels – private, business and public – changes the way knowledge is generated and decisions are made. From evaluating information for search engines to analyzing research data, autonomous driving, real-time face recognition and decision-making in court cases, the potential of AI seems almost limitless.
For us users, the decisions and underlying values of language-based AIs are incomprehensible. Creating transparency seems impossible, as the underlying algorithms are a ‘black box’. Although there are models of artificial intelligence that supposedly explain themselves – so-called explainable AI – this approach is also unreliable. Again, it is impossible to determine whether the explanation an AI spits out is accurate, or whether the system is merely programmed to make an answer sound plausible and simulate transparency. Such ‘explanations’ are generated after the fact, based on a decision that has already been made, and therefore don’t offer true transparency. This is where Spiecker’s research comes in. “Where we are no longer able to control exactly what happens, we need regulation,” says the legal scholar, who is a member of the ECONtribute Cluster of Excellence.
In the Cluster of Excellence, scientists conduct joint research on markets at the intersections of business, policy and society. As generative and language-based models have increasingly become part of our everyday lives, the field of artificial intelligence has become extremely dynamic. This makes it difficult to maintain an overview, to predict possible developments and assess their consequences, and, in turn, for us as a society to decide what we actually want and can allow. That creates an ideal setting for the researchers at ECONtribute: they advise policymakers on how to use their opportunities to contribute to efficient and socially acceptable market developments. “Regulation helps to sharpen our focus and assess where a technology is actually heading,” says Spiecker.
Good rules that are hardly enforceable
According to the Federal Statistical Office, every second large company in Germany now uses AI. Among medium-sized companies with 50 to 249 employees, the figure is more than a quarter. Expectations fluctuate between the promise of salvation and dystopia. The European Parliament, for example, writes that Europe’s prosperity and economic growth depend largely on how the new technologies are used. Artificial intelligence has “the potential to fundamentally change our lives, both for the better and for the worse”.
Together with philosopher Judith Simon and computer scientist Ulrike von Luxburg, Spiecker has written a discussion paper on the social and ethical challenges posed by generative AI for the German National Academy of Sciences Leopoldina. The Leopoldina provides independent advice on important political and societal topics for the future. In this paper, Spiecker and her colleagues also take a critical look at the European Union’s Artificial Intelligence Act (EU AI Act) – the world’s first law on the use of artificial intelligence.
The regulation was adopted by all EU member states and came into force on 1 August 2024. The AI Act aims to create a standardized legal framework in the EU and classifies AI applications into different risk categories, from minimal to high risk. Among other things, it stipulates certain transparency and documentation requirements, and it bans particularly problematic areas of application. The EU is putting a stop to social scoring, for example. Social scoring is a system that collects data on individuals and uses artificial intelligence to reward desirable behaviour or sanction undesirable behaviour, based on a point system. The resulting score is used to govern access to infrastructure and resources, for example in the allocation of university or kindergarten spots, in permission to enter or leave a country or, at an economic level, in the granting of loans.
In addition, applications in particularly sensitive areas, such as security, critical infrastructure or health, must meet more stringent requirements. For a minimum level of transparency, the regulation requires that users be informed about the use of AI and that AI-generated images and videos be labelled as such. But Spiecker is sceptical about how the regulation will be enforced to protect EU citizens: “That all sounds very good, but it is hardly feasible in practice. In addition to the problem of transparency, many other things are missing, such as clear content standards or effective monitoring, because individuals are not in a position to check compliance with the rules.”
The EU is paving the legal path – will the world follow suit?
Despite such problems, she is convinced that moving towards a common European regulatory approach is a very important step. That’s because the EU is playing a pioneering role with this regulation. Nowhere else is there a framework as comprehensive as this one on how to use and handle AI. “The world is aware of this and is watching very closely,” says Spiecker. And it would probably not be the first time that European rules catch on elsewhere. “There are countless examples of European law becoming a real export success. Europe is a strong player, so the new regulation could also have a so-called ‘Brussels effect’.”
This effect refers to the phenomenon of European legislation being adopted in other regions due to Europe’s market strength in the world. Brazil, for example, is working on its own version of the AI regulation. The experts there have intensively analyzed the European legal situation and drawn on the EU AI Act in developing their own regulatory framework. “That’s why positioning ourselves in Europe and creating momentum is so good and important,” says Spiecker. The current regulation is a starting point that can be further developed and adapted in the future.
Although the regulation has now been adopted, its provisions will not all apply at the same time; they are expected to be phased in by 2027. It is already becoming apparent that this could be a slow process. Even the appointment of supervisory authorities is a challenge. In Germany, for example, it looked as if the last coalition government would assign this task to the Federal Network Agency. However, that agency’s core task is actually to promote and maintain competition in so-called network markets, for example telecommunications and broadcasting networks. The Federal Commissioner for Data Protection, whose primary task is data protection regulation and the implementation of the General Data Protection Regulation, is also under discussion and would represent a different approach. “To me, AI seems to be less of a network infrastructure and more of a technology from the field of digital transformation that calls human autonomy, freedom and possibly democracy into question. So it is a matter of data protection,” says Spiecker.
Which authority is responsible for overseeing the implementation of the EU regulation will have an impact on how the regulation works in practice. According to Spiecker, questions of efficiency and effectiveness must be considered when making this decision: can the state use its available resources to ensure that the regulation is implemented? After all, a law is only as good as its enforcement.
Does heavy regulation turn companies away?
There is also a clash of interests when it comes to the regulation of AI. Heavy regulation is not in the interests of commercial enterprises because it restricts their room for manoeuvre, imposes documentation requirements and may result in procedural costs. The argument often presented at both the business and the political level is therefore that heavy regulation makes a country unattractive as a business location: companies move elsewhere, the economy is weakened and innovation falls behind.
But practice has already shown that regulation does not necessarily bring innovation to a standstill. Germany was an early adopter of relatively strict environmental legislation – which prompted the development of domestic filter technology to meet the legal requirements. Meanwhile, in China, home to more than 120 cities with over a million inhabitants, massive urbanization has increased the need for action on air pollution. “When it became clear that there was harmful smog in the air, people started looking for technologies on the global market,” says Spiecker. “And what they found was German technology.”
In this case, rather than hindering innovation, environmental legislation paved the way for technological development and exports. To some extent, ChatGPT even recognizes that implementing legal guardrails to guide the path of AI development can be beneficial. When asked whether artificial intelligence should be regulated, the software replies: “A balanced approach could help maximize the benefits of AI while managing potential negative impacts.”
ECONtribute: Markets & Public Policy is the only Cluster of Excellence in economics and related disciplines in Germany funded by the German Research Foundation (DFG). Its research focuses on markets at the intersection of economics, policy and society. More than seventy researchers from various disciplines strive to find answers to fundamental social and technological challenges such as digital transformation, global financial crises, economic inequality and political polarization.
Since 2019, ECONtribute has been jointly supported by the University of Bonn and the University of Cologne. All research activities are organized under the umbrella of the joint Reinhard Selten Institute (RSI).