For users, the decisions and underlying values of language-based AIs are incomprehensible. Creating transparency seems impossible, as the underlying algorithms are a ‘black box’. Although there are AI models that supposedly explain themselves – so-called explainable AI – this approach is also unreliable. It is impossible to determine whether the explanation an AI produces is true, or whether the system is merely programmed to make an answer sound plausible and simulate transparency. Such ‘explanations’ are always generated after the fact, based on a decision that has already been made, and therefore offer no true transparency. This is where Spiecker’s research comes in. “Where we are no longer able to control exactly what happens, we need regulation,” says the legal scholar, who is a member of the ECONtribute Cluster of Excellence.
In the Cluster of Excellence, scientists jointly research markets at the intersection of business, policy and society. As generative, language-based models have increasingly become part of our everyday lives, the field of artificial intelligence has become extremely dynamic. This makes it difficult to maintain an overview, to predict possible developments and assess their consequences – and, in turn, for us as a society to decide what we actually want and can allow. That creates a perfect scenario for the researchers at ECONtribute: they advise policymakers on how to use their opportunities to contribute to efficient and socially acceptable market developments. “Regulation helps to sharpen our focus and assess where a technology is actually heading,” says Spiecker.
Good rules that are hard to enforce
According to the Federal Statistical Office, half of all large companies in Germany now use AI. Among medium-sized companies with 50 to 249 employees, the figure is still more than a quarter. Expectations fluctuate between promises of salvation and dystopia. The European Parliament, for example, writes that Europe’s prosperity and economic growth depend largely on how the new technologies are used. Artificial intelligence has “the potential to fundamentally change our lives, both for the better and for the worse”.
Together with philosopher Judith Simon and computer scientist Ulrike von Luxburg, Spiecker has written a discussion paper on the social and ethical challenges posed by generative AI for the German National Academy of Sciences Leopoldina, which provides independent advice on important political and societal topics for the future. In this paper, Spiecker and her colleagues also take a critical look at the European Union’s Artificial Intelligence Act (EU AI Act) – the world’s first law on the use of artificial intelligence.
The regulation was adopted by all EU member states and came into force on 1 August 2024. Among other things, it stipulates certain transparency and documentation requirements. The AI Act aims to create a standardized legal framework in the EU and classifies AI applications into different risk categories, from minimal to high risk. Particularly problematic areas of application are banned outright. The EU is putting a stop to social scoring, for example. Social scoring is a system that collects data on individuals and uses artificial intelligence to reward desirable behaviour or sanction undesirable behaviour on the basis of a point system. The resulting score then determines access to infrastructure and resources – for example, in the allocation of university or kindergarten places, the authorization of entry and exit, or, at an economic level, the granting of loans.