Thursday, March 6, 2025

AI – Limits and Prospects of Artificial Intelligence


In AI – Limits and Prospects of Artificial Intelligence, editors Peter Klimczak and Christer Petersen compile research that explores AI’s technical constraints, societal impacts and ethical dilemmas. Broad and interdisciplinary in scope, this book makes a strong case for the necessity of understanding AI’s limitations as the pace of its advancement and adoption accelerates, writes Scott Timcke.

AI – Limits and Prospects of Artificial Intelligence. Peter Klimczak and Christer Petersen (eds.). transcript Verlag. 2023.


As Peter Klimczak and Christer Petersen note in AI – Limits and Prospects of Artificial Intelligence, “[n]o other topic in recent years has triggered such a storm of enthusiasm and simultaneously such a wave of uncertainty” (7). Their edited volume examines the tension between “a global revolution” (7) in business, manufacturing, and scientific domains, while assessing AI’s current limitations. The emphasis on limitations is especially welcome, given that unrestrained enthusiasm tends to carry the day.

One of the volume’s key contributions is its examination of technical constraints, ranging from engineering to jurisprudence. In their opening remarks, Klimczak and Petersen highlight how deep learning systems are hampered by the availability and quality of training data. More fundamentally, current AI systems remain dependent on human input for data labelling and symbolic processing. This human element introduces significant complications, as social biases and discriminatory practices can become embedded in AI systems through data selection and categorisation processes. Perhaps most concerningly, the “black box” nature of deep learning systems means that even creators cannot fully comprehend or explain decision-making processes in these systems, raising serious questions about accountability. 

Klimczak and Petersen set out two main objectives for the volume: first, to explicitly examine AI’s limitations, technical requirements, and associated problems; and second, to explore the social hopes, fears, and competing visions surrounding its development trajectories.


Turning to the chapters, Rainer Berkemer and Markus Grottke examine what we can conclude about AI’s capabilities from its achievements, using AlphaZero as a case study. They emphasise that we do not really understand why or how AI systems learn and make decisions, which raises questions about responsibility and liability. “What happens when an artificial intelligence is trained – in an industrial context – to maximise profit but not to maximise the adequacy of the products with respect to the customer’s safety, needs or health?” they ask. “Anyone familiar with today’s business world is also aware that such things could happen” (36).  

The concern is that AI’s harmful decisions may be impossible to distinguish from random chance, making accountability extremely difficult. Moreover, they contend that if we cannot effectively audit AI systems’ decision-making processes, we cannot properly assess the responsibility of humans who act on these AI-generated recommendations. They counsel us to “stay very cautious in our judgements” (37). 

In their chapter, Carsten Hartmann and Lorenz Richter argue that treating AI/deep learning models as inscrutable ‘black boxes’ is dangerous, and we need to develop better mathematical understanding to make these systems more robust and reliable. They advocate for using Bayesian probability theory to explain deep learning in a statistical sense, rather than abandoning the goal of explainability entirely. “A comprehensive theory should guide us towards coping with the potential drawbacks of neural networks,” although Hartmann and Richter are reluctant to “[go] beyond the mathematical framework and [explore] the epistemological implications of this framework.” They explain that the “epistemology of machine learning algorithms is a relatively new and dynamic field”, intimating that discussions about implications are premature (45-46). 

These two opening chapters offer much, although I do not fully share their conclusions about caution and premature judgements. Much of my ambivalence is captured in the spirit of Peter Klimczak’s chapter, which discusses the jurisprudence of “accident algorithms”. Also known as algorithm-driven collision avoidance systems, these technical systems can make pragmatic decisions faster than a human could.  

Klimczak’s case study is automated vehicles operating in mixed traffic with cars, cyclists and pedestrians. Accidents will happen, bringing forth court rulings that involve legal and ethical epistemology. These rulings will likely grapple with questions of algorithmic decision-making and moral responsibility: whether an automated vehicle’s split-second choice to swerve toward one type of road user rather than another reflects appropriate programming, and how to assign liability when the vehicle’s decisions result in harm.


This chapter is paired productively with the final chapter by Ulrich Schade, Albert Pritzkau, Daniel Claeser, and Steffen Winandy who focus on the mathematical and technical aspects of generating cybersecurity adversarial attacks, as well as the linguistic and human information processing factors that contribute to their success. Whether accidental or adversarial, questions of liability can and will emerge soon. “Although technical development can continue in isolation from such questions, any failure to answer these ethical and legal questions will inhibit operation and sales”, Klimczak writes (86). 

The chapters by Ivan Kraljevski, Constanze Tschöpe, and Matthias Wolff as well as Elise Özalp, Katrin Hartwig, and Christian Reuter offer much to non-experts, providing a lexicon and outlining the major stakes in explainable AI and small data models respectively. Kraljevski, Tschöpe, and Wolff’s chapter may be of greatest interest to those based in less-resourced environments who may be inundated with stark headlines about the prohibitive cost of designing AI. Despite the era of big data, small data approaches are becoming more important for AI research, as they allow smaller organisations to create AI solutions without massive infrastructure requirements. Small data is typically data that can be processed by a single machine or person. If correct, this finding would be welcome news for many.

In their respective chapters Isabel Kusche and Kati Nowack focus on some ethical dimensions of AI. Kusche argues that while AI promises to revolutionise risk prediction by moving from statistical group-level analysis to individualised predictions, this shift faces practical limitations and raises new risks that need to be considered before implementation. Indeed, individuation of this kind may break the social solidarity principle behind insurance and other burden sharing initiatives, where the whole point is about the ethics of collective action. 


Although Nowack’s chapter reports on a small study (a sample of 64 adult German speakers), the findings point to ethical priorities: users care most about participation and care, less about privacy, and show concerning gaps between what they say is important and which violations actually worry them. Some ethical challenges only arise once we know of them, a concern given that even developers do not know the details of the outputs of the AI models they are creating.

Anthropological analysis of AI-shaped lifeworlds permeates the book, although it is best executed by Petersen and Stefan Rieger. Petersen’s chapter discusses (western) cultural depictions of robots resembling women and the male hierarchies that grant them the status of personhood: “for biological as well as for artificial women everything depends on the extent to which ‘man’ attributes to them not only consciousness but above all also sentience” (200). Conversely, Rieger suggests that AI has “become a warning sign for a modern era that, in a certain way, no longer seems to need the human at all” (245).

What makes this work particularly valuable is its intentionally interdisciplinary approach, bringing together perspectives from computer scientists, engineers, mathematicians, media studies, and social scientists. As the AI race accelerates, this kind of rich, multidisciplinary analysis becomes increasingly essential for understanding the promise and pitfalls of AI products. 


Note: This review gives the views of the author, not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.

Main image: Summit Art Creations on Shutterstock.


