Artificial Intelligence: Two books to get a (good) idea

Artificial intelligence affects us all. It offers researchers unimagined opportunities, especially in biology and medicine. It offers companies development prospects increasingly compared to those opened up by the invention of the steam engine. Above all, it offers society the chance to benefit from all these advances universally, directly, and on an unprecedented scale. The Italian university system is rapidly gearing up to meet the sudden surge in professional demand in this field and to sustain the medium-term technical and scientific development of the sector. Thus, in just a few years, multidisciplinary and internationally oriented degree courses in artificial intelligence have opened, while the application deadline for the first year of the national doctorate in artificial intelligence expired only a few days ago.

The major social impact of these opportunities, and the dangers associated with them, have increasingly polarized the public debate about artificial intelligence, which often swings between uncritical positions: on the one hand, excessive enthusiasm for a technology that promises to solve all scientific, social, and human problems in one fell swoop; on the other, an equally pronounced Luddite defeatism. It goes without saying that both the real opportunities for progress and a proper understanding of the risks lie somewhere in between. But what lies in between is not always easy to grasp.

Fortunately, two recently published essays are available to anyone who wants to move beyond the polarization: Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (Pelican, 2020, 448 pages) and The Road to Conscious Machines: The Story of AI by Michael Wooldridge (Pelican, 2021, 416 pages).

I am not recommending both because they embody two antagonistic factions; far from it. These are essays carefully written by authoritative authors, each approaching the subject from a markedly different angle. In both cases the editorial work is valuable and, for anyone who reads attentively, opens up the heart of the matter: the state of the art with respect to the scientific and economic opportunities, and the social dangers, associated with artificial intelligence.

A plurality of perspectives is a fundamental condition for understanding artificial intelligence, which, fortunately, as the authors show, defies any attempt at definition. Indeed, both take care to recall that the expression artificial intelligence was considered unsatisfactory even by its proponent, the mathematician (by training) John McCarthy, who commented: "We had to give it a name". Those dissatisfied also included Claude Shannon, the best-known figure among the co-organizers of the 1956 Dartmouth Summer School, which is considered the official birth certificate of artificial intelligence – although, as Wooldridge recalls, the article Computing Machinery and Intelligence, published in 1950 by Alan Mathison Turing, is undoubtedly the first programmatic reflection on the subject of machine intelligence.

Wooldridge’s angle is explicitly historical. The author comes from the logical tradition of artificial intelligence, which he recounts with an interesting mixture of self-criticism and nostalgia for the long winter. Understandable for someone who began his career in a research field where it quickly became clear that the expectations fueled by the exaggerations of its scientific patrons – and of politicians, who at various times and for various reasons proved susceptible to this excess of optimism – were unattainable. But, as in every field, inflated expectations are the surest path to failure.

The history of artificial intelligence (AI) has indeed been punctuated by long winters, followed by periods of sudden resurgence of interest (and funding) driven by often unexpected successes – the springs of AI. The most recent of these began a few years ago, when two sets of conditions converged: the (nearly) universal spread of the Internet and the availability of graphics processors capable of handling unprecedented amounts of data at unprecedented speed. In this context, an idea that by the end of the twentieth century many regarded as patently sterile ushered in the current season, in which many promises have been rapidly fulfilled. The idea in question is that of neural networks, the leading tool of machine learning technology. It is no surprise, then, that both volumes devote ample space to illustrating it, nor that these are the chapters that demand the greatest effort from the reader – an effort that in both cases is richly rewarded.

An important difference between the two volumes emerges in the comparison between artificial intelligence and human intelligence. As the subtitle suggests, this is the angle chosen by Melanie Mitchell, a leading exponent of the cognitivist approach to artificial intelligence, which takes seriously the goal of building computer systems capable of ascribing meanings, comparable to human ones, to relevant aspects of the world. This places at the center of the discussion the (lack of) understanding that characterizes even the most extraordinary achievements of machine learning – achievements unthinkable only a few years ago. Mitchell devotes impassioned pages to illustrating why current techniques do not bring us close to solving the problem of "meaning": from linguistic meaning to human-level understanding of images.

Both volumes help capture a decidedly subtle point: the achievements of the new springtime of artificial intelligence, however surprising, are partial, in the sense of being limited to specific problems. Viewed as a scientific discipline, this is nothing new for artificial intelligence – scientific solutions are often local and improvable. Things change radically when we consider the applications of these "imperfect" systems outside research, of which algorithms used in medicine and self-driving cars are two particularly relevant examples.

Wooldridge devotes a very interesting reflection to the first medical applications, starting from a quip by Geoffrey Hinton, one of the key figures behind the neural network technology (including convolutional neural networks) that has delivered excellent results in image classification. During a 2016 talk at a hospital, Hinton compared radiologists to Wile E. Coyote, already over the precipice but not yet having looked down. To this popular image he added a sentence that still causes a stir today: "We should stop training radiologists". The provocation is clear: machine learning systems already guarantee performance superior to that of humans, so why persist with a suboptimal line of training?

The reason, well illustrated by Wooldridge in this specific case and more generally by Mitchell, is the fragility of the technology in the face of so-called adversarial attacks. In a hypothetical scenario in which diagnostic imaging were entrusted to machine learning algorithms, there would be an incentive to strategically alter images in ways that mislead the system, perhaps to the benefit of private health insurance companies or public providers.

Similar problems arise with autonomous vehicles: it is all too easy to tamper with traffic signs in ways that lead automatic navigation systems into potentially fatal errors. A pattern thus emerges: if algorithmic image recognition is one of the reasons artificial intelligence is enjoying a new and economically very prosperous spring, the application of this technology in the real world reveals limits and dangers that we still have no way of quantifying.
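To give a concrete sense of the fragility discussed in both books, here is a minimal sketch of the idea behind an adversarial attack. It uses a hypothetical toy linear "classifier" (not any model from the books): because the model's sensitivity to each input pixel is known, an attacker can nudge every pixel by an imperceptible amount in the most damaging direction and flip the prediction.

```python
import numpy as np

# Hypothetical toy linear "image classifier": score > 0 means "stop sign".
rng = np.random.default_rng(0)
w = rng.normal(size=64)           # the model's learned weights (one per pixel)
x = w / np.linalg.norm(w) * 0.1   # an input the model classifies correctly

def predict(img):
    return "stop sign" if w @ img > 0 else "speed limit"

# Fast-gradient-style perturbation: move each pixel a tiny step eps
# against the score's gradient (for a linear model, the gradient is w).
eps = 0.05                        # per-pixel budget: visually imperceptible
x_adv = x - eps * np.sign(w)

print(predict(x))      # correct classification of the original image
print(predict(x_adv))  # the same "image", minimally altered, is misread
```

The point the books make is precisely this asymmetry: the defender must be robust everywhere, while the attacker only needs one direction in which tiny, targeted changes accumulate into a large change in the model's output.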

To quantify them, it is important to avoid the Frankenstein effect. Wooldridge tackles, with no particular understatement, the question of the so-called singularity – the emergence of a superintelligence that would quickly destroy humanity, a theme much favored in certain media channels. The section devoted to the topic has a title that leaves no room for doubt: "The Singularity is Bullshit". Rather than the threat of superintelligence, Mitchell is concerned with phenomena that are already widespread, such as deepfakes, which spread false content – practically indistinguishable from the real thing – very quickly and worldwide. In this sense, artificial intelligence does not pose radically new ethical problems, but it does pose new technological challenges to ethical principles and, even more directly, to legal ones. Yet once again we are dealing with a technology that is still very poorly understood. The problem, then, is first to arrive at a scientific understanding of the instrument, which would in turn allow us to regulate it.

It would be desirable to see Italian translations of these two books in bookshops soon, so that their work of demystifying artificial intelligence can reach the widest possible audience.
