ISSUE 1 ⎯ FEBRUARY 15, 2020
CREATIVE MACHINES

____________________________

 

All about emerging artists exploring the creative power of AI
and the most prominent personalities in the field of AI and generative art. 

 

 

WHAT IF WE SUCCEED?

by Anne-Marie Boisvert

Review of Human Compatible: Artificial Intelligence and the Problem of Control, by Stuart Russell (Viking, 2019).

 

 

Stuart Russell is a computer scientist known for his contributions to artificial intelligence. He is a Professor of Computer Science at the University of California, Berkeley, Adjunct Professor of Neurological Surgery at the University of California, San Francisco, and holds the Smith-Zadeh Chair in Engineering at Berkeley. He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley, which aims to develop advanced safety methods in artificial intelligence. Russell is the co-author of the most popular textbook in the field, Artificial Intelligence: A Modern Approach, used in more than 1,400 universities in 128 countries. (Source: Wikipedia)

 


 

The threat posed to the future of humanity by the advent of a hypothetical “singularity” in artificial intelligence – that is, the advent of “superintelligent” systems escaping human control – has often made headlines in recent years.

 

So-called “narrow” or “weak” artificial intelligence is the only form of artificial intelligence that exists today. Such systems perform certain precise tasks extremely well in a very limited context. A system designed to recognize images, for example, cannot also make purchase suggestions. So-called “general” or “strong” artificial intelligence is the type of artificial intelligence that can understand its environment, reason and accomplish multiple tasks as a human would. Experts say we are probably still far from it. But once that threshold is reached, the threat would be that the growth in machine intelligence becomes exponential, with ever more intelligent systems giving “birth” to still more intelligent systems, escaping all human control. “What if we succeed?” asks Stuart Russell.

 

Respected scientists, including Stephen Hawking, as well as artificial intelligence experts, have taken this threat seriously enough to sound the alarm in the public sphere. In January 2015 they published an open letter, Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter, calling on researchers to address the societal impacts of artificial intelligence. Other artificial intelligence experts have downplayed or outright derided these fears. Yann LeCun, one of the inventors of deep learning, co-signed with Anthony Zador an article entitled “Don’t Fear the Terminator” (Scientific American Blog Network, September 27, 2019).

 

Terminator (T-800) (Creative Commons)

 

The title of this article is of course a reference to the way in which artificial intelligence is understood by the general public. In science fiction scenarios, robots or advanced computers that have become superintelligent gain consciousness and then turn against their human creators in order to enslave their former masters. But, as Stuart Russell points out, this view is wrong:

 

“Suppose I give you a program and ask, “Does this present a threat to humanity?” You analyze the code and indeed, when run, the code will form and carry out a plan whose result will be the destruction of the human race, just as a chess program will form and carry out a plan whose result will be the defeat of any human who faces it. Now suppose I tell you that the code, when run, also creates a form of machine consciousness. Will that change your prediction? Not at all. It makes absolutely no difference. Your prediction about its behavior is exactly the same, because the prediction is based on the code. All those Hollywood plots about machines mysteriously becoming conscious and hating humans are really missing the point: it’s competence, not consciousness, that matters.” (pp. 16-17).

 

Stuart Russell is an expert in computer science and artificial intelligence. In 1995, he co-wrote with Peter Norvig Artificial Intelligence: A Modern Approach, which has become one of the classic works on the subject and has been republished many times. His opinion is therefore more difficult to dismiss. In his book, Russell identifies what he considers to be the major problem with the way artificial intelligence systems have been and are currently being developed, and proposes a solution.

 

The problem is what he calls the “standard model” in the creation of such systems, based on a “standard” conception of human intelligence. Just as humans can be considered intelligent to the extent that their actions serve to achieve their objectives, machines can be considered intelligent to the extent that their actions serve to achieve their objectives. But because machines, unlike humans, do not have objectives of their own, we are the ones who assign them: we build machines whose aim is to optimize the objectives we have given them. The problem is that the assigned objectives may be too narrow. This is what Russell calls the “Midas problem”: just like King Midas in mythology, who wished for the power to turn everything he touched into gold and who, after seeing his wish fulfilled, died of hunger and thirst, it is possible (and often happens) that goals which seemed desirable in the short term prove disastrous in the long term. Yet a machine designed according to the standard model will blindly pursue any objective assigned to it; and the more competent (“intelligent”) it is, the more effectively it will achieve it, in whatever way it deems most efficient, regardless of the consequences. As a hypothetical example, Russell imagines a system tasked with finding a solution to the accumulation of CO2 in the atmosphere. The most effective solution it could choose might be to eradicate the human race that caused the problem.
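
To make the point concrete, here is a minimal toy sketch in Python. It is my own illustration, not code from Russell’s book: a “standard model” agent is handed a fixed objective and simply maximizes it, blind to everything the objective does not mention. The action names and scores are invented for the example.

# Toy sketch of the "standard model": the agent maximizes the objective it was
# given (here, CO2 reduction) and nothing else. Actions and scores are invented.

ACTIONS = {
    # action: (CO2 reduction achieved, side effect)
    "plant_forests":      (0.3, "none"),
    "capture_carbon":     (0.5, "none"),
    "eradicate_humanity": (1.0, "catastrophic"),
}

def standard_model_agent(actions):
    """Pick the action that best achieves the assigned objective,
    regardless of its consequences."""
    return max(actions, key=lambda a: actions[a][0])

print(standard_model_agent(ACTIONS))
# -> "eradicate_humanity": the objective says nothing about side effects,
#    so the most "effective" plan wins.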

 

Russell’s solution is to replace this standard model with one that leads to the creation of machines that are “beneficial” to humans, that is, machines whose actions serve to achieve our goals. In chapter seven of his book (p. 173), Russell proposes three principles for the design of such machines:

 

  1. The machine’s only objective is to maximize the realization of human preferences. (Note that the term “preference” is understood here in the sense it has in decision theory, that is, an attitude, favorable or unfavorable, that a person may hold toward any given thing, idea, person or practice.)
  2. The machine is initially uncertain about what those preferences are.
  3. The ultimate source of information about human preferences is human behavior.

 

Thus, the machine will aim to achieve not a narrow objective fixed in advance, but the necessarily broader and vaguer objective of “maximizing the realization of human preferences”. Rather than being handed a fixed goal, it will observe human behavior to learn what those preferences may be. And when in doubt, it will refrain from acting and seek to gather more information.
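
The following Python sketch, again my own illustration rather than anything taken from the book, gives a toy version of these three principles: the machine starts out uncertain about which of a few candidate preference profiles the human actually holds, updates its belief as it observes the human’s choices, and defers to the human while its belief is still too spread out to act safely. All profile names, actions and thresholds are invented for the example.

# Toy sketch of the three principles (hypothetical names and numbers).
# The machine holds a belief over candidate human preference profiles,
# updates it from observed human choices, and defers when uncertain.

CANDIDATE_PREFERENCES = {
    # profile: utility the human would assign to each action
    "prefers_speed":  {"fast_risky": 1.0,  "slow_safe": 0.2, "do_nothing": 0.0},
    "prefers_safety": {"fast_risky": -1.0, "slow_safe": 0.8, "do_nothing": 0.0},
}

def update_belief(belief, observed_choice, alternatives):
    """Principle 3: human behavior is the source of information.
    Profiles under which the observed choice was optimal gain weight."""
    new_belief = {}
    for profile, utils in CANDIDATE_PREFERENCES.items():
        best = max(utils[a] for a in alternatives)
        likelihood = 0.9 if utils[observed_choice] == best else 0.1
        new_belief[profile] = belief[profile] * likelihood
    total = sum(new_belief.values())
    return {p: w / total for p, w in new_belief.items()}

def choose_action(belief, actions, confidence_threshold=0.8):
    """Principle 1: maximize the realization of human preferences.
    Principle 2: if no profile is probable enough, defer to the human."""
    top_profile, top_weight = max(belief.items(), key=lambda kv: kv[1])
    if top_weight < confidence_threshold:
        return "defer_to_human"
    utils = CANDIDATE_PREFERENCES[top_profile]
    return max(actions, key=lambda a: utils[a])

# Usage: start with a uniform belief, watch the human choose "slow_safe" twice,
# then act.
actions = ["fast_risky", "slow_safe", "do_nothing"]
belief = {p: 1.0 / len(CANDIDATE_PREFERENCES) for p in CANDIDATE_PREFERENCES}

print(choose_action(belief, actions))   # -> "defer_to_human" (still too uncertain)
belief = update_belief(belief, "slow_safe", actions)
belief = update_belief(belief, "slow_safe", actions)
print(choose_action(belief, actions))   # -> "slow_safe"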

 

Russell’s book has the merit of shedding light on the all-too-passionate and narrowly partisan debate surrounding the development of artificial intelligence. Avoiding technical jargon, it is fascinating to read and easily accessible to non-specialists.

 

 

N.B. On the concept of “superintelligence”, see also Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014.

Video playlist: https://www.youtube.com/embed?listType=playlist&list=PLW1MsxpG6Rx3lAARV63_HqSZ2Qer29faG

WHAT DO MACHINES WANT?

by Anne-Marie Boisvert

The devil, commenting on the first man’s first drawing on Earth, whispered into his ear: “That’s good, but… is it art?” Throughout the twentieth century, many artists have taken on the task of questioning and constantly stretching the limits of what art is, or could be…

Read more

NEWS

Publication: Ollivier DYENS, La terreur et le sublime : humaniser l’intelligence artificielle pour construire un nouveau monde.

 

Event: The AI Art Lab, an international multidisciplinary laboratory using artificial intelligence (AI) in the creation of audiovisual works.

Read more