ISSUE 3 ⎯ APRIL 15, 2020
THE NEW AI ART SCENE

____________________________

 

All about emerging artists exploring the creative power of AI
and the most prominent personalities in the field of AI and generative art.

 

 

INTERVIEW WITH OLLIVIER DYENS

by Anne-Marie Boisvert

Ollivier Dyens is a professor in the French Literature Department at McGill University. He is the author of numerous publications on new technologies, including Virus, parasites et ordinateurs : le troisième hémisphère du cerveau (PUM, 2014), La condition inhumaine. Essai sur l’effroi technologique (Flammarion, 2008) and Chair et métal, l’évolution de l’homme : la technologie prend le relais (VLB, 2000; English edition: MIT Press, 2001). La terreur et le sublime : humaniser l’intelligence artificielle pour construire un nouveau monde (XYZ, 2019) is his most recent essay.

 

 

Anne-Marie Boisvert (A.-M. B.) Can you tell us about your background? How did a professor of literature come to be interested in new technologies?

 

Ollivier Dyens (O. D.) A very particular moment marks my journey. During my master’s degree in cinema in the early 90s, I had the chance to hear a researcher talk to us about virtual reality. If this technology fascinates us today, you can imagine the surprise, the shock and the fascination it sparked in me in 1991, when the World Wide Web had not yet emerged (although HTML and HTTP had been created in 1989 by Tim Berners-Lee, it was not until 1994 that a browser, Netscape, allowed us to enjoy it). This new “reality” offered the hope of a new world to explore, understand and create. I then changed the subject of my thesis, writing instead on the emerging impact of digital images on cinema, and launched myself headlong into a doctorate on cyberspace. But since the field was still only embryonic, it was the department of comparative literature that proved open to the exploration of an astonishing subject that sometimes seemed to verge on science fiction.

 

Since then, I haven’t stopped studying the impact of technologies on representation. What does it mean to be human when we can no longer clearly distinguish the boundaries that separate us from the machine, whether at the philosophical, political, social or artistic level? What does it mean to be human when programming reveals to us that at the base of our most sensitive and emotionally charged activities (art, human relations, social behavior and attitudes, gender identity) there seems to be nothing but predictable and reproducible patterns?

 

I pursue this research through essays, like this book, but also through research-creation that raises the question of poetry manipulated, filtered and interpreted by technologies, and by this famous virtual reality that an individual like me can now create (an example here: https://www.ollivier-dyens.com/gallery). How do we make literature on, by and with digital platforms, software and artificial intelligence without simply “sticking” a text on a website, or animating it with a few links, images or sounds? How do we translate the word, the language, the code into a visual imagination? How do we create an imaginary (digital) territory on the basis of linguistic structures (human and computational)? How do we represent the word, the verb, the text, not only in images, but also through a space? How, in fact, do we find a language proper not to multimedia works but rather to a new poetry that expresses and crystallizes itself in immersive virtual reality environments? Does the poetic experience in virtual reality allow us to better understand the human/machine entanglement in which we live? Does it offer us clues of meaning about the fundamental challenges this world poses to us (such as the programming of ethics; total control and total knowledge; the autonomy of artificial intelligence; the burying of prejudices in databases; the transformation of socialization, etc.)?

 

(A.-M. B.) Your new work is called La terreur et le sublime. What is the genesis of this book, and what is its purpose? And what do the terms “terror” and “sublime” mean to you?

 

(O. D.) This book was born from a quote from William Gibson, the great science fiction author of the 80s and 90s whose novel Neuromancer shaped the culture and above all the aesthetics of the word “cyberspace”, which he coined in 1984. In a fascinating documentary about this author, No Maps for These Territories, Gibson makes the following observation about our current society, drowned in a technological world:

 

“We feel both what we have lost and what we are gaining; it is a contradictory feeling of melancholy and excitement, an amazing feeling of mourning and Christmas morning at the same time.

 

In fact, I think we all have these dizzying, awfully exciting and very scary moments, when we fully grasp the extraordinary scale of the contemporary, and I think they provoke in us an emotional reaction of terror and of the sublime, all at once and at the same time.”

 

These words struck me, marked me and fascinated me, a little like that first discovery of virtual reality in the 90s. They made me want to explore this world in which I have been navigating for years in search of a new meaning, that of the human-machine, that of the human machine, and to undertake this research through the lens of the contradictory feelings of which Gibson speaks: the feeling of mourning and Christmas morning, of terror and the sublime. Because these few words sum up, I felt it strongly, our strange relationship with machines and technologies, whose intensity has not stopped increasing for a century and a half. Technologies protect us from disease, increase our life expectancy, give us access to extraordinary information and knowledge, allow us to better manage our energy expenditure, make us see the immensely large and the immensely small, and force us to rethink our models of the world. They also allow us to be extraordinarily creative. But they also distort us, forcing us to see, feel and experience the world through their filters, their models, their linguistic structures (code). They offer us a new and magnificent world, but they distance us from our atavisms, from our ancestral and immemorial needs, marked by a body that has hardly changed in 200,000 years. This is what I call “the distance between the worlds” in the book: the technological world which calls to us and enchants us, and the biological or evolutionary world which constantly calls us back. Like Gibson, I believe these tensions provoke in us contradictory feelings of terror and the sublime. The aim of the book is to explore the space between these two territories.

 

(A.-M. B.) The body of your book opens and closes on the famous duel at the game of go that took place in 2016 between AlphaGo, the artificial intelligence designed by Google, and the South Korean world champion Lee Sedol. This duel, which saw the human champion defeated by the machine, as well as his unexpected victory in the fourth game, made a great impact in the press. Can you briefly recount the history of this duel for the benefit of our readers, and tell us why you consider this event significant?

 

AlphaGo’s divine move

 

(O. D.) The story is quite long. I would refer your readers to a short video I produced about this event, which shows in detail why the victory of the artificial intelligence AlphaGo over world champion Lee Sedol was a dramatic event that upset our understanding of the limits of artificial intelligence.

 

(A.-M. B.) Faced with the rise of an artificial intelligence that is more and more present in our societies, and in order to “make this world inundated with technologies more human and more equitable”, you propose as a main avenue of solution “an unprecedented alliance between our intelligence and that of machines which would open the door not only to a humanization of artificial intelligence but also to an amplification of our cognition” (back cover). How, more concretely, do you envisage this “fusion”, which you call “Human and Artificial Intelligence” (IHA in French, HAI in English)?

 

(O. D.) We have to stop seeing technology as something placed there, at arm’s length, on a table. Technology is not just a series of sometimes beneficial, sometimes dangerous objects and machines. Technology co-evolves with humans; its propagation is inseparable from ours. And this co-evolution is rapid and dramatic. The World Wide Web is just over 25 years old, and already it has forced its way into every facet of our lives, to the point that the wars of the future, and in some cases those of the present, as Russia’s attack on Estonia in 2007 showed us, will be played out over the ability of countries to attack and protect this network.

 

We live, interact, socialize and flirt almost exclusively through the technologies, networks and machines that surround us. To such an extent that I named this phenomenon ‘the third hemisphere of the brain’ in my previous book Virus, parasites et ordinateurs (http://www.numoursparites-pum.ca/virusparasitesetordinateurs). We think, reflect, analyze and judge entangled with our technologies, the data they provide, the analytical tools they offer and the language structures (code) they invite us to use. And this phenomenon continues to grow with the revolution that artificial intelligence offers us. How is it different from other technologies? Artificial intelligence offers us the deciphering, reading and understanding of literally monstrous sums of data, and allows us to discover, in the countless interactions of living things, of all living things, and of all the activities that take place on the planet, patterns that reveal unsuspected structures. It thus allows us to explore amazing areas of the space of possibilities, areas that are inaccessible to us without the help of AI. Not only is AI an essential element of this third hemisphere of our brain, it is an accelerator, an amplifier of it. Researcher Janelle Shane demonstrates this when she asks an AI to find a way to cover a distance and, instead of exploring how to create legs, arms, tentacles or wings and recreate walking, running or flying, it gives her a tower that falls over. Shane’s AI, like all AIs, reveals unique prisms of analysis and understanding. Thanks to these, we can explore absolutely extraordinary facets of reality.

 

But these programs do not have the intrinsic motivations that are ours. They cannot grasp uncertainty, which is moreover one of their major weaknesses; they are not able to function effectively in gray zones; and they do not understand the indefinable (what we call the sublime, the touching, the sensitive, compassion). And this is where we come in.

My suggestion is that we stop seeing AI, or technologies, as foreign, and that we accept, use and celebrate their deep entanglement with us. The whole would then be greater than the sum of its parts. We would gain amazingly powerful analytical skills, understandings and perceptions that we could treat with sensitivity, emotion and compassion. This is what HAI is.

When Kasparov lost to Deep Blue at chess in 1996, many predicted the end of this age-old game. A few years later, Kasparov created a new kind of chess, in which teams of humans and computer programs play against one another. The result is amazing because, according to experts, these matches are beautiful. The quality of play is higher, the “noise” of human errors is reduced, and everything gives way to a pure game. Kasparov calls these teams “centaurs”. This is another way of defining what I call HAI.

 

(A.-M. B.) In your work, you very often use the term “ontology”. Can you clarify what you mean by this term, and the impact of the advent of HAI on said ontology?

 

(O. D.) What is being? What is being in general and in singular terms? In abstract and essential terms? This is what technology, and more specifically AI, forces us to consider and reconsider. Never before have we had to think and rethink the human model to this extent, in all its dimensions, in all its physical, biological but also spiritual scope. What is a human being when its most fundamental, most specific characteristics (language, intelligence, reflection, the capacity for self-reference) are also present, sometimes in an embryonic way, but present nonetheless, in these immense sums of code that we call artificial intelligence? What is a human being when its ability to transform the violent and hurtful materials of the surrounding world into the moving and magnificent form we call art is also revealed in these programs and codes? What is a human being, how is it unique, specific, present in the world, when this presence is only possible, only manifests itself, through filters and through machinic, technological and computational spaces? We must deeply rethink our universals and our philosophical foundations, because there is no human being anymore, and there never will be again, without HAI, without the third hemisphere that machines constitute. How can we even think about the human while knowing that this very act of thinking is partly that of computer programs? How can we analyze the human and try to extract from it a primary, unique, fundamental essence when the very act of turning toward it, of looking at it, of analyzing it, is only possible through the filter of technologies? HAI undoes and remakes us; it amplifies and distorts us, in the literal sense of the term. It allows us to see what this 200,000-year-old body is not made to see, experience, understand and grasp. This is a magnificent and frightening phenomenon, full of terror and the sublime…

 

(A.-M. B.) The notion of “singularity” that you retain is that of Ray Kurzweil, taken from his book The Singularity Is Near: When Humans Transcend Biology, published in 2005. I quote: “According to Raymond Kurzweil, this technological singularity corresponds to the precise point in time and space where the permeability between humans and machines becomes so great and so finely woven that the symbiosis between the two becomes inevitable” (p. 18).

 

However, this notion has a much older origin, as Kurzweil himself points out in his book (p. 31). The description of the phenomenon known as the “technological singularity” most commonly used in the computer science and philosophy literature is that of the statistician I. J. Good, in 1965:

 

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (in F. Alt & M. Rubinoff, eds., Advances in Computers, vol. 6, Academic Press, 1965)

 

And it was the American science fiction author Vernor Vinge who gave this phenomenon the name “singularity” in an essay published in 1993, entitled The Coming Technological Singularity: How to Survive in the Post-Human Era (Whole Earth Review, 1993). Closer to us, the book by philosopher Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, published in 2014, which focused on this darker conception of the singularity, had a great impact. Several public figures have expressed their apprehension at this threat, notably Stephen Hawking, Elon Musk and Bill Gates. Why did you choose to use only Kurzweil’s more controversial definition? Would you describe yourself as a transhumanist?

 

(O. D.) I don’t define myself as a transhumanist, quite the contrary. Transhumanists regard death as a disease that can be “combated” and from which the human species may one day be “freed”. I don’t subscribe to this way of thinking or to this conception of the world at all. I consider our relationship with technology to be historic, in the sense that there is no humanity without technology (from flint to the cathedral to the computer, by way of the alphabet, printing and today electricity, Homo sapiens exists only entangled with technology). But neither is there humanity without the inevitable forces of evolution and physics. As Geoffrey West emphasizes very well (https://www.santafe.edu/news-center/news/geoffrey-wests-long-anticipated-book-scale-emerges), it is very difficult for us to transcend and exceed the physical limits (in the sense of the laws of physics) of our condition and that of the world. We push these boundaries through inventions, inventiveness and innovation, but we cannot exceed them. And in this sense, death is not a disease; it is only the normal end of a process (that of the constant fight against entropy by the negentropy that the bodies of living beings are capable of producing, but only for a limited time). Technology certainly changes our relationship to the time allotted to us, as we can see with the meteoric increase in life expectancy, but like any other phenomenon on this planet, its impact is also subject to the laws of physics. As for Kurzweil’s definition, its influence on our conception of technology since its appearance in 2000 has been important. Moreover, if it sometimes seems exaggerated to me, in keeping with Kurzweil’s provocative personality, it is also based on undeniable data. I don’t subscribe to Kurzweil’s ultimate vision, which is a little too teleological for my taste, but several of his predictions (such as the meteoric increase in the number of innovations), even if they do not turn out to be absolutely correct, nevertheless point in the right direction.

 

(A.-M. B.) You seem quite critical of the Montreal Declaration, the purpose of which was to promote “responsible” AI. Why is that?

 

(O. D.) As I mention in the book, I consider the Montreal Declaration to be extremely important, and I also highlight the wonderful work done by its signatories. I am not critical of the effort, the seriousness or the desire not to lose control of this powerful technology; I am critical of the starting point (this technology is dangerous), of the examples given (which highlight only negative scenarios), and of the disproportionate importance given to what could be called ‘the human brake’. As I emphasize in the book, technology changes human culture less than it amplifies, multiplies and reveals it. To focus on the negative potential of this technology is to forget that it is only the product of our human culture, our behaviors and our excesses. Artificial intelligence is powerful and will be even more so tomorrow. But this was also the case with the computer in its infancy. Between ENIAC, which was designed to calculate projectile trajectories during the Second World War, and a smartphone, the technological progress is breathtaking. What has this technological progress of the computer brought us? A number of challenges and problems, of course, but also a number of extraordinary transformations that have improved our lives. Of course, computers are used to better guide missiles, but they are also used to better pilot planes, to give us access to the world’s knowledge from the comfort of our homes, to detect and cure cancer, to produce magnificent works. Why? Because for 70 years we have not focused on this technology itself; instead, we have set up an education system which is certainly not perfect, but which has given us the ethical, philosophical, social and political means that allow us to control (in part) this technology. This is why I am critical of the Montreal Declaration: it demonizes technology and idealizes the human, the same human who, while capable of kindness, generosity and selflessness, is also prone to the loss of control, the endless cruelty and the monstrous massacres that have marked our history. The Montreal Declaration stresses the importance of keeping humans in the decision-making process in order to avoid excesses, the very excesses of which we ourselves have been the champions throughout our history. We didn’t need AI to massacre entire populations, torture the weakest and create concentration camps. The ethical challenge is therefore not technological; it is human. What brakes, and above all what capacity to ignore those same brakes, will we program into the AI of tomorrow? If we program prohibitions into an AI, they will not be crossed, whereas we can never guarantee our own restraint, especially in times of immense tension, fear or anger. AI is not subject to this pressure; it will obey the code we have given it.

 

Recently, many writers have rightly pointed out the biases displayed by AI systems. But these biases are the ones we have introduced into them. Humans are the ultimate black box, their prejudices rarely articulated. Worse, they are not always conscious. We know that skin color, gender, social status, education and age greatly influence how institutions treat people (the examples are many and often dramatic). AI could be programmed to ignore these biases and assess humans on the basis of much more objective data. AI can be objective in a way that we never will be.

 

Do we really think that beings filled with prejudice, anger, frustration and opinion, as we are, hold the key to controlling AIs? Yes, humans are an essential part of the decision-making process, but are they really what we want to base all our guarantees on? That is why I am critical of the Montreal Declaration.

 

(A.-M. B.) In your work, you stress the importance of the role of art in the advent and development of HAI. Can you tell us a little about it?

 

(O. D.) Art is not a strange and mysterious phenomenon. It is, the research on this subject is clear, a structure that boosts survival. We make art to disseminate effective and useful tactics and survival strategies, as Denis Dutton and Nancy Etcoff argue. We make art in order to gather immense amounts of information into a moving, exchangeable and distributable “object” (think of Guernica, for example, which sums up the devastating immensity of war in a single image). We tell stories, says New Zealand researcher Brian Boyd, in order to “practice” new social situations and thus better handle them when they occur in reality (see On the Origin of Stories, Harvard University Press, 2009). We make art to give meaning to the incomprehensible and thus calm and reassure ourselves. We make art to understand the amazing, unique and heartbreaking moments of the world around us and to share them so that the group can survive better. Moreover, the beauty, the sublime, the touching quality of a work of art are universal, despite what we may believe, and here too the research on the subject is clear.

 

In short, art is an exceptional survival mechanism that allows us to probe the intrinsic, to explore deep questions and to give fundamental meaning to the world around us, all things that software cannot do despite the millions of lines of code that make it up. Now imagine infusing this software with these human survival tactics, introducing into it the emotion, the intrinsic, the search for meaning and the need for reverence that drive us all, and using its ability to read staggering sums of data, to find patterns in what seems incomprehensible to us, and to give works autonomy (as artist Ian Cheng suggests). If the functioning of intelligence requires sensitivity (as neurologist Antonio Damasio has shown), imagine the power of an HAI whose cognitive capacity would be amplified by human sensitivity. Imagine, moreover, a poet using AI to give his poetry unexpected depth (as David Jhave Johnston does). Imagine what this art and this HAI would allow us to understand, to study, to grasp.

 

(A.-M. B.) How do you respond to critics who might accuse you of excessive optimism and of minimizing the risks linked to the rise of AI? How do you propose, for example, to solve the “control problem”, that is, “the question of how to build a superintelligent agent that will help its creators and avoid inadvertently building a superintelligence that will harm its creators”, as Wikipedia summarizes it, a problem that concerns many scientists and computer scientists working in artificial intelligence (in particular the computer scientist Stuart Russell, who published a book on this subject in the fall of 2019, entitled Human Compatible: Artificial Intelligence and the Problem of Control)?

 

(O. D.) I would tell them that they have misread the book. La terreur et le sublime examines the question you raise several times and from many angles. The warnings are numerous. Yes, the dangers are present and threatening, but no more and no less than the benefits and the progress these technologies bring, as I discuss in detail in the section on the seven major challenges. I find it strange, moreover, that no one is accused of pessimism when a work focuses solely on technological dangers. I call this fascination with dystopia the fantasy of the apocalypse (it is also strange that no one criticizes the failure of the catastrophic forecasts and projections of previous decades, the vast majority of which never materialized).

 

However, the progress made by humanity over the past two hundred years is absolutely undeniable. How can we deny this fact in the face of the dramatic reduction in poverty, the extraordinary increase in access to education, the dramatic decrease in infant mortality, and the effective fight against the grip of infectious diseases that we have witnessed over two centuries? Is everything fine? No, certainly not. Are there many challenges? Of course. Barely twenty years ago, 29% of the world’s population lived below the threshold of extreme poverty. Today that figure is 9%, but 9% still represents almost 700 million people. The challenges are immense, but the progress is also undeniable. It is however difficult to grasp this phenomenon, since humans tend to deny any data, no matter its quantity or quality, that goes against their emotional perceptions (such as refusing to believe in global warming or the danger posed by GMOs).

 

I stress it once again: La terreur et le sublime clearly demonstrates that the challenges we face, and will face, are first and foremost human problems; they are first and foremost ethical challenges that technology only highlights. Will we lose control over technology, and AI in particular? Only if we want to. Will AI kill human beings on the battlefield without restraint or subtlety? Only if that is what we program and accept. Will it allow us to live better, in a fairer and more equitable world? Only if we demand it. The challenges are immense, but they belong to us.

 


 

 

 

 


BEYOND “GANISM”: AI ART AS CONCEPTUAL ART

by Pau Waelder

A pivotal moment for the current wave of AI art took place on October 25, 2018, at Christie’s auction house in New York. Portrait of Edmond de Belamy (2018), a print on canvas depicting a blurry image of a man in a black suit, was sold for $432,500.

Read more

AICAN, Psychedelic Wisteria, 2018

AICAN: CREATED ARTIST

by Vincent Godin-Filion

AICAN stands out for its abstract expressionism in vibrant colors, where artistic blurs and modulated lines intertwine. Its works always have that same characteristic sheen, that vivid brightness which gives the color scheme an almost hypnotic floral quality…

Read more

Robbie Barrat, Nudes

NEWS

 

Spotlight: AIArtists.org

CQAM luncheon meeting to be seen online

AI Ethics Conference on April 30 (on Zoom)

Read more