All about emerging artists exploring the creative power of AI
A pivotal moment for the current wave of AI art took place on October 25th, 2018 at the auction house Christie’s in New York. Portrait of Edmond de Belamy (2018), a print on canvas depicting a blurry image of a man in a black suit, was sold for $432,500, more than forty times its pre-sale estimate of $10,000. The artwork had been described by Christie’s as “[t]he first piece of AI-generated art to come to auction”[1] and consequently attracted the attention of collectors and the media, ultimately boosting the hammer price. In a press release, the auction house triumphantly signaled the sale as a milestone in the history of the art market and mistakenly described the artwork as created by a computer program instead of an artist:
Obvious, Portrait of Edmond de Belamy, 2018
This portrait, however, is not the product of a human mind. It was created by an artificial intelligence […] Portrait of Edmond Belamy sold for an incredible $432,500, signaling the arrival of AI art on the world auction stage.[2]
Portrait of Edmond de Belamy is one of several artworks created by artists and Machine Learning researchers Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier under the collective name Obvious. It was generated using a computing system known as a Generative Adversarial Network (GAN), which is composed of two parts: a Generator and a Discriminator. The artists provided the system with a dataset of 15,000 portraits from the 14th to 20th centuries; the Generator creates new images imitating the features found in these portraits, while the Discriminator tries to discern whether a given image is real (an actual artwork from the dataset) or fake (a composition made by the Generator). The output of the system consists of those images that fooled the Discriminator into labeling them as “real.” The artists evaluated these images and changed certain variables in the algorithms to obtain better results. This process led to the Belamy Family[3], a series of eleven portraits, each printed on a 70×70 cm canvas and presented in a gilded wooden frame, with a mathematical formula as a signature.
Edmond was not the first portrait to be sold, but it is certainly the best known, given the abnormal price it fetched at auction. This sale can be considered a pivotal moment for current AI art not because of the relevance of the artwork, but precisely because a piece created in a rather straightforward manner using a freely available GAN managed to get the attention of the mainstream contemporary art world, apparently announcing a trend of highly profitable sales for artworks created with artificial intelligence. Obvious used the code and dataset from artist Robbie Barrat’s art-DCGAN, which is freely distributed on the software development platform GitHub, and apparently asked him to modify it in order to obtain the output they were looking for. While this is quite common among artists working with code, who share their projects and build on each other’s libraries[4], the sale of Edmond de Belamy was an indicator that working with GANs, in itself a relatively accessible way of using AI, could lead to a successful career in the art world.
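The adversarial process described above can be sketched in a few lines of code. The sketch below is an illustrative reconstruction of the general technique in PyTorch, not the code used by Obvious or Barrat’s art-DCGAN: the tiny fully connected networks and the random tensors standing in for the portrait dataset are placeholders.
```python
# Minimal sketch of a GAN training loop (PyTorch). Illustrative only: the tiny
# networks and random tensors are placeholders for the actual models and the
# 15,000-portrait dataset described above.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: turns random noise into a (flattened) image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: estimates the probability that an image comes from the dataset ("real").
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(128, img_dim) * 2 - 1   # stand-in for a batch of portraits
ones = torch.ones(128, 1)    # label: "real"
zeros = torch.zeros(128, 1)  # label: "fake"

for step in range(1000):
    # Train the Discriminator: dataset images should be labeled real, generated ones fake.
    fake_images = G(torch.randn(128, latent_dim)).detach()
    d_loss = bce(D(real_images), ones) + bce(D(fake_images), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the Generator: try to make the Discriminator label its output as real.
    g_loss = bce(D(G(torch.randn(128, latent_dim))), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# After training, the images the Discriminator accepts as "real" are the candidate
# outputs that an artist would then curate, as Obvious did for the Belamy series.
```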
Interpreting GANism
Over the last year, artificial intelligence has attracted unprecedented attention in the media and notably in the art world, with dozens of group exhibitions devoted to AI featuring the work of a growing roster of artists, many of whom explore the aesthetic possibilities of GANs. At the same time, the art market has, for the moment, shown a declining interest in AI art: Mario Klingemann’s Memories of Passersby I (2018) fetched a relatively modest $51,000 at an auction held at Sotheby’s in London in March 2019. While technically superior to the Belamy portraits, this generative artwork did not fetch six figures[5], although it also used classical paintings as source material and suggested the idea of the machine (or rather, the code) as an autonomous creative agent. Despite their differences, both artworks use artificial intelligence as a tool to generate an image that is meant to be read as a painting while revealing its artificial nature. In this manner, they play with the notion of the machine as an artist, which has been a subject of controversy ever since the first exhibitions of algorithmic art in the mid-1960s. The artificial neural networks are therefore used both as a tool for the production of the artwork and as a conceptual element, even if only to indicate that the image was generated by programming code and had never existed before. In the case of Obvious, the GAN has such a central role that the artists identified their work with “GANism,” a term introduced in 2017 by software engineer and AI researcher François Chollet in a brief tweet that read:
GANism (the specific look and feel of seemingly GAN-generated images) may yet become a significant modern art trend.[6]
It must be noted that Chollet refers to a particular aesthetic (a “look and feel”) that resembles GAN-generated images: not necessarily art made using AI, but art that looks like the output of an artificial neural network, much in the same way that software glitches or pixelated images have been used by artists before. However, the trend he identified has been interpreted by Obvious as a full-fledged art movement centered on the question of whether an algorithm is capable of creativity[7]. This misunderstanding invites us to consider two aspects of the use of GANs in artistic projects: on the one hand, the frequent use of Generative Adversarial Networks by artists has effectively led to an aesthetic trend that associates blurry, impossible images with AI (GANism as understood by Chollet); on the other hand, artists often expect the use of these artificial neural networks to convey a reflection on the machine as a creator or to effectively expand the possibilities of artistic creation using AI (GANism as understood by Obvious).
Research around GANs has focused on producing increasingly realistic, high-definition images, to the point where they cannot be distinguished from a real photograph, and this has caught the attention of artists who want to explore the creative possibilities of having a machine generate visual compositions on its own. Training artificial neural networks and producing high-definition images still demands considerable computational resources, and therefore what most artists can achieve with the hardware at their disposal are semi-abstract, viscous images reminiscent of oil paintings, such as the Belamy portraits. In many cases, this technological limitation has been adopted as an aesthetic choice, leading to a proliferation of “GANny”[8] artworks that signal their use of artificial intelligence by producing very similar and recognizable imagery. As fascinating as the process of pitting two artificial neural networks against each other may be, its use alone does not justify the relevance of the artwork. Simply feeding the GAN a certain dataset and taking what comes out of it reduces its possibilities to a sort of sophisticated Photoshop filter. The output then has no other conceptual grounds than the fact that it was generated by the computer semi-autonomously, and falls back on arguments such as “the machine dreamt it” or “these are the faces of people who never existed,” which end up sounding shallow through repetition.
GANism may therefore describe an initial stage in the current use of artificial neural networks to create art, in which the focus is placed on the technology itself rather than on the content that the artwork intends to communicate. This, in fact, is a recurring criticism of all “new media” art, from algorithmic art to interactive installations, bioart, or AI art: the technology and the possibilities it opens up are so overwhelming that many artists do not seek to develop a concept beyond the outputs that their experimentation provides. In this sense, a “GANist” artwork mostly stays on the surface of the technology, providing a framework that justifies its visual output and relying on the relative novelty of the medium to assert its relevance. This happens both when the technology is somewhat “new” and when an artist first incorporates it into their creative practice: although art using artificial intelligence has a long history, the popularization of GANs dates back only about five years, with a strong increase in the number of artists using artificial neural networks over the last two years. It is likely, then, that the concepts associated with training a neural network will seem less relevant as the technology becomes commonplace, and that, as it becomes easier to generate more realistic, high-resolution images with a GAN, these will no longer be identifiable with a particular aesthetic. In the meantime, the work of artists who use artificial intelligence as a concept in addition to (or instead of) a tool provides an indication of the paths that AI art may follow.
AI art as conceptual art
Shortly after the Belamy sale, Prof. Ahmed Elgammal, Director of the Art and Artificial Intelligence Laboratory at Rutgers University, wrote an article in which he stressed that AI art should be understood as conceptual art:
The art is not just in the outcome, the art is in the process that leads to that, including the curated dataset, the choice of the algorithm and its parameters, and the post-curation.[9]
From this perspective, working with artificial intelligence implies much more than staging the possibility of machines replacing artists while ignoring the very human decisions that take place at every step of the process. Memo Akten has dedicated his PhD research to investigating how Machine Learning can enhance artistic expression, with a focus on understanding how the trained neural network learns, and in particular how to “steer the model to produce not just anything that resembles the training data, but what *we* want it to produce, *when* we want it to produce it”[10]. Part of the output of this research is illustrated by the series Learning to See, in which Akten explores how the neural network interprets what it sees based on its previous knowledge (a large dataset of images). For instance, in an interactive installation, a camera captures a live feed of a tabletop with several everyday objects, and an AI model generates an interpretation of this feed based on the contents of the dataset: if the dataset is made of images of flowers, the neural network will translate any shape into a flower; if the dataset contains seascapes, every shape will become a mass of water. The audience can therefore manipulate the objects and see how the image is transformed in real time. Through this use of the neural network, Akten aims to show how our perception of the world is shaped by our knowledge, cultural biases, and expectations. Furthermore, in Optimising for Beauty (2017), the artist effectively steers the model to produce an eerie form of perfect beauty using more than 200,000 images of celebrities from the CelebFaces Attributes (CelebA) dataset. Akten stresses that the generation of fictitious portraits is not the subject of the artwork, nor is the dataset itself, which shows that celebrities tend to have a similar facial structure. What interests him is the way in which he can manipulate the algorithm to obtain the results he is looking for, using the artificial neural network as an instrument of his artistic expression. Ultimately, the artwork shows how the algorithm can be biased, and reflects on a way of understanding the world that is increasingly polarized and blind to diversity.
Memo Akten, Optimising for Beauty (2017)
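The mechanics of the Learning to See installation can be summarized in a short sketch. The code below is a hypothetical reconstruction, not Akten’s implementation: load_model and its inner translate function stand in for an image-to-image network trained exclusively on one theme (flowers, seascapes), and the random array stands in for a live camera frame.
```python
# Conceptual sketch of the Learning to See setup: a live camera frame is reinterpreted
# by a network that has only ever seen images of one theme. Hypothetical placeholders
# throughout; this is not Akten's code.
import numpy as np

def load_model(theme: str):
    """Stand-in for loading an image-to-image network trained only on `theme` images."""
    def translate(frame: np.ndarray) -> np.ndarray:
        # A real network would redraw the frame using only what it learned from `theme`
        # (every shape becomes a flower, or a mass of water); here we simply normalize
        # the frame as a placeholder for that reinterpretation.
        return (frame - frame.min()) / (frame.max() - frame.min() + 1e-8)
    return translate

translate = load_model("flowers")

frame = np.random.rand(256, 256, 3)   # stand-in for one frame of the tabletop camera feed
output = translate(frame)             # the image projected back to the audience in real time
```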
Anna Ridler has also addressed the process behind the output of a GAN, and particularly one of its constitutive elements: the dataset. She points out that the dataset is central to the outcome, yet it is rarely discussed[11], as it is commonly outsourced and compiled without paying much attention to how the images are collected or tagged. In The Fall of the House of Usher (2017), she decided to create her own dataset by making a series of 200 drawings of the main scenes from a 1928 short film by James Sibley Watson and Melville Webber, based on Edgar Allan Poe’s celebrated story. She then fed her drawings to a GAN, fed its output to a second artificial neural network, and fed that output to a third. The artwork presents an animation in which the outputs of the three GANs are displayed simultaneously, alongside the original drawings. Similarly to Akten, Ridler has steered the model, but instead of working on the algorithm she has taken control of the dataset. Additionally, the recursive use of GANs leads to a deterioration of the original image (itself a visual interpretation of a cinematic version of a literary text) that speaks of the decadence described in Poe’s tale and questions where creativity is to be found, as everything in this piece is based on previously created content. The artist expresses her interest in understanding how to integrate artificial intelligence into her work by questioning it rather than simply using it as a tool:
Anna Ridler, The Fall of the House of Usher II (2017)
I do not see a GAN as a tool like I would think of say a photoshop filter but neither would I see it is as true creative partner. I’m not really quite sure what it is.[12]
The uncertainty described by Ridler is actually part of what makes GANs interesting for artists and fascinating for viewers, even when the argument “the machine dreamt it” loses its appeal. The dialectical nature of a generative adversarial network suggests that there is an exchange taking place between machines (a “machine” being any computational system) from which humans are excluded. Jake Elwes exposes this type of “conversation” in Closed Loop (2017) by using two artificial neural networks, one trained with a dataset of 4.1 million captioned images, and another trained with 14.2 million photographs. The dialogue starts with the first machine presenting a description of an image, to which the second machine responds by generating an image matching the description. The first machine then describes the generated image, and so it continues in an endless feedback loop. As stressed by the artist, the artificial neural networks do not receive any external input; they generate their own texts and images. This exchange sheds light on the functioning of the algorithms but also confronts viewers with an autopoietic system that they must try to understand. The output of each artificial neural network is less important than how the contents relate to each other, how a certain logic can be extracted by comparing them. In this piece, it can be claimed that the machines have actually created the text and images without human intervention, but this is no longer the subject of the artwork, since Elwes, like Akten and Ridler, is looking deeper into how the algorithms work.
Jake Elwes, Closed Loop (2017)
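The feedback loop at the heart of Closed Loop can be outlined in code. The sketch below is a conceptual reconstruction, not Elwes’s implementation: describe_image and generate_image are hypothetical stand-ins for the two trained networks (one captioning, one image-generating), and the dummy image is just a grid of numbers.
```python
# Conceptual sketch of the Closed Loop feedback: one network describes an image, the
# other generates an image from that description, indefinitely and without external
# input. The two functions are hypothetical stand-ins, not Elwes's trained models.
import random

def describe_image(image):
    # Stand-in for the captioning network trained on images with descriptions.
    brightness = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return "a bright, open scene" if brightness > 0.5 else "a dark, enclosed scene"

def generate_image(caption):
    # Stand-in for the generative network trained on photographs.
    rng = random.Random(caption)
    return [[rng.random() for _ in range(8)] for _ in range(8)]

def closed_loop(seed_caption, steps):
    # Once seeded, the two machines respond only to each other.
    caption = seed_caption
    for _ in range(steps):
        image = generate_image(caption)   # second machine: description -> image
        caption = describe_image(image)   # first machine: image -> new description
        yield caption

for step, caption in enumerate(closed_loop("a field of flowers", steps=5)):
    print(step, caption)
```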
Finally, another way of exploring GANs deals with the concept of automation. While it may be debated whether the computer is capable of creativity and whether it will replace the artist, it is obvious that, at this moment, it can save artists a lot of work. Guido Segni (the alter ego of Clemente Pestelli) has dedicated most of his artistic research to considering artistic production as labor and exploring the ways in which digital technologies can have an impact on it. Demand Full Laziness (2018-2023) is an ongoing project designed as a five-year plan to partially automate his artistic production by having a series of deep-learning algorithms generate artworks that are distributed among patrons. Segni trained several GANs to produce images based on video footage of his moments of rest (sleeping, reading, doing nothing), which are regularly distributed among supporters who pay a monthly fee on the platform Patreon for the duration of the project. In this manner, the artist manages to produce artworks while apparently doing nothing, and earns an income that helps him sustain his practice. The whole project is thus a comment on the fact that the artist must produce items to be sold in order to financially support his own production. Artificial intelligence intervenes with the promise of automating production so that the artist can rest. Segni’s work adds a humorous twist to the fear of losing one’s job to AI and subversively suggests that most of the cultural products we enjoy are, or soon will be, created by an algorithm.
Guido Segni, Demand Full Laziness (2018-2023)
There is much more to artificial intelligence (and AI art) than GANs, but the sudden and remarkable impact they have had on artistic production over the last few years deserves close attention, particularly in terms of how AI has been exclusively associated, in the visual arts, with GAN-generated images. This GANism (as described by François Chollet) must be overcome, as it will quickly become obsolete, and only those artworks that delve deeper into what AI can do and what it means to us will remain relevant in the future.
All images taken at the exhibition Deus Ex Machina, curated by Karin Ohlenschläger and Pau Waelder. LABoral Centro de Arte y Creación Industrial, Gijón, Spain, 2019. Photo: Marcos Morilla.
Notes:
[1] “Is artificial intelligence set to become art’s next medium?”, Christie’s, 16 October 2018. Retrieved from: https://www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx
[2] “Is artificial intelligence set to become art’s next medium?”, op. cit.
[3] The name Belamy was chosen by Obvious as a tribute to Ian Goodfellow, who first conceived Generative Adversarial Networks in 2014 and whose surname roughly translates to “bel ami” in French.
[4] For instance, artist Rafael Lozano-Hemmer shares some of the code developed for his projects on Github and advocates for the sharing of code libraries as a form of preservation. See: https://github.com/antimodular/Best-practices-for-conservation-of-media-art
[5] Arguably, the price paid for Portrait of Edmond de Belamy five months earlier was considered excessively high, and this led collectors to be less confident in the value of AI art, which consequently had a negative impact on the sale of Klingemann’s piece.
[6] “AI: The Rise of a New Art Movement”, Obvious, 14 February 2018. Retrieved from: https://obvious-art.com/blog-post.html
[7] “AI: The Rise of a New Art Movement”, op. cit.
[8] I first heard this term from curator DooEun Choi in 2019.
[9] Ahmed Elgammal, “What the Art World Is Failing to Grasp about Christie’s AI Portrait Coup”, Artsy, 29 October 2018. Retrieved from: https://www.artsy.net/article/artsy-editorial-art-failing-grasp-christies-ai-portrait-coup
[10] Memo Akten, “PhD Research”, Memo Akten | Mehmet Selim Akten | The Mega Super Awesome Visuals Company. Retrieved from: https://www.memo.tv/info/#bio
[11] Anna Ridler, “Fall of the House of Usher. Datasets and Decay”, Victoria and Albert Museum, 17 September 2018. Retrieved from: https://www.vam.ac.uk/blog/museum-life/guest-blog-post-fall-of-the-house-of-usher-datasets-and-decay
[12] Anna Ridler, “Fall of the House of Usher. Datasets and Decay”, op. cit.
Pau Waelder (Spain) is an independent art critic and curator, researcher in contemporary art and new media. Website: https://www.pauwaelder.com/
He has often contributed to the CIAC’s Electronic Magazine in the past.