I am thrilled and honored to announce that Artnome will be curating an exhibition titled Augmenting Creativity - Decoding AI and Generative Art at Nanjing University this November 2019.
In preparing an exhibition in a country I have never been to and whose language I do not speak, I’ve been thinking a lot about translation and communication. Specifically, I have been thinking about the language of computing as an increasingly dominant and universal language which shapes all aspects of our lives despite only a small percentage of our population (developers and computer scientists) having any real fluency in it.
People with limited exposure to programming and training networks often assume creating generative art is a carefully planned, predictable, and linear process with a predetermined outcome, but nothing could be further from the truth. A common thread across the artists in this exhibition is a circuitous path of discovery, an addiction to surprise, and an ability to embrace and build on “accidents” or the unexpected.
There is, of course, no one “right” way to conceive of these surprises and the role that technology plays in the generative artmaking process. Some see code and networks as just another tool. Others describe them as more of a creative collaborator. That each artist in this show has their own way of describing how technology augments their creativity only serves to further individuate their work and bring a richness to the dialogue surrounding the exhibition.
Among artists, generative artists carry a unique burden of having to explain how programming and machine learning work without distracting their audience from their other artistic ambitions. Without at least some insight into computing and code, we as an audience cannot fully appreciate their artistry and craft.
The most engaging generative artists share the ability to seamlessly educate us on how their work is created and use that opportunity to reinforce their artistic objectives rather than to distract from them.
In his 2010 presentation at the Flash on the Beach conference, Jared Tarbell does a masterful job of using his artwork as an aid in explaining programming concepts. Tarbell starts by sharing his love of nature, something we can all relate to. He then explains how iterating or “looping” a simple command to create dots can empower us to recreate the same complex and beautiful patterns found in nature. He then uses the simple algorithm behind his famous 2003 work Substrate (which we are excited to feature in our exhibition) as an example.
As Tarbell describes it:
Let’s imagine in this system that we create a line and we allow this line to draw. The line will continue to draw until it hits the edge of the screen or another line and then it stops and then two new lines form. New lines will also only form at right angles to existing lines. So those are the rules. But if we repeat this process over and over again, something amazing happens, so I am just going to let this go.
The construction of Tarbell’s Substrate mirrors that of his presentation in that it builds on very simple definitions and actions to create something complex, rewarding, and beautiful. The fact that Tarbell shows us how his work is made does not detract from it. Instead, understanding Tarbell’s process only increases our curiosity and engagement.
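The rules Tarbell describes are simple enough to sketch in code. Below is a toy Python version on a coarse grid, not Tarbell’s implementation: a line grows one cell at a time, stops when it hits the edge or another line, and then two new lines branch off at right angles. (One simplification to note: this sketch branches from the collision point, while Tarbell’s original seeds new cracks from random points along existing lines.)

```python
import random

WIDTH, HEIGHT = 60, 60

def substrate(steps=1500, seed=1):
    """Toy, grid-based sketch of Substrate-style crack growth."""
    random.seed(seed)
    grid = [[False] * WIDTH for _ in range(HEIGHT)]   # True = cell on a line
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    # Each active line is (x, y, dx, dy): a position and a direction.
    cracks = [(WIDTH // 2, HEIGHT // 2, *random.choice(dirs))]
    grid[HEIGHT // 2][WIDTH // 2] = True
    for _ in range(steps):
        x, y, dx, dy = cracks.pop(random.randrange(len(cracks)))
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and not grid[ny][nx]:
            grid[ny][nx] = True               # the line keeps drawing
            cracks.append((nx, ny, dx, dy))
        else:
            # The line hit the edge or another line: it stops, and two
            # new lines form at right angles to it.
            cracks.append((x, y, dy, dx))
            cracks.append((x, y, -dy, -dx))
    return grid

art = substrate()
```

Repeating that one rule over and over is the whole program; the intersecting, city-grid-like texture of Substrate comes entirely from the iteration, not from any drawing of the final pattern.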
Tarbell originally created Substrate back in 2003 using Processing, a programming language designed specifically for creatives. Casey Reas and Ben Fry, co-creators of Processing, wanted to create a language and a community that could make it easier for creatives to explore programming as a tool for making art.
The Processing language has been a huge success and has empowered hundreds of thousands of creatives by reducing the learning curve to use programming as an artmaking tool.
We are honored to be featuring Reas’ 2004 MicroImage in the exhibition. As with Tarbell’s Substrate, Reas’ MicroImage is a complex system constructed from the repetition of relatively simple parts and commands. By creating thousands of dots and programming them each with a simple set of behaviors designed to react to their surroundings, Reas explores the concept of emergence within a software environment.
The dots follow simple rules, and there is no central system instructing how each individual dot should behave. Local interactions between the dots lead to the emergence of a type of swarm intelligence, a global behavior, which the individual dots could not exhibit on their own.
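This kind of emergence can be sketched with a minimal flocking rule. The toy Python below (a generic alignment rule, not Reas’s code) gives each dot a single local behavior: steer toward the average heading of its nearby neighbors. No dot has any global knowledge, yet coordinated group motion emerges from the repeated local interactions.

```python
import math
import random

def step(dots, radius=10.0, align=0.05, speed=1.5):
    """One update: each dot steers toward the average heading of its
    neighbors. There is no central controller; any order that appears
    emerges from local interactions alone."""
    new = []
    for (x, y, vx, vy) in dots:
        sx = sy = 0.0
        n = 0
        for (ox, oy, ovx, ovy) in dots:
            if (ox - x) ** 2 + (oy - y) ** 2 < radius ** 2:
                sx += ovx
                sy += ovy
                n += 1
        if n:
            # Nudge this dot's velocity toward the neighborhood average.
            vx += align * (sx / n - vx)
            vy += align * (sy / n - vy)
        norm = math.hypot(vx, vy) or 1.0     # keep a constant speed
        vx, vy = speed * vx / norm, speed * vy / norm
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(0)
dots = [(random.uniform(0, 100), random.uniform(0, 100),
         random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(50)]
for _ in range(200):
    dots = step(dots)
```

Adding just one or two more local rules (separation, cohesion) is enough to produce the schooling and flocking patterns the paragraph above describes.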
The results as seen in Reas’ MicroImage are breathtaking and feel like a living, breathing, pen-and-ink drawing undulating with rhythms reminiscent of schools of fish or flocks of birds.
Reas shares his process:
The core of the “MicroImage” software was written in one day. The current version of the software has developed through a gradual evolution. While the base algorithm controlling the movement was constructed in a rational way, subsequent developments were the result of aesthetic judgments constructed through many months of interacting with the software. Through directly manipulating the code, hundreds of quick iterations were created, and changes were implemented based on analyzing the responsive structures. This process was more similar to intuitive sketching than rational calculation.
Substrate and MicroImage feel natural because they harness concepts and algorithms found in nature, but in the hands of generative artists like Tarbell and Reas, these algorithms create new worlds rather than replicate the one we live in. Part of what is new and powerful about generative art is the ability to go beyond painting an illusionistic image of nature and to actually create systems that emulate the creative forces of nature itself.
If programming can better help us understand the complexities of nature by harnessing hidden algorithms for the creation of art, could it also help us understand the complexities of our own minds and the minds of machines?
Artist David Young explores the similarities and differences between human and machine learning by training models with a severely limited data set. While most still consider art made with AI and ML to be “generative art,” the process is a significant departure from traditional methods used with tools like Processing. Young explains in his essay Tabula Rasa: Rethinking the Intelligence of Machine Minds:
Like AI programmers, artists working with AI don’t encode rules, but instead train networks. The resulting artworks look and feel different from previous forms of computer-based art for they reflect the organic messiness of their vast and inscrutable neural networks.
In an attempt to better understand machine intelligence, Young takes an empirical approach to training GANs (generative adversarial networks), observing closely as he sparingly adds new training data. By severely limiting the quantity of training data used for the model, Young hopes to better understand how AI builds an understanding of the natural world.
Our exhibition, Augmenting Creativity, features Young’s documentary animation Tabula Rasa (b67o,2600,2), which chronicles the learning process of his GAN. Watching Young’s model learn in compressed time is fascinating, but it raises more questions than it answers. Among the key questions it prompted for Young:
Do we need to develop a new concept of beauty to evaluate what emerges from working with these systems? And is it possible that a new aesthetic can give us a better, more empowered understanding of AI?
Young argues that all training data includes the emotional bias and irrationality of the human who selected it. This leads him to ask:
Given that we can’t help but use human terms to describe the behavior of AI systems — we say that the machine is “learning,” it “knows” something, it has “skills,” the word “intelligence” is inescapable — perhaps we should look for other human traits. Might machine “emotions” be a new way for us to approach and understand AI?
Whether we think of AI as an emotional entity or simply accept that the data it is trained on is imbued with the emotions and bias of the human who curated it, it is interesting to explore how it can interpret or even contribute to complex cultural concepts like fashion and costume.
Artist Harshit Agrawal uses artificial intelligence and machine learning to generate faces, drawing inspiration from the traditional mask cultures of different regions of India. Agrawal explains:
Throughout our years of existence as a culture, we’ve crafted and performed several kinds of rituals and ceremonies, both collective and individualistic as acts of transformation and transcendence. Masks and face-transformative decorations have been fundamental across the Indian culture in our journeys into unknown realms, in our celebrations of the malleability of human representation, or as a tool for practical disguise and entertainment.
It helps us engage with our world from a completely new vantage point, augmenting our sense of self, very similar to what technology, especially AI, enables today. What happens when these media of transcendence collide? Can we teach machines about our cultural heritage, and as a result, make them an instrument for our own exploration and engagement with our heritage?
Agrawal’s project allows visitors to explore both machine learning and the language of India’s mask culture directly through an interactive installation, which we are excited to include in the Augmenting Creativity exhibition.
Artist Robbie Barrat trained a GAN on Balenciaga’s fashion catalogs to produce radical new fashions and styles unlike anything a traditionally trained human fashion designer would ever come up with. Barrat particularly likes the absence of symmetry, the random placement of pockets, and the addition of non-functional adornments like handheld tassels.
Barrat uses Pix2Pix technology in combination with DensePose to map the new AI outfits to the models. DensePose tries to estimate human poses and "...aims at mapping all human pixels of an RGB image to the 3D surface of the human body." Put more simply, Barrat is training the AI to recognize not only the clothing in the Balenciaga catalog but also the poses of the fashion models, and is then mapping the new fashions onto AI-generated fashion models and postures.
Barrat went a step further with his creative collaborator Mushbuh and produced physical garments which were recently included in a show at the Fashion Museum of Hasselt. We are excited to be showing Barrat’s original runway video in our exhibition.
Barrat has always been careful to describe machine learning as a creative tool and is hesitant to anthropomorphize the machine as an artistic partner. Barrat points out that with generative art made using machine learning, and specifically GANs (generative adversarial networks):
A human chose the data set
A human designed the network
A human trained the network
A human curated the resulting outputs
Barrat’s framing of GANs as a tool and his emphasis on the human contribution helps to explain why art made with machine learning varies so much from artist to artist despite using similar models. It is predominantly human creativity that is driving the creation of the work and human creativity that supplies the work’s purpose and meaning.
Artist Sougwen Chung acknowledges the importance of the human artist in generative art; however, she tends to think of machines more as collaborators than just tools. Chung is best known for her project Drawing Operations, in which she engages in a drawing performance with a robotic arm. Chung explains, “The interface is not simply a mediating apparatus for creation, but a speculative agent of co-creation.”
Like Agrawal, Chung is comfortable mixing technology with tradition. In a recent interview, Chung explains:
I don’t see tradition and technology as oppositional or mutually exclusive. I utilize the technology of tradition, and are we not growing into a tradition of technology? Insofar as we can frame a narrative around tradition as that which is made by hand and technology as that which is made by machine, the work I’m doing is certainly an exploration of the possibilities in the intersection of both… one that is gaining clarity the further along I go.
In addition to robotics and neural networks, Chung’s work uses a variety of traditional art materials as well as digital tools including, but not limited to, Photoshop, Illustrator, Cinema 4D, Processing, nano-controllers, electric violin, Wacom tablet, Arduino, projectors, and MadMapper.
Augmenting Creativity will feature Chung’s work The Limitless, The Absolute which she released alongside the Lunar Calendar for 2017.
According to Chung:
“The Limitless, The Absolute” is an artistic interpretation of the Bagua八卦; the eight Taoist trigrams that represent the principles of reality as organic, emotive, and experiential environments. The correspondences between the eight interlocking concepts of the Bagua are used in Taoist cosmology to interpret and contemplate reality and existence. The patterns formed as the trigrams interrelate provide the conceptual logics used by the artist to create delicate, complex, shifting environments resembling ecosystems.
Like Chung, artist and roboticist Alexander Reben is fascinated with the relationship between humans and machines. Reben’s work deals with synthetic psychology, artificial philosophy, and robot ethics to “probe the inherently human nature of the artificial.”
New technologies can often feel foreign and distant from our human nature, frequently leading to irrational fear and anxiety. Reben’s work explores and often collapses this distance by exploring the relationships between humans and machines on a more visceral and intimate scale.
Isaac Asimov’s first rule of robotics is “do no harm.” Breaking this rule, Reben developed a robotic arm with a lancet that was capable of detecting a human presence and administering a needle prick to the intruder.
Reben explains:
It was kept simple intentionally, as one of the ideas with the work is that you don’t need an advanced AI or complicated system to start encountering interesting philosophical and ethical issues.
With just a few small adjustments, Reben turned his fear-inducing robotic arm into a soothing head massager, exploring the flip-side potential of administering pleasure between human and machine instead of pain.
Reben’s most recent project, Latent Faces, debuted last month at Ars Electronica in Linz, Austria. For the project, Reben built a photo booth and used machine learning and GANs to create an average portrait from over 700 attendees of the festival.
Reben notes that when people become active participants and see themselves as part of the project, it personalizes and transforms their relationship to the technology in ways that simply seeing a tech demo of GANs does not.
We are excited to be including a variation of Latent Faces as part of the Augmenting Creativity show. The photos of visitors to the show in Nanjing will be processed by Reben in batches locally in San Francisco to create a face of the average attendee in Nanjing.
It is an honor to continue to curate exhibitions of generative art around the world. I am extremely grateful to both Nanjing University for hosting the exhibition and to the many artists for their generosity in working with me to put together the show. The exhibition is open to the public, so if you find yourself near Nanjing, please do stop in.
Epilogue - Professor CJ Chen
“What’s beautiful is difficult,” as Plato said. Understanding art is also difficult now -- especially digital art. We all kind of agree that everyone is an artist in the digital era due to the convenience of digital tools. However, we also know that not everyone can understand the complexities of writing computer code, which is an important part of understanding digital art.
Accordingly, audiences have difficulty appreciating artworks made by/with digital technology. It may not bother some artists, who feel the audience does not need to understand the code and programming behind the interface before they see the works, but it does bother researchers like me.
We study art as well as digital technology. We are creating/imagining an ecological environment where artists, audiences, and machines -- including hardware and software -- can understand and communicate with each other. Because some of us believe that the future will be built by humans and machines together, we want to understand the technology better from the beginning as part of the process of appreciating the art.
Making art could be the most creative and innovative way to help human beings talk about and understand artificial intelligence and machine learning. Understanding these technologies is important because it impacts the future of all humans, no matter where you are. That is why I, a researcher on digital media and humanities at Nanjing University, China, invited Jason Bailey, a critic and curator of digital art in the US, to hold this exhibition in China.
Jason and I share the common belief that artists (and curators) can play a critical role in helping audiences to understand the new technologies shaping our world. As Jason says:
Programming is an increasingly dominant and universal language which shapes all aspects of our lives despite only a small percentage of our population (developers and computer scientists) having any real fluency in it.
Fear comes from ignorance. We hope everyone will enjoy and appreciate the works as well as the beauty of the collaboration between artists and technology, human and machine.
- CJ Chen