Is there an algorithmic sequence behind creating a stunning piece of art, one that a robot could learn and replicate?
I would answer more “yes” than “no”. The reason is that any kind of creativity involves logic, structure, rules, mathematics and even physics. Watercolour, oil paint and acrylic paint dry at different rates, need different kinds of canvas and demand different kinds of hand movement while painting. You cannot just mix everything together and expect it to be good.
You have probably seen Bob Ross’s “The Joy of Painting” at least once. Yes, he says every time that “there are no mistakes, only happy accidents”; however, look at his movements while painting. They are very structured, even mechanistic, strict and balanced: he starts with the background, then moves on to the bigger objects, and finishes with the smallest details on his canvas. Not much chaos, I would say; mostly balance and just a bit of improvisation. You might be surprised, but even academic art used algorithms for scene layout together with colour and content composition. So, art is not just disordered splashes of paint over the canvas, or at least the final work is not. But what about robots, machines, computers? Are they able to repeat what humans do, maybe with an even more stunning result?
Everything that can learn is capable of creating!
One proof comes from the Google Brain team, which recently conducted a research project called Magenta that explores the role of machine learning in the process of creating art and music. As the project’s creators say: “We develop new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models.”
Well, that’s pretty understandable: everything that can learn is capable of creating. In the case of Magenta, the system used already written and drawn works to create its own compositions and artworks. Of course, Magenta still lacks improvisation, so there is plenty of work left for the Google team.
The question is whether people would appreciate that. It is the audience’s acceptance and appreciation that makes something a piece of art, and machine-generated works might struggle with that. But before setting off on that philosophical journey, let’s look at what scientists and programmers say.
Project Magenta was launched in 2016 and is now known for generating its own musical compositions.
Its main human engine is Douglas Eck, a research scientist on the Google Brain team.
Magenta was created by applying a range of deep learning and machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. In simple words, Google’s specialists applied different techniques for teaching machines to create something new: the AI was trained on real, existing examples of music and art.
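Magenta’s sequence models are far more sophisticated than anything that fits in a few lines, but the core idea of learning structure from examples and then generating something new can be illustrated with a much simpler stand-in: a first-order Markov chain trained on a few hypothetical toy “melodies” (this is an illustrative simplification, not anything Magenta actually uses).

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical toy training "melodies" (note names) -- made-up data.
melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "D", "C"],
    ["E", "G", "A", "G", "E"],
]

# "Training": count which note tends to follow which.
transitions = defaultdict(list)
for m in melodies:
    for a, b in zip(m, m[1:]):
        transitions[a].append(b)

def generate(start="C", length=8):
    """Sample a new melody by randomly walking the learned transitions."""
    notes = [start]
    while len(notes) < length:
        options = transitions.get(notes[-1])
        if not options:          # dead end: fall back to the start note
            options = [start]
        notes.append(random.choice(options))
    return notes

print(generate())  # a new 8-note sequence in the style of the examples
```

The generated melody is new, yet every note-to-note step was seen in the training data, which is the same learn-then-generate pattern (at a vastly smaller scale) as Magenta’s neural models.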
One of the Magenta project’s recent achievements was a collaboration with Sony on “Daddy’s Car”, a computer-composed song inspired by the Beatles. Magenta has also learned artistic styles from various paintings and can now apply any of those styles to a given image.
The question is: would people appreciate that?
On the other side, there are computer graphics and so-called non-photorealistic rendering: methods that have made a lot of progress in imitating painting styles with a computer. One family of AI tools modifies original images into stunning artworks in different, often psychedelic, styles; DeepDream is one of the most popular of these image-modification platforms.
Developed at the University of Konstanz, E-David (an acronym for Electronic Drawing Apparatus for Vivid Interactive Display) is a combination of a welding robot of the kind used to build cars and a computer running a purpose-built algorithm. The creators’ aim was to build a painting machine that mimics human painters and is able to distribute real paint on a real canvas. A computer programme produces drawing commands which are executed by the machine. Although the output is not perfect, it is worth seeing.
E-David was equipped with a camera that, through a visual control loop, photographs the machine’s output. The machine and its software collaborate in a way that allows researchers to correct errors on the canvas and gradually approach a given input image. E-David’s purpose, of course, was not just performance but research: the hypothesis is that drawing, or at least the technical part of the painting process, can be treated as an optimization process in which colour is distributed on a canvas until the content becomes recognizable, regardless of whether the painting is representational.
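That feedback loop can be sketched in miniature. The following is a loose illustration, not E-David’s actual algorithm: a random “dab of paint” on a toy greyscale grid is kept only when a camera-style comparison says the canvas moved closer to a hypothetical target image.

```python
import random

random.seed(1)

# Hypothetical 8x8 greyscale "input photo" (0 = blank paper, 9 = darkest paint).
SIZE = 8
target = [[(x + y) % 10 for x in range(SIZE)] for y in range(SIZE)]
canvas = [[0] * SIZE for _ in range(SIZE)]

def error(a, b):
    """Total pixel-wise difference: stands in for the camera comparing
    the painted canvas against the input image."""
    return sum(abs(a[y][x] - b[y][x]) for y in range(SIZE) for x in range(SIZE))

initial_error = error(canvas, target)
best = initial_error
for _ in range(5000):
    # Propose a random "stroke": one spot, one tone.
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    tone = random.randrange(10)
    previous = canvas[y][x]
    canvas[y][x] = tone
    # Visual feedback loop: keep the stroke only if the canvas got
    # closer to the target image; otherwise undo it. (E-David instead
    # corrects errors with further strokes, but the principle is the same.)
    new = error(canvas, target)
    if new < best:
        best = new
    else:
        canvas[y][x] = previous

print(initial_error, "->", best)  # error drops sharply over the run
```

The point of the sketch is that no step requires the machine to “understand” the picture: a propose-measure-keep loop alone drives the canvas toward the target, which is exactly the optimization framing of the E-David hypothesis.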
Generative Adversarial Networks (GAN)
Nicolas Laugero-Lasserre, well known for his peculiar artwork collection, has recently acquired an art piece created using a Generative Adversarial Network (GAN for short). Laugero-Lasserre already owns works by such authors as Shepard Fairey, Invader, Banksy, and Swoon, all of them well-known urban artists.
The art collector’s recent acquisition, Le Comte de Belamy, cost him around €10,000 ($12,000).
A GAN works by mimicking characteristics of images in a training data set (in this instance, paintings from the 14th to the 18th centuries). As the creators state: “although the works were created independently, the collective had chosen the title of the artwork, which was one of the first in a limited series”. The painting was named “Belamy” as a tribute to Ian Goodfellow, the AI researcher who, together with his team, came up with the mathematical framework underlying the models used to create Le Comte de Belamy. (Goodfellow, who, just to make you feel unaccomplished, was born in 1985, has a name that roughly translates into “Belamy” in French.) The collective plans to create more artworks and sell them at auction at a starting price of €10,000 ($12,000). The proceeds from the sales will be used to further the collective’s research into training its algorithm and delving into 3D modelling.
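The adversarial idea can be shown on a toy problem. The sketch below is a drastically simplified, hypothetical GAN, nothing like the networks behind Le Comte de Belamy: the “real data” are just numbers near 4, the generator is a single learned offset applied to noise, and the discriminator is a one-feature logistic classifier, both trained with hand-derived gradients.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# "Real" data: samples from a Gaussian centred at 4.
REAL_MEAN = 4.0

# Generator: shifts standard-normal noise by a learned offset theta.
theta = 0.0
def sample_fake():
    return random.gauss(0.0, 1.0) + theta

# Discriminator: logistic classifier D(x) = sigmoid(w*x + b),
# trained to output 1 for real samples and 0 for fakes.
w, b = 0.0, 0.0
lr, batch, steps = 0.05, 16, 3000

for _ in range(steps):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    dw = db = 0.0
    for _ in range(batch):
        xr, xf = random.gauss(REAL_MEAN, 1.0), sample_fake()
        dr = 1.0 - sigmoid(w * xr + b)   # d/ds of log(sigmoid(s)) at real
        df = -sigmoid(w * xf + b)        # d/ds of log(1 - sigmoid(s)) at fake
        dw += dr * xr + df * xf
        db += dr + df
    w += lr * dw / batch
    b += lr * db / batch
    # Generator step: move theta so the discriminator rates fakes as real,
    # i.e. gradient ascent on log D(fake) with respect to theta.
    dtheta = 0.0
    for _ in range(batch):
        xf = sample_fake()
        dtheta += (1.0 - sigmoid(w * xf + b)) * w
    theta += lr * dtheta / batch

print(round(theta, 2))  # the generator's offset drifts toward REAL_MEAN
```

Neither player ever sees an explicit target; the generator learns to land near 4 only because that is what fools the discriminator, which is the same tug-of-war that lets an image GAN absorb the look of old paintings.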
So, all these kinds of AI, which we will deal with in the future and already cooperate with in the present as so-called augmented intelligence, reflect the real state of the world rather than Skynet or Transformers: sophisticated technologies that enhance our artistic capabilities but still need human assistance to define the rules and control the flow of the work. As for art, we can think of AI as a collaboration between the human artistic soul and the technological brain: an unusual duo that works as an incredible tool for creating something that combines the chaos of creativity with math and logic.