How neural networks are used to create concept art

We all love that modern computers can take over routine, boring work for us: the tasks we perform on autopilot, tedious and endlessly repetitive.

Over the past couple of decades, processes such as procedural generation have achieved significant success in computer technology. They let us achieve a wide variety of results at minimal cost, freeing us to focus more on creativity. Procedural generation is used to create levels (virtual locations), vegetation (SpeedTree) and even textures (Substance Designer).

And in recent years a new tool has emerged: the GAN, or Generative Adversarial Network. These are neural networks that generate new data, producing images and image sequences.

These machine learning frameworks pit a pair of AIs against each other to figure out the most plausible outcome. The entire process relies on a data library loaded into the neural network.
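The adversarial loop can be illustrated with a deliberately tiny numerical sketch. This is not a real image GAN, just the core idea: a one-parameter "generator" producing samples from N(mu, 1) tries to fool a logistic-regression "discriminator" that separates its samples from real data drawn from N(3, 1). All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

REAL_MEAN = 3.0   # the "real data" distribution the generator must imitate
mu = 0.0          # generator parameter: mean of its output distribution
w, b = 0.0, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
lr_d, lr_g = 0.05, 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, size=32)
    fake = mu + rng.normal(0.0, 1.0, size=32)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    grad_b = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr_d * grad_w
    b += lr_d * grad_b

    # Generator step: shift mu so the discriminator calls its samples "real"
    d_fake = sigmoid(w * fake + b)
    mu += lr_g * np.mean((1 - d_fake) * w)

print(round(mu, 2))  # mu drifts from 0 toward the real mean of 3
```

The same tug-of-war, scaled up to convolutional networks and millions of images, is what produces the results described below.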

Early versions of GANs were primitive and produced rather dubious results. At the very least, no one would dare to say:

So let’s use this nightmare dog-knight image from DeepDream in our next big-budget video game.

Despite this seemingly bad start, big companies saw the potential of GANs. As a result, many neural networks are available on the market today at reasonable price and quality, so they can be used in production not only by large companies but also by small studios, freelance artists and hobbyists.

This opens up huge new opportunities. But there are potential dangers and pitfalls that creatives should be aware of. This article gives an overview of how you can successfully use GANs, what the future holds and how this will affect our work over the next five years.

Introduction

To start, let's take a small leap back in time. I created the image above a year and a half ago using a GAN website where color-coded areas were used to define environment properties. You could sketch out clouds, a river, a wide ridge, or even a building with four rows of windows and a large door.
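Tools like this take a label- or color-coded segmentation map as input. The sketch below builds such a map with NumPy: sky on top, a jagged mountain ridge, and a river band across the lower part. The label IDs and image size are my own illustrative assumptions, not any particular site's format.

```python
import numpy as np

# Hypothetical label IDs; real tools each define their own palette.
SKY, MOUNTAIN, WATER, GRASS = 0, 1, 2, 3

H, W = 128, 256
seg = np.full((H, W), GRASS, dtype=np.uint8)

seg[:H // 3, :] = SKY                      # top third: sky

# A jagged ridge line just below the sky.
xs = np.arange(W)
ridge = (H // 3 + 10 * np.sin(xs / 17.0)).astype(int)
for x in xs:
    seg[H // 3:ridge[x] + 15, x] = MOUNTAIN

seg[H - 25:H - 10, :] = WATER              # a horizontal river band

# Each labeled region is a "brushstroke" the GAN turns into imagery.
print(np.unique(seg))
```

Painting regions like these and letting the network fill in plausible pixels is exactly the "sketch clouds, a river, a ridge" workflow described above.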

Then, after clicking the “Create” button, the real “magic” happened. I was genuinely surprised when I first tried the service, and when I showed the results of its work to my colleagues and friends, they were stunned as well.

For the first time it felt as if these neural networks had become powerful enough to genuinely boost productivity. Instead of spending several hours, this tool made it possible to create several believable landscape concepts in a matter of minutes. Meanwhile, the NVIDIA team is working on a similar tool that accepts not only schematic input but also lets you change the overall mood and time of day in the generated image.

A couple of months later, I stumbled upon an excellent talk by Scott Eaton, who was building a deep-learning network trained on a large set of pre-loaded photographs. He used the network to create abstract human figures from simple sketches as input.

Eventually, his experiments went as far as using models of cubes and other shapes to train the network. After the AI learned to use the new library, Scott took the results, again based on rough drafts, and turned them into physical sculptures.


Fast forward two months to the beginning of a new production cycle in our studio: the wonderful pre-production stage. As is often the case in the early stages of development, we had a little more freedom to experiment, so we could go back to the starting point and find cool new ways to create wild visuals and rethink the work of our art departments. Pre-production is the best time to identify time-consuming tasks and solutions to problems, or simply to optimize each workflow a little.

With that in mind, I remembered the Ganbreeder site I had used a while ago. The resource allowed you to upload your own images and “cross” them with existing images from other creators or from its libraries. Since then, the site has been renamed Artbreeder, and it now hosts many different GANs tailored to specific tasks: creating environments, characters, faces, or more specialized categories, including anime and furry creatures.

After a couple of days of working with the tool, I stopped worrying so much about my results. Don't get me wrong, I loved the tool and quickly got used to it, but the results were often awkward or even uncanny. Once that period passed, I showed the results to the team, and we began to discuss the possibilities this crossbreeding approach offered.

Application

Let's be realistic: if you correctly configure and train your own neural network, the possibilities are almost limitless. You can create a network that is more goal-oriented, shaped not only by the library used to train it but also by how the input methods and variables influence the end result.

In practice, most of us, lacking a basement server farm or proper programming and scripting skills, are limited to the options available on such sites.

In terms of our production process, I found several areas where GANs have already brought real benefits: not only in saving time, but also in realizing new creative ideas that were hard even to imagine before.

Our project required the creation of concepts that would look alien and unexpected, which is precisely the strength of the GAN. They generate results that at first glance seem realistic, but if you let them, they can create strange, unusual shapes and designs.

Here are some possible use cases for the GAN and how I used them.

Character concepts

So far, I have mixed feelings about how GAN creates character concepts. The reason is simple: characters are at the heart of any movie, game, or other narrative product. They are carefully crafted and rely on the connection of many aspects, so they necessarily follow specific rules and are rarely randomly generated. Of course, all this is important only for the main characters with their own history and backstory.

In this case, a lot of additional work has to be done after the GAN finishes generating. Often you have to change the perspective, transform some parts and combine others. And then, after talking with designers, screenwriters or technical artists, you have to redo that work a third, fourth, or even fifth time. The process can take as much time and effort as conventional character design.

One obvious advantage is generating ideas for costumes and clothing. Here I see value in starting with something completely weird but cool-looking, and then toning the result down so it is readable and fit for purpose.

Another great use is minor characters and aliens. Often, their looks are free from strict rules or even benefit from looking strange and unrecognizable.

Portraits

There are difficulties here: sometimes the results are too good. I was genuinely unsettled when the face I had just created smiled back at me from the screen, as if to let me know it was as real as the coffee spilled on my desk. The results are completely production-ready. I have no doubt that in a couple of years much of the released concept art for realistic games like The Last of Us will appear on Artstation, and no one will be surprised or ask how such realistic images were created.

Well-trained systems can easily handle more stylized and abstract faces. This lets artists instantly see what a particular character will look like in a different style: how the hero would look with a large beard, a different hairstyle or softer features, or even with a face twice as wide as usual.


In this realistic example, StyleGAN was used to create the face. The network made it possible, in a matter of minutes, to create a 3/4 view of the face, see the character smiling, or even form an alternative image of him. For everything else, it was easier to get a fast, high-quality result with traditional overlay and blending.
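StyleGAN-style tools produce a face by decoding a latent vector, so intermediate states (a 3/4 turn, a smile) come from moving between two latent codes. A common trick is spherical interpolation (slerp), since the latents are sampled from a Gaussian and concentrate near a sphere. A minimal sketch of that idea; the 512-dimension size matches StyleGAN's convention, and everything else here is illustrative:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-6:                      # nearly parallel: fall back to lerp
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(42)
z_neutral = rng.standard_normal(512)   # e.g. the character's neutral face
z_smiling = rng.standard_normal(512)   # e.g. the same character smiling

# Five in-between latents; each would be decoded by the generator
# into a face partway between the two expressions.
frames = [slerp(z_neutral, z_smiling, t) for t in np.linspace(0, 1, 5)]
print(len(frames))
```

Decoding each of these latents through the generator is what yields the "smiling character in minutes" workflow described above.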

Environment

In addition to creating portraits, GANs perform well in shaping environments. Whether you use Artbreeder or a segmented input method such as NVIDIA GauGAN, the results are phenomenal and allow for creativity at high speed, especially when creating thumbnails of moods and the like.

After generation there is still some work to do, but nature is more forgiving when it comes to the “uncanny valley” effect, so near-perfect results can be obtained very quickly. More importantly, they will be highly diverse. This holds for realistic landscapes as well as for unusual, unknown planets.

The only drawback is integrating buildings into the landscape. Creating a distant city or skyline is a simple task, but if you want the atmosphere of downtown Los Angeles, high accuracy and detailed realism can only be achieved after training the network specifically for that purpose.

Keyframes, storyboards, scenes and illustrations

Perhaps one day the technology will reach a level where we can type “a man in a blue shirt fighting a superhero in Italy” as input and get a usable visualization as the result. Then we will get closer to handling tasks in this category. But at the moment, neural networks are only at the beginning of that path. There are already APIs that translate text into images, but to be honest, they work poorly.

The difficulty of creating a successful production scene in the entertainment industry is immense. As a creator, you need to think about composition, perspective, movement, lighting, the previous and next frames, and context. That is too many details for today's GANs. The world's computer geniuses will need at least another couple of years to achieve adequate results. Perhaps now is a good time to become a storyboard artist.

Textures

To test the network's capabilities, I started by creating a few base textures. When it comes to exploring different ideas for textures, GANs are a good option. At times, however, clarity and precision suffer from a lack of detail, making the result look artificial. But since Substance Designer already exists, it is safe to assume that work is underway to integrate specialized neural networks and solve these problems; we just have to wait.
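One quick sanity check on a generated base texture is whether it tiles seamlessly: the right edge should continue into the left and the bottom into the top. A small sketch of that check, where a synthetic array stands in for a generated texture (the pattern and sizes are my own assumptions):

```python
import numpy as np

def seam_error(tex):
    """Mean absolute difference across the wrap-around seams."""
    horiz = np.abs(tex[:, 0].astype(float) - tex[:, -1].astype(float)).mean()
    vert = np.abs(tex[0, :].astype(float) - tex[-1, :].astype(float)).mean()
    return max(horiz, vert)

# A pattern periodic in the tile size wraps almost perfectly; noise does not.
y, x = np.mgrid[0:64, 0:64]
periodic = (127 * (1 + np.sin(2 * np.pi * x / 64) * np.sin(2 * np.pi * y / 64))).astype(np.uint8)
noise = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.uint8)

print(seam_error(periodic) < seam_error(noise))  # the periodic texture has the smaller seam
```

A check like this is cheap to run over a batch of GAN outputs before spending time cleaning any of them up by hand.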

Design

Design is another use of GANs that I fell in love with. The more freedom in the input, the better: the process throws you out of your comfort zone and pushes you beyond your habits. Organic design works much better than hard-surface, technical design. GANs still struggle with clean edges and boundaries, so the smaller the training library, the more precise and technically correct the results will be. However, you can lose some of the surprise factor, which is sometimes valuable.

Abstraction and art

The modern art world will need to rethink its view of where art begins and what the viewer's role is, as opposed to the creative process of the creator.

You can easily create something that looks like a painting, and no one will doubt that the printed version hanging in your living room is a long-forgotten composition by Kandinsky or a modern abstract painting of a bouquet of flowers.

Art lives through its context and meaning, often through the artist and what he or she stands for and has dedicated their life to. If you use GANs to support your creative process, treat them as a great tool for reaching results that expand and amplify your own ideas. In that sense, it is dangerous to rely on the computer's judgment and to consume or use its output as-is, even if you provided the initial input.

Problems

It's important to emphasize that GANs do not have the magic “Create Great Art” button people have dreamed of for years. You still need someone who knows what they want and what to do with the result after generation. Of course, even an untrained person can now create cool images, and much faster; this has narrowed the gap between an artist with 10 years of experience and someone who has just started a creative career. However, you still need to know the basics of design: composition, color and light, proportions, and so on.

Another important factor is post-generation work. Once an image is headed for production, it must be prepared for that stage, and here the quality depends solely on your skills and experience; there is no simple, convenient shortcut, only years of practice. It is also important to consider what animation requirements the design must meet, and whether the result is consistent with the product's overall goals and visual guidelines.

What to do

When you start using GANs in your work, it is easy to get carried away by the effects they produce. It is therefore important to be clear about your goals in the process: think about what you want to achieve, not which tools you use. Think about your inputs and your vision, then start generating amazing results.

As with globalization, there is a danger that design will become too simplistic and uniform: if many people feed the same data into neural networks, the results will be banal and homogeneous.

However, there is no doubt that in the future we will work more and more with AI support.
