Generative AI has the potential to revolutionize every creative industry, and gaming is no exception. Game worlds are becoming richer, more immersive, and in many ways closer to simulations of our “real” world. This means the cost and team size required to build them have also skyrocketed.
Generative AI tools, such as large language models like GPT-4 and image generation models like DALL-E 2, can ease the burden on artists and designers who must create thousands of unique, nuanced assets: the locations, objects, characters, enemies, and so on that make up a game world.
Here’s an example. Take a walk through a forest in even the most recent video games and, if you pay attention, you might notice that there are only a few individual tree models. After a while, the same trees begin to repeat, appearing in different locations.
A game’s job is to entertain and immerse the player in the action and story so that these technical limitations go unnoticed. When the player does notice them, the effect is jarring and can quickly break the suspension of disbelief the game has worked to create.
With generative AI, that forest could be filled with thousands of completely unique trees, and with the same diversity of creatures and creepy-crawlies as a real forest.
While this is a likely scenario for game design in the near future, here are some examples of generative AI in use today.
Revolution Software is a British game development company that had a huge hit with its Broken Sword series of adventure games in the 1990s, before the era of multiplayer online games and photorealistic 3D graphics.
Since then, many studios of that era have either expanded into multimedia production powerhouses to cope with the rising cost and complexity of game design, gone out of business trying, or been absorbed by studios that did grow. Revolution took a different path.
According to Polygon, Revolution has maintained a small team structure and supported itself primarily with sequels and reissues of the Broken Sword series.
When planning to update the first game in the series to run on the latest generation of consoles and PCs, the studio ran into a problem. All of the original graphics were created for the much lower-resolution displays in use at the time. Because they are hand-drawn artworks, recreating them all at the resolutions today’s gamers expect on Ultra HD displays would be cost-prohibitive.
Studio founder Charles Cecil connected with generative AI researchers at the University of York, who took samples of artwork designed for the modern update and used them to train generative adversarial networks (GANs).
With help fine-tuning the model from Nvidia engineers, the studio now has a generative AI model that can create a single piece of in-game artwork, such as an object or character, in 5 to 10 minutes.
A human artist then retouches the AI-generated art, focusing on the hands and face, which, as many have pointed out, are the areas where AI-generated images of people most often go wrong.
This made the studio’s plans to bring popular games to a new generation of modern gamers economically viable.
Cecil says: “The ability to use AI … is completely game-changing … [without it] we couldn’t afford it.
“In fact, incredibly talented character artists and animators can take the original and create something truly special without the hassle of redrawing everything.”
Automating the mundane aspects of creativity
As in other industries, the most exciting applications of generative AI may seem somewhat mundane given the hype and appeal that has been built around it.
But the real magic isn’t in being able to create thousands of very similar images at high speed. Rather, it’s about what artists and designers can do with their time once they’re freed from the “grunt work.”
There are many other ways in which we can expect generative AI to disrupt the gaming industry.
In the near future, we may be able to meet and interact with in-game characters that behave and speak far more naturally than we’re used to today. Nvidia’s Avatar Cloud Engine (ACE) aims to enable game designers to build characters with AI-driven generative personalities into their creations.
Generative AI can also be used to create dynamic storylines. Stories can be more flexible and tailored to individual player choices, creating a more personalized experience than human writers alone could deliver. For example, ChatGPT can be instructed to run a game in which the AI generates an ongoing storyline from just a simple prompt.
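The idea can be sketched in code. The following is a minimal, hypothetical sketch, not an actual ChatGPT integration: the model call is abstracted behind a `generate` callable (which in a real game would wrap a chat-completion API), and all function names and prompt text are illustrative.

```python
# Hypothetical sketch of prompt-driven dynamic storytelling.
# The LLM call is abstracted as `generate`; in practice this would be
# an API call to a model such as GPT-4.

def build_story_prompt(premise, history, player_choice):
    """Assemble a prompt asking the model to continue the story."""
    lines = [
        "You are the narrator of an interactive fantasy game.",
        f"Premise: {premise}",
        "Story so far:",
        *history,
        f"The player chooses to: {player_choice}",
        "Continue the story in 2-3 sentences, then offer two new choices.",
    ]
    return "\n".join(lines)


def advance_story(premise, history, player_choice, generate):
    """Append the player's choice and the model's continuation to the story."""
    prompt = build_story_prompt(premise, history, player_choice)
    continuation = generate(prompt)  # e.g. a chat-completion API call
    history.append(f"Player: {player_choice}")
    history.append(continuation)
    return history
```

During development, `generate` can be a stub that returns canned text, which makes the branching logic testable without any API calls.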
It can also be used for automated testing: create an army of simulated players, each playing the game in a different way to match its assigned style and personality. Game developers can then quickly determine which play styles might make the experience less satisfying and adapt their products accordingly.
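To make this concrete, here is a toy, entirely illustrative sketch: each simulated “player” persona plays a trivially simple encounter many times, and the resulting win rates hint at which play styles the design punishes. Real AI playtesting agents would of course be far more sophisticated; all names and numbers here are assumptions for the example.

```python
import random

# Toy sketch: simulated "player" agents with different styles stress-test
# a simple combat encounter. All personas and numbers are illustrative.

PERSONAS = {
    "cautious":   {"attack_chance": 0.3},
    "aggressive": {"attack_chance": 0.9},
    "balanced":   {"attack_chance": 0.6},
}


def simulate_run(persona, rng, enemy_hp=5, player_hp=5):
    """Play one encounter; return True if the simulated player wins."""
    style = PERSONAS[persona]
    while enemy_hp > 0 and player_hp > 0:
        if rng.random() < style["attack_chance"]:
            enemy_hp -= 1                       # attack: damage the enemy
        else:
            player_hp = min(player_hp + 1, 5)   # defend: recover a little
        if enemy_hp > 0:
            player_hp -= 1                      # surviving enemy strikes back
    return player_hp > 0


def win_rates(trials=1000, seed=42):
    """Estimate each persona's win rate across many simulated runs."""
    rng = random.Random(seed)
    return {
        p: sum(simulate_run(p, rng) for _ in range(trials)) / trials
        for p in PERSONAS
    }
```

Even this crude simulation surfaces balance problems: if one persona’s win rate collapses, the design is punishing that play style, and a designer can tune the encounter before a single human tester ever sees it.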
It can also be used for dynamically generated dialogue, allowing characters to keep delivering convincing lines even when the player forces them off-script.
What does this mean for game design and game designers?
It’s exciting to imagine small independent studios harnessing the power of generative AI to create games that would otherwise require much larger teams and huge budgets.
At the same time, the industry must be mindful of managing the impact of these emerging technologies on human work. You can’t blame small studios for wanting to use AI to create games they otherwise couldn’t afford to make. But many might argue that large studios have a duty to ensure that the creators they employ are not made redundant by machines.
However, in my personal opinion, this goes beyond a duty of care: there are good business reasons to keep humans in the loop.
Some of these reasons are intangible but hard to deny. For example, AI cannot replicate the “spark” of human ingenuity and creative nuance. And AI lacks emotional intelligence, so it is less likely to create work that resonates with us on an emotional level.
Others are very practical. We’ve all seen that generative AI can produce output that deviates significantly from the user’s intentions, known as hallucinations. It can also produce output that is hateful, discriminatory, or otherwise harmful. Without human oversight and expertise to mitigate this in creative work, the consequences could be dire for companies that commit heavily to AI.