If there are three “buzzword” topics that have generated a great deal of hype over the past few years, they’re digital twins, generative AI, and the Metaverse.
But one area that’s arguably generating more light than heat is games and 3D design, with companies like Unity Technologies and Epic Games pulling the strings that connect these hot tech topics.
Epic Games’ and Unity’s platforms are best known for powering many of the most popular video games in history, but they’re also used for immersive 3D design, virtual reality environments, and all kinds of games for the industrial and leisure markets. They are also widely used to create simulations.
To discuss this convergence and its potential for democratizing access to real-time 3D design, I recently had the opportunity to speak with Marc Whitten, SVP and GM Create at Unity.
We talked about some of the ways artificial intelligence (AI), especially emerging classes of generative AI applications like ChatGPT and Stable Diffusion, will soon make it easier for anyone to create digital twin simulations and interactive 3D experiences and environments. This could revolutionize many industries that are already enthusiastically adopting these technologies, such as gaming, automotive, manufacturing, and healthcare.
But the possibilities don’t end there. Besides democratizing the creative process, the benefits will undoubtedly be experienced “at runtime” as well. Users will be able to use natural language to interact with simulations, digital twins, and immersive environments to extract the information and insights they actually need, in real time.
He told me, “When you connect [3D simulated environments] with natural language and AI-based tools, you can literally ask, ‘Hey, computer, what’s going on on the manufacturing floor?’ or ‘What are the top three issues I need to pay attention to right now?’”
In this theoretical scenario, supervisors monitor facilities and their operations via real-time 3D graphical representations. These could be factories, retail stores, or sports and entertainment venues such as theme parks and stadiums. Then, whether watching on-screen, through virtual reality (VR) headsets, or using augmented reality (AR) glasses that overlay images on top of real-world views, they can watch predictions unfold before their very eyes, in real time.
Could this be a truer vision of how the much-talked-about Metaverse will unfold than the 3D world of Meta’s Horizons or the whimsical 3D environments of Decentraland?
“The term has been exaggerated to the point of being almost meaningless,” Whitten told me.
“It started being used to cover everything … What people really want is to think about next-generation experiences where 3D plays a role, the ability for everyone to interact, even if they are … That’s why I like terms like ‘digital twin’, which [better describe] what a particular company is trying to do to extract value from the technology.”
More and more companies and industries are turning to the gaming world to help achieve this vision, because that’s where the expertise is.
In the 90s, as 3D graphics technology evolved, video games began to move away from the two-dimensional, Pac-Man-style bitmap images that had characterized their first two decades.
Since then, game developers have used tools like Unity and Unreal Engine to push the boundaries of creating realistic, simulated worlds. Today, this expertise is being leveraged elsewhere: the company responsible for building the new Vancouver Airport created a realistic 3D simulation before a single construction tool was used, and automakers such as Daimler have adopted the technology for the configurator apps buyers use to select options for their new cars.
Beyond these existing applications, Whitten looks to a future where every industry transforms static, flat data into real-time 3D models, fundamentally reshaping value chains.
His view is that companies will mature along a path: they start with a “representative” digital twin, where the 3D environment simply depicts the environment, processes, and systems being modeled; move on to a “connected” digital twin; and then to a “live data” twin. This last stage represents the level of maturity at which the simulation is informed by data collected from the real world by sensors and scanners, updating the model as events happen.
He calls the next and final stage of maturity (in this model) the “predictive twin”. At this stage, simulations and data modeling are sophisticated enough to let operators effectively peer into the future.
He said, “That’s part of the beauty of connecting these things through a real-time engine. Real-time 3D means it’s physical, so you can turn the clock forward and see these things. We can say, let’s simulate the next period, taking the conditions into account: in addition to the actual data coming in, we also consider the simulated data going forward.”
“Does this show that we need to take action? Let this kind of information flow through the enterprise so that everyone can make better decisions, faster.”
Back in the gaming world, Whitten sees applications of generative AI and natural language technology that promise richer, more immersive experiences for future players.
Once again, he sees benefits at both stages, starting with creation. Generative AI lets designers simply describe what they want to create instead of painstakingly building everything with 3D modeling tools, reducing the workload involved in creating content and environments.
At runtime, gamers will interact with seemingly realistic non-player characters (NPCs) that have intelligence and conversational abilities. In other words, the medieval towns and space stations of game environments no longer need to be patrolled by identical, scripted guards.
However, to realize this exciting vision, developers need to find economically viable ways to incorporate AI processing into the game loop.
“A game could have 100 million people playing it, so it would be too costly if everyone needed access to the cloud to run the AI and figure out what the NPCs were trying to say.”
But this is a problem Unity is working to solve, and edge computing could form part of the solution, with small-scale neural networks running on the player’s own device. These may not match the processing power of GPT-4, but they could be enough to make today’s NPCs look like the wooden toys our grandparents played with.
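As a purely illustrative sketch of the cost trade-off Whitten describes (the class names, complexity heuristic, and budget logic below are my assumptions, not Unity’s actual architecture), a game could route NPC dialogue so that most requests stay on a small on-device model, reserving expensive cloud calls for rare, complex ones:

```python
from dataclasses import dataclass


@dataclass
class DialogueRequest:
    npc_id: str
    player_line: str

    def complexity(self) -> float:
        # Crude proxy: longer, more open-ended player input is more
        # likely to need the larger cloud-hosted model.
        return min(1.0, len(self.player_line.split()) / 30)


def local_model_reply(req: DialogueRequest) -> str:
    # Placeholder for a small on-device network (e.g. a distilled
    # language model running at the edge); free to call.
    return f"[{req.npc_id} replies locally]"


def cloud_model_reply(req: DialogueRequest) -> str:
    # Placeholder for a large hosted model; every call costs money.
    return f"[{req.npc_id} replies via cloud]"


def route_dialogue(req: DialogueRequest, cloud_budget: int,
                   threshold: float = 0.8) -> tuple[str, int]:
    """Return (reply, remaining_budget), using the cloud only for
    high-complexity requests while the per-session budget lasts."""
    if req.complexity() >= threshold and cloud_budget > 0:
        return cloud_model_reply(req), cloud_budget - 1
    return local_model_reply(req), cloud_budget
```

With a scheme like this, a short “Hello” stays on-device at zero marginal cost, while only unusually elaborate player input (subject to a budget cap) ever reaches the cloud, which is one way the per-player economics could be made to work at 100-million-player scale.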
Overall, Whitten said he is very excited about the impact these three cutting-edge tech trends will have on not just gaming, but the industry and economy as a whole.
He told me, “Unity’s mission is built on the belief that the world is a better place with more creators in it … we can understand things we haven’t had the chance to before, and inject them, so we can make those things happen. Things will start to feel more alive and real with AI … it will do wonders.”
You can click here to watch my full conversation with Unity’s SVP and GM Create, Marc Whitten, in which he delves deeper into the convergence of generative AI, digital twins, and the metaverse, and the impact he predicts it will have on many industries.