
Siemens and Nvidia want to make the virtual seem a lot more realistic. But is it all just eye candy?

On the face of it, the decision by Siemens and Nvidia to forge a link between their tools seems simple enough. So simple that executives from both companies were at pains to point out that it really isn’t that simple.

The core of the agreement, presented at a joint event on Wednesday, will be to ensure Siemens tools that are used to design everything from chips to factories will ultimately be able to feed data to Nvidia’s virtual-world building software Omniverse to create more photorealistic visualisations. 

“The digital twin is physics-based. It doesn’t just look like the real thing, it behaves like the real thing,” claimed Siemens CEO Roland Busch. “This is not about animation but simulation. If you don’t mimic the real world accurately, you don’t get the benefit out of it. The digital twin has to be comprehensive. If you simulate something on a digital twin and you figure out you want to change something in the real world, you better get it right.”

Jensen Huang, Nvidia co-founder and CEO, pointed to the need for the digital twin and the physical systems it represents to match: “You need to believe they are the same. That’s why it’s so profoundly different to a video game.”

Busch used the example of robots in a factory to describe how their tie-up is meant to work. “Imagine your factory in China is slowing down: it produces fewer parts every day – nothing bad but it’s all adding up. The team at the factory has no idea why it’s happening.”

He described bringing a range of engineers at various locations, including the manufacturer of the robots on the production line, into a VR environment. “They immerse themselves in the digital twin of the plant. It mirrors exactly what is happening in the real plant and it is not a still photograph. It is in real time, down to the physical behaviour of the robots. The team travels back in time in the metaverse to when the output was strong to see what has changed since then. They realise that one robot on the feeder line has missed its latest software update and it’s out of sync. The team updates the software in the digital twin and the robot immediately speeds up and works in sync. Now the team is confident enough to update the software on the real robot.”

In this synthetic example, there is one nagging question: would a problem like this actually need a VR environment to diagnose? And if not, why is it a good example of how this tie-up might work?

It’s easy to argue that a VR-type environment makes systems easier to debug because it makes them look more realistic. But visual realism is not the same as tangibility: interacting with the system remains difficult because, despite all the money poured into headset development, sight is the only sense the metaverse has seriously catered for. In this case, being able to manipulate the robots hardly matters: you wouldn’t want to arm-wrestle an industrial robot under most circumstances. What matters is being able to make sense of what the underlying data are telling you about the system. Someone might conceivably watch how the robots move and decide “you know, that’s the old software at work”. But it might equally be that a more abstract simulation of the production flow would show one robot repeatedly missing deadlines or waiting ages to get started, prompting the team to backtrack to why that might be the case.
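To make that point concrete, here is a minimal sketch, using entirely invented cycle-time numbers, of how a lagging robot could be spotted from plain production logs with no rendering at all:

```python
# Minimal sketch (hypothetical data): flagging a lagging robot from plain
# cycle-time logs rather than a photorealistic rendering.
from statistics import mean, stdev

# Cycle times in seconds per robot, aggregated from the line's logs.
cycle_times = {
    "robot_a": [4.1, 4.0, 4.2, 4.1],
    "robot_b": [4.0, 4.1, 4.0, 4.2],
    "robot_c": [5.3, 5.6, 5.2, 5.5],  # the out-of-sync machine
}

all_samples = [t for times in cycle_times.values() for t in times]
line_mean = mean(all_samples)
line_sd = stdev(all_samples)

# Flag any robot whose average cycle sits well above the line average.
for robot, times in cycle_times.items():
    if mean(times) > line_mean + line_sd:
        print(f"{robot}: mean cycle {mean(times):.1f}s "
              f"vs line average {line_mean:.1f}s; investigate")
```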

In reality, such visualisations may simply work better as sales tools. Tony Hemmelgarn, CEO of Siemens Digital Industries Software, used the example of a yacht builder. “With our joint solution, we can enable our customer to do a walkthrough of this yacht on a photorealistic version of the digital twin… what this means for our customers is a quicker approval for them when they go to their customers and talk about what they are trying to design for them.”

It need not just be cosmetics. Dirk Didascalou, CTO of Siemens Digital Industries Software, pointed to the example of machine-tool maker Heller. Siemens has already worked with Heller on projects to optimise the flow of parts to robots based on where the tools needed to work on them sit in their magazines. Heller engineers had found that long waits can develop if the tools are in the wrong order, because of the time it takes to shuttle them between positions in the magazine. In the setup they devised, the machine tool calls on a nearby computer to work out the best ordering based on which parts are ready to be worked on.
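As a rough illustration of the kind of ordering problem involved (the tool names, magazine size and brute-force approach below are ours, not Heller’s), the task is to place tools in a rotary magazine so the upcoming job sequence needs the least total rotation:

```python
# Hedged sketch of a rotary-magazine ordering problem; all names and numbers
# are illustrative, not Heller's actual algorithm.
from itertools import permutations

MAGAZINE_SLOTS = 6
job_sequence = ["drill", "face_mill", "drill", "tap", "face_mill", "chamfer"]
tools = sorted(set(job_sequence))

def rotation_steps(a: int, b: int) -> int:
    """Shortest rotation between two slots on a circular magazine."""
    d = abs(a - b) % MAGAZINE_SLOTS
    return min(d, MAGAZINE_SLOTS - d)

def total_cost(layout: dict[str, int]) -> int:
    """Total rotation needed to serve the job sequence in order."""
    cost, pos = 0, 0  # assume the spindle starts facing slot 0
    for tool in job_sequence:
        cost += rotation_steps(pos, layout[tool])
        pos = layout[tool]
    return cost

# Brute force is fine at this toy scale; real magazines need a heuristic.
best = min(
    (dict(zip(tools, slots))
     for slots in permutations(range(MAGAZINE_SLOTS), len(tools))),
    key=total_cost,
)
print(best, "costs", total_cost(best), "rotation steps")
```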

According to Didascalou, Heller is taking a further step in using software to change how its machine tools behave with what he called a “hardware as a service” strategy. “Now they are going even further with a new business model where customers can buy or lease a machine at a much lower price that has the basic functionality and then when they need it or if they just want to try it, they can upgrade to the premium features at the press of a button,” Didascalou explained.
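In software terms, “features at the press of a button” usually boils down to an entitlement check that gates premium functions. The sketch below is purely illustrative and says nothing about how Heller or Siemens actually implement it:

```python
# Illustrative sketch only: an entitlement check gating premium machine
# functions. Feature names and the licensing flow are hypothetical.
from dataclasses import dataclass, field

@dataclass
class MachineLicence:
    entitlements: set[str] = field(default_factory=lambda: {"basic_milling"})

    def unlock(self, feature: str) -> None:
        # A real deployment would verify a signed licence token here.
        self.entitlements.add(feature)

    def allows(self, feature: str) -> bool:
        return feature in self.entitlements

licence = MachineLicence()
print(licence.allows("high_speed_spindle"))  # False: basic tier
licence.unlock("high_speed_spindle")         # customer buys the upgrade
print(licence.allows("high_speed_spindle"))  # True: premium enabled
```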

Higher-fidelity digital twins will likely help this kind of process by making it possible to build a computer model of the client’s factory and use it to evaluate how different upgrades and changes will affect throughput. For factory operators taking more direct control of their systems, the digital twin provides the ability to test software updates and changes to see whether they will cause problems down the line, though there is the question of how much photorealism this requires. One could conceive of a model so faithful that simulations of different temperatures and curing times for coatings yield clear visual differences in the virtual realm. But how much work would it take to model and render the physics of all those processes accurately, compared with more targeted experiments that focus on the core data and more abstract visualisations?

The argument over how important photorealistic rendering is to analysis is one that has run for years in medicine, particularly radiology, where experiments have shown that, possibly because of the way people are trained, sticking with 2D representations of tissues leads to better diagnoses than fancier 3D models. In forensics, experiments point to 3D models being more useful for explaining outcomes to non-experts than to the experts themselves, who find it more efficient to stick to visualisations that show them what they are looking for. Much like the yacht walkthrough in Hemmelgarn’s example, photorealism is good for getting input from others but might not be all that great for the core task.

However, the deal is not all eye candy. There are two underlying themes to the Siemens-Nvidia tie-up that are less obvious but potentially more important. One is the stated desire by both companies to have their software tools talk to each other more efficiently, and for the industry to begin to coalesce on standards for bringing data from design tools and abstract simulations into VR simulations. Nvidia is keen on the Universal Scene Description (USD) format originally devised by Pixar for assembling and exchanging the 3D scenes in its animations; Huang regards USD as potentially being the HTML of the metaverse. The two CEOs also stressed the need for communication to flow all the way from the physical world to the metaverse to the design tools, so that when someone records a change in one it is automatically reflected in the others, rather than relying on engineers to make manual changes and risk the models diverging.
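For readers curious what USD looks like in practice, Pixar’s open-source Python bindings (published on PyPI as the usd-core package) can build a scene file in a few lines. The prim path and the firmware attribute below are our own invention, meant only to show how non-visual twin state can travel alongside the geometry:

```python
from pxr import Sdf, Usd, UsdGeom

# Create a new USD stage; the scene hierarchy is invented for illustration.
stage = Usd.Stage.CreateNew("factory_cell.usda")
robot = UsdGeom.Xform.Define(stage, "/Factory/FeederRobot")

# Attach non-visual digital-twin state (a hypothetical firmware attribute)
# to the prim, so design tools and simulators can share it.
firmware = robot.GetPrim().CreateAttribute(
    "firmwareVersion", Sdf.ValueTypeNames.String)
firmware.Set("2.4.1")

stage.GetRootLayer().Save()
```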

“The digital twin will run forever concurrently with the plant. As it runs concurrently, it can predict properly what is happening physically and then in the future when you want to update it, you have confidence that the virtual model is consistent with the physical model,” said Huang.

How consistent is another matter. There is potentially a crisis of bureaucracy here. BMW board member Milan Nedeljković pointed out: “The digital twin is not the challenge, the challenge is to link the digital twin into the existing systems.” 

When Busch used the example of the lost software update in the Chinese factory, I thought a more likely scenario would be a subtle change in practices, possibly something as simple as a new worker accidentally leaving a trolley in the wrong place and forcing an automated guided vehicle to find a new route. Are factories going to be so closely monitored that this would be reflected in the virtual world? With cameras everywhere, it’s possible, but that introduces new questions not just about the legality of workplace surveillance but its influence on morale.

A second outcome is likely to be a lot more benign. One big advantage of highly realistic virtual worlds is that they make the training of AI a lot more effective. In the field of automated driving, scenarios based on synthetic data rendered in virtual worlds are already being used to give neural networks more data than can be obtained even in millions of miles of real-world driving. Huang pointed to factory automation as another situation where synthetic data can improve the training of robots expected to work closely with humans.
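The usual technique behind such synthetic data is domain randomisation: render the same scene under many randomised conditions so the network learns what matters and ignores what doesn’t. A toy sketch, with made-up parameter ranges:

```python
# Sketch of domain randomisation for synthetic training scenes; the
# parameter names and ranges are purely illustrative.
import random

def random_scene() -> dict:
    """Sample one synthetic scene configuration for a factory-floor robot."""
    return {
        "light_intensity": random.uniform(200, 2000),  # lux
        "camera_height_m": random.uniform(2.0, 4.0),
        "worker_count": random.randint(0, 5),
        "trolley_positions": [
            (random.uniform(0, 30), random.uniform(0, 20))
            for _ in range(random.randint(0, 3))
        ],
    }

# A renderer such as Omniverse would turn each configuration into labelled
# images; here we just enumerate the variations a network would train on.
for scene in (random_scene() for _ in range(3)):
    print(scene)
```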

Siemens has its own PAVE360 environment, which models the electronics and sensors of vehicles but is not yet fully immersive. With the Omniverse tie-up, that is a possibility for the future. Similarly, virtual-world training should make it easier to teach industrial robots to work safely alongside humans rather than relying on safety cages: the virtual world makes it possible to design scenarios that simply cannot be tested safely in the physical world, to ensure the robots avoid them.

Though it’s a deal that highlights surface changes, there is potentially a lot behind it if the two companies are able to achieve what they hope. If not, it will be yet another example of VR hope not getting remotely close to reality.

