Nvidia brings highly realistic, talking AI avatars to its Omniverse design tool

2021-11-10

Nvidia Corp. is expanding its hyperrealistic graphics collaboration platform and ecosystem, Nvidia Omniverse, with a new tool for generating interactive artificial intelligence avatars.

The company also announced a new synthetic data generation engine that can produce physically simulated synthetic data for training deep neural networks. The new features, announced today at Nvidia's GTC 2021 conference, are designed to extend the usefulness of the Omniverse platform and support the creation of new AI models.

Nvidia Omniverse is a real-time collaboration tool designed to bring together graphic artists, designers and engineers to create realistic and complex simulations for various purposes. It's used by industry professionals in fields such as aerospace, architecture and construction, media and entertainment, manufacturing and gaming.

The platform has been in beta for several years, and today it is finally officially launched.

Among the most eye-catching of the new capabilities is Nvidia Omniverse Avatar, which is said to combine Nvidia's speech AI, computer vision, natural language understanding, recommendation engine and simulation technologies. Developers can use it to create ultrarealistic, interactive characters with ray-traced 3D graphics that can see, speak, converse on a wide range of topics and understand what people say to them.
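To make that combination of technologies a little more concrete, here is a minimal, purely illustrative Python sketch of how such a conversational avatar loop might compose perception, language understanding and speech output. None of these class or function names come from Nvidia's products; they are hypothetical stand-ins, and a real system would replace each stub with actual models.

```python
# Hypothetical sketch of an interactive avatar pipeline: vision, speech
# recognition, dialogue and speech output composed into one turn.
# All classes below are placeholder stubs, not Nvidia APIs.
from dataclasses import dataclass


@dataclass
class Frame:
    """A camera frame from the kiosk or dashboard (placeholder)."""
    pixels: bytes


class SpeechRecognizer:
    def transcribe(self, audio: bytes) -> str:
        # A real system would run an ASR model here.
        return "I'd like a veggie burger and fries, please."


class Vision:
    def describe(self, frame: Frame) -> str:
        # A real system would detect who is speaking and where they look.
        return "two customers at the counter"


class DialogueEngine:
    def respond(self, utterance: str, scene: str) -> str:
        # A real system would use NLU plus a recommendation engine.
        if "burger" in utterance:
            return "One veggie burger and fries. Anything to drink?"
        return "Sorry, could you repeat that?"


class AvatarRenderer:
    def speak(self, text: str) -> None:
        # A real system would drive TTS and a ray-traced animated face.
        print(f"[avatar says] {text}")


def avatar_turn(audio: bytes, frame: Frame) -> None:
    """One conversational turn: perceive, understand, respond."""
    asr, vision = SpeechRecognizer(), Vision()
    dialogue, renderer = DialogueEngine(), AvatarRenderer()
    reply = dialogue.respond(asr.transcribe(audio), vision.describe(frame))
    renderer.speak(reply)


avatar_turn(audio=b"", frame=Frame(pixels=b""))
```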

Nvidia said it sees Omniverse Avatar as a platform for creating a new generation of AI assistants that can be customized for almost any industry. For instance, users could build avatars that take orders at restaurants, help people with banking transactions, make appointments at hospitals, dentists and even salons, and handle other tasks.

"These virtual worlds will contribute to the next era of innovation," Richard Kerris, vice president of Omniverse Platform, told reporters in a briefing.

Nvidia Chief Executive Jensen Huang highlighted the power of Omniverse Avatar during his GTC keynote, showing a video in which two of his colleagues held a real-time conversation with a toy replica of himself (pictured) on topics including biology and climate science. A second, more practical demonstration showed a customer service avatar at a restaurant that was able to see, talk with and understand two human customers as they ordered veggie burgers, french fries and drinks.

A third example showed how Omniverse Avatar can also be used with Nvidia's DRIVE Concierge AI platform. There, a digital assistant pops up on the car's dashboard screen to help the driver choose the most appropriate driving mode to reach the destination on time, while handling a separate request to set a reminder once the vehicle's range drops below 100 miles.

"The dawn of smart virtual assistants has arrived," Huang said. "Omniverse Avatar combines Nvidia's basic graphics, simulation, and AI technology to create some of the most complex real-time applications ever. The use cases for collaborative robots and virtual assistants are incredible and far-reaching."

Omniverse Avatar will no doubt make it easier to build more capable chatbots, but developers of other kinds of AI models often struggle because they lack the data needed to train them. Nvidia hopes to solve that problem with the new Omniverse Replicator, a tool that generates synthetic datasets that can then be used to train neural networks to perform a range of tasks.

At launch, Omniverse Replicator provides two applications for generating synthetic data, which is annotated information created by computer simulations or algorithms as an alternative to real-world data that may be in short supply. They are a replicator for Nvidia DRIVE Sim (shown below), a virtual world for building digital twins of self-driving cars, and another for Nvidia Isaac Sim, a virtual world for creating digital twins of manipulation robots.

Nvidia said developers can use the replicators to bootstrap new AI models, fill gaps in real-world data and generate ground-truth labels in ways humans cannot. The data generated in these virtual worlds can cover a broad range of scenarios, including rare or dangerous conditions that would be difficult or unsafe to replicate in the real world.
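To illustrate the idea of synthetic data in general terms, here is a minimal, generic Python sketch: a simulated scene produces both the images and their exact ground-truth labels, so no human annotation is needed. This does not use the actual Omniverse Replicator API; all file names, parameters and the toy "scene" are hypothetical.

```python
# Generic illustration of synthetic data generation: render simple scenes
# with known object positions, so ground-truth labels come "for free"
# from the simulation rather than from human annotators.
import json
import random
from pathlib import Path

import numpy as np
from PIL import Image, ImageDraw

OUT_DIR = Path("synthetic_dataset")  # hypothetical output location
OUT_DIR.mkdir(exist_ok=True)


def render_scene(image_size=256, max_objects=3):
    """Draw randomly placed rectangles and return the image plus exact labels."""
    img = Image.new("RGB", (image_size, image_size), color=(30, 30, 30))
    draw = ImageDraw.Draw(img)
    labels = []
    for _ in range(random.randint(1, max_objects)):
        w, h = random.randint(20, 80), random.randint(20, 80)
        x, y = random.randint(0, image_size - w), random.randint(0, image_size - h)
        color = tuple(random.randint(60, 255) for _ in range(3))
        draw.rectangle([x, y, x + w, y + h], fill=color)
        # The label is exact because the simulation placed the object itself.
        labels.append({"bbox": [x, y, w, h], "class": "box"})
    return np.asarray(img), labels


# Generate a small annotated dataset: images plus JSON label files,
# which could then feed a neural network training pipeline.
for i in range(100):
    image, labels = render_scene()
    Image.fromarray(image).save(OUT_DIR / f"scene_{i:04d}.png")
    (OUT_DIR / f"scene_{i:04d}.json").write_text(json.dumps(labels))
```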

Nvidia believes that synthetic data will enable autonomous vehicles and robots to master skills that can be applied to the physical world.

Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia, said synthetic data is critical to the future of artificial intelligence. "Omniverse Replicator enables us to create diverse, massive and accurate datasets to build high-quality, high-performing and safe AI," he said. "Although we have built two domain-specific data generation engines ourselves, we can imagine many companies using Omniverse Replicator to build their own."

Today's announcements aren't limited to avatars and synthetic data. Nvidia also unveiled new features for the platform itself, including integration with Nvidia CloudXR, its enterprise-grade streaming framework for extended reality. The idea, Nvidia said, is to let users interactively stream their Omniverse experiences to mobile augmented reality and virtual reality devices.

Elsewhere, Omniverse VR now supports full-image, real-time ray-traced VR, meaning developers can build VR-capable tools on the platform. Meanwhile, Omniverse Remote adds AR features and virtual cameras, letting designers view their fully ray-traced assets on iOS and Android devices.

Other new features include Omniverse Farm, which lets design teams harness multiple workstations or servers together for tasks such as rendering, synthetic data generation and file conversion.

Finally, there's something for non-designers too. Nvidia said a new application called Omniverse Showroom, available in the Omniverse open beta, lets nontechnical users play with Omniverse tech demos that show off the platform's real-time physics and rendering technology.

With reporting from Robert Hof
