Nvidia makes big strides in inverse rendering to turn photos into 3D objects

Nvidia Corp. today showed off the impressive progress it has made in “inverse rendering,” a technique that uses artificial intelligence to reconstruct a series of still photos into a 3D model of an object or scene.

Nvidia’s latest method of performing inverse rendering, known as Nvidia 3D MoMa, has potential uses for video game developers, architects, designers and concept artists. It allows them to quickly and easily import an object reconstructed from photos into a graphics engine and modify it in a variety of ways, such as changing the material and texture, adding lighting effects or altering the scale.

3D MoMa works by taking photographs of an object or scene and turning them into a triangular mesh, complete with textured materials, that can be dropped into game engines, 3D modeling programs and movie renderers. Nvidia explained that advances in neural radiance fields, combined with the processing power of its Tensor Core graphics processing units, make it possible to generate these triangular mesh models in an hour or less.
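The article doesn’t detail 3D MoMa’s internals, but the general step of turning a learned volumetric field into a triangle mesh that a game engine can consume can be sketched in a few lines. The example below is only a toy illustration, assuming an analytic density function stands in for the learned field, and uses scikit-image’s marching cubes to extract the surface:

```python
# Minimal sketch (not Nvidia's code): convert a volumetric density field into
# a triangle mesh, the general step a radiance-field-style reconstruction
# needs before the result can be used as a mesh asset.
import numpy as np
from skimage import measure  # pip install scikit-image

# Stand-in for a learned density field: here, just an analytic sphere.
def density(x, y, z):
    return 1.0 - np.sqrt(x**2 + y**2 + z**2)  # positive inside the unit sphere

# Sample the field on a regular grid.
n = 64
grid = np.linspace(-1.2, 1.2, n)
xs, ys, zs = np.meshgrid(grid, grid, grid, indexing="ij")
volume = density(xs, ys, zs)

# Marching cubes extracts the iso-surface as vertices plus triangular faces.
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
print(f"mesh: {len(verts)} vertices, {len(faces)} triangles")
```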

The reconstruction process recreates three key features from the photos: a 3D mesh model of the object or scene, the materials and the lighting. The mesh can be thought of as a papier-mâché model of the object in question, built up from triangles.

Developers can then modify this model in various ways to adapt the object to their creative vision. The materials are generated as 2D textures that can be laid over the 3D mesh like a skin. The model also calculates how the recreated object is illuminated, so that the lighting remains accurate.
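The specifics of 3D MoMa’s optimization aren’t covered here, but the core idea of inverse rendering can be shown with a minimal sketch: treat the renderer as a differentiable function and adjust scene parameters until its output matches the photos. The toy PyTorch example below (an assumed setup, not Nvidia’s code) recovers a diffuse albedo under a known light; the real pipeline jointly solves for geometry, materials and lighting.

```python
# Toy inverse-rendering sketch: gradient descent through a differentiable
# renderer recovers a material parameter from synthetic "observations".
import torch

torch.manual_seed(0)

# Fixed toy geometry: random surface normals and one known directional light.
normals = torch.nn.functional.normalize(torch.randn(500, 3), dim=1)
light_dir = torch.nn.functional.normalize(torch.tensor([0.3, 0.8, 0.5]), dim=0)

def render(albedo):
    # Lambertian shading: color = albedo * max(n . l, 0)
    n_dot_l = (normals @ light_dir).clamp(min=0.0).unsqueeze(1)
    return albedo * n_dot_l

# Fabricate the "photographs" we will fit against.
true_albedo = torch.tensor([0.8, 0.3, 0.1])
observed = render(true_albedo)

# Unknown material parameter, recovered by optimizing through the renderer.
albedo = torch.full((3,), 0.5, requires_grad=True)
opt = torch.optim.Adam([albedo], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    loss = torch.mean((render(albedo) - observed) ** 2)
    loss.backward()
    opt.step()

print("recovered albedo:", albedo.detach())  # converges toward [0.8, 0.3, 0.1]
```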

Everything is automated, making it much easier than the traditional process of creating 3D objects from scratch using complex photogrammetry techniques, which require manual effort and can take hours, Nvidia said.

At this week’s Conference on Computer Vision and Pattern Recognition, Nvidia demonstrated the capabilities of Nvidia 3D MoMa by building 3D objects from a series of images of jazz band instruments, including a trumpet, trombone, saxophone, drum kit and clarinet, taken from different angles.

Nvidia 3D MoMa studies these photos and reconstructs the 2D images as 3D representations of each instrument. In this way, it can extract them from their original scene and import them into the Nvidia Omniverse 3D simulation platform for editing.

From there it becomes possible to change the shape or material of any instrument. Nvidia’s team replaced the trumpet’s original plastic material with various materials, including gold, marble, wood and cork.

The edited 3D objects can then be dropped into any virtual scene. Nvidia tested this by placing the instruments in a Cornell box, a classic graphics test scene used to check how accurately a renderer handles light. The instruments responded to light just as they would in the physical world, with the shiny metal ones reflecting brightly while the matte drum heads absorbed most of the light.
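That difference between shiny and matte surfaces comes down to the material parameters the pipeline recovers. A rough, self-contained illustration (not Nvidia’s shading code) using a Blinn-Phong-style model shows how a strong, tight specular term makes metal gleam while a mostly diffuse material soaks up the light:

```python
# Rough illustration of diffuse vs. specular response under the same light.
import numpy as np

def shade(normal, light_dir, view_dir, diffuse, specular, shininess):
    # Blinn-Phong-style shading: diffuse term plus a specular highlight.
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    diff = diffuse * max(np.dot(n, l), 0.0)
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return diff + spec

n = np.array([0.0, 1.0, 0.0])
l = np.array([0.2, 1.0, 0.3])
v = np.array([-0.1, 1.0, 0.2])

# Polished metal: little diffuse reflection, strong tight highlight.
print("shiny metal:", shade(n, l, v, diffuse=0.1, specular=0.9, shininess=64))
# Matte drum head: mostly diffuse, almost no highlight.
print("matte skin: ", shade(n, l, v, diffuse=0.6, specular=0.05, shininess=4))
```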

Finally, Nvidia used the instruments as building blocks to create a complex animated scene of a virtual jazz band.

Holger Mueller of Constellation Research Inc. told SiliconANGLE it was good to see Nvidia making progress in creating 3D objects, as it is an area that involves a lot of tedious and labor-intensive work. That has held back the creation of rich virtual reality and augmented reality applications, he added.

“Innovations like transforming 2D material into 3D objects are key to building a richer software experience, whether for VR, AR or the metaverse itself,” Mueller said.

“The Nvidia 3D MoMa rendering pipeline leverages the machinery of modern AI and the raw computing power of Nvidia GPUs to quickly produce 3D objects that creators can import, edit, and extend into existing tools without restriction,” said David Luebke, Nvidia’s vice president of graphics research.

Images: Nvidia

