NVIDIA Introduces Real-Time Neural Material Models, Delivering Up to 24x Shading Acceleration

 


NVIDIA introduced a new approach to real-time neural material models that delivers a whopping 12-24x speedup in shading performance over traditional methods.



NVIDIA leverages AI to improve real-time model rendering, neural approach delivers up to 24x improvement over traditional methods

At SIGGRAPH, NVIDIA is showing off a new real-time rendering approach called Neural Appearance Models that leverages AI to accelerate shading. Last year, the company unveiled its neural texture compression technique, which delivers 16x greater texture detail; this year, it is working to dramatically accelerate texture rendering and shading performance.


The new approach serves as a universal representation and execution model for materials from multiple sources, whether authored by artists, measured from real objects, or generated from text prompts using generative AI. The models scale across quality levels, from PC/console gaming to VR and even movie rendering.







The model captures every detail of the object being rendered, including delicate visual subtleties such as dust, water spots, and the interplay of multiple light sources and colors. Traditionally, such materials are built from shader graphs, which are not only expensive to evaluate in real time but also complex to author and maintain.









With NVIDIA’s “Neural Materials” approach, the traditional material rendering model is replaced by a cheaper, more computationally efficient neural network, which the company claims makes shading computation 12 to 24 times faster. The company provides a comparison between a model rendered using a shader graph and the same model rendered with the Neural Materials approach.
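The core idea of replacing a shader graph with a small network can be sketched in a few lines. The snippet below is a minimal illustration with made-up weights and dimensions, not NVIDIA's actual architecture: a tiny 2-layer MLP (16 neurons per hidden layer, matching the smallest configuration the article mentions) maps packed shading inputs to an RGB reflectance value in a single forward pass.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical weights for a tiny MLP with two 16-neuron hidden layers.
# In practice these would be trained to match a reference material graph.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 16)), np.zeros(16)
W3, b3 = rng.standard_normal((3, 16)), np.zeros(3)

def neural_material(features):
    """Replace a material-graph evaluation with one MLP forward pass.

    `features` packs the shading inputs (latent texture value, view and
    light directions) into one vector, here 8 floats for illustration.
    Returns an RGB reflectance value.
    """
    h = relu(W1 @ features + b1)
    h = relu(W2 @ h + b2)
    return W3 @ h + b3  # RGB output (unclamped)

rgb = neural_material(rng.standard_normal(8))
print(rgb.shape)
```

The appeal is that the cost of this evaluation is fixed and small, regardless of how many layered effects the original shader graph contained.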



The neural model matches the reference image in every detail and, as mentioned above, renders much faster. NVIDIA also provides side-by-side comparisons so you can judge the image quality of each model yourself.


The new model features the following innovations:


A complete and scalable system for cinematic-quality neural materials

Tractable training of gigatexel-sized assets using an encoder

Decoders with priors for normal mapping and sampling

Efficient execution of neural networks in real-time shaders


The company also explains how the neural models work. At render time, using a neural material is very similar to using a traditional one. At each hit point, the renderer first fetches the latent textures and then evaluates two MLPs: one to compute the BRDF value and a second to importance-sample the outgoing direction. Improvements to the real-time approach include built-in graphics priors that improve inference quality and reduce encoder training time when generating assets at massive resolutions.
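The two-decoder flow described above can be sketched as follows. Everything here is a hypothetical stand-in (random weights, an assumed latent size of 8, and a simple power-cosine lobe in place of the paper's learned sampling distribution); it only illustrates the render-time sequence: fetch latent code, sample an outgoing direction with one MLP, evaluate the BRDF with the other.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, layers):
    # Plain ReLU MLP forward pass; the last layer is linear.
    for W, b in layers[:-1]:
        x = np.maximum(W @ x + b, 0.0)
    W, b = layers[-1]
    return W @ x + b

def make_layers(sizes):
    # Random (untrained) weights, for illustration only.
    return [(rng.standard_normal((o, i)) * 0.1, np.zeros(o))
            for i, o in zip(sizes[:-1], sizes[1:])]

# Two hypothetical decoders sharing the latent texture code:
eval_mlp    = make_layers([8 + 6, 32, 32, 3])  # latent + wi,wo -> BRDF value
sampler_mlp = make_layers([8 + 3, 32, 32, 1])  # latent + wi    -> lobe param

def shade(latent, wi):
    # 1. The sampler MLP predicts a parameter of an analytic lobe; here a
    #    single Phong-like exponent used for power-cosine sampling.
    n = np.exp(mlp(np.concatenate([latent, wi]), sampler_mlp))[0]
    u1, u2 = rng.random(2)
    cos_t = u1 ** (1.0 / (n + 1.0))
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * u2
    wo = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    # 2. The evaluation MLP returns the BRDF value for the (wi, wo) pair.
    f = mlp(np.concatenate([latent, wi, wo]), eval_mlp)
    return wo, f

wo, f = shade(rng.standard_normal(8), np.array([0.0, 0.0, 1.0]))
print(wo.shape, f.shape)
```

The split mirrors path tracing's two needs at a hit point: drawing a direction proportional to the material's reflectance, and evaluating that reflectance for lighting estimates.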



Image source: NVIDIA research paper

Models rendered using the Neural Materials approach support texture resolutions up to 16k, delivering highly detailed objects in games. These refined models are also less taxing to render, leading to better performance than was previously possible.


Because textures built on neural models run faster, NVIDIA can tailor them to different applications. In a side-by-side comparison, NVIDIA shows two models: one with 2 hidden layers of 16 neurons renders in just 3.3 ms, while a more detailed model with 3 hidden layers of 64 neurons still renders in 11 ms.
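A rough back-of-the-envelope count of multiply-accumulates per shading sample shows why the smaller network is so much cheaper. The input and output widths below (8 and 3) are assumptions for illustration; the article only gives the hidden-layer counts.

```python
def mlp_macs(d_in, hidden, layers, d_out):
    """Multiply-accumulate count for one forward pass of a fully
    connected MLP with `layers` hidden layers of width `hidden`."""
    dims = [d_in] + [hidden] * layers + [d_out]
    return sum(a * b for a, b in zip(dims[:-1], dims[1:]))

# Hypothetical input/output widths (8 in, 3 out).
small = mlp_macs(8, 16, 2, 3)   # the "2 layers x 16 neurons" model
large = mlp_macs(8, 64, 3, 3)   # the "3 layers x 64 neurons" model
print(small, large, round(large / small, 1))  # 432 8896 20.6
```

Under these assumptions the larger model costs roughly 20x more arithmetic per sample, which is broadly consistent with the 3.3 ms vs. 11 ms gap narrowing once texture fetches and other fixed costs are included.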



Image source: NVIDIA research paper

As for the hardware that will support Neural Materials models, NVIDIA says it will leverage existing machine learning frameworks like PyTorch and TensorFlow, shading languages like GLSL and HLSL, and hardware-accelerated matrix multiply-accumulate (MMA) engines on GPUs from AMD, Intel, and NVIDIA. The runtime compiles the neural material description into optimized shader code using the open-source Slang shading language, which has backends for a variety of targets including Vulkan, Direct3D 12, and CUDA.



Image source: NVIDIA research paper

The Tensor Cores in modern GPU architectures are also a step forward for these models. While Tensor Core acceleration is currently limited to compute APIs, NVIDIA exposes it to shaders through a modified open-source DirectX Shader Compiler, based on LLVM, that adds custom intrinsics for low-level access, allowing the Slang-generated shader code to run efficiently.


Performance is measured on an NVIDIA GeForce RTX 4090 GPU using hardware-accelerated DXR ray tracing at 1920×1080. Rendering times are reported in ms, and the results show the neural approach renders frames much faster, and with better detail, than the baseline. In full-screen rendering with neural BRDFs, the RTX 4090 is 1.64x faster with the 3×64 model and 4.14x faster with the 2×16 model. Material shading with path tracing is 1.54x faster with the 2×32 model and 6.06x faster with the 3×64 model.


Overall, NVIDIA's new Neural Materials approach aims to redefine how textures and objects are rendered in real time. With 12x to 24x shading acceleration, developers and content creators will be able to generate ultra-realistic materials and objects faster, and those assets will also run quickly on the latest hardware. We look forward to seeing this approach leveraged by upcoming games and applications.


Real-time neural appearance models | NVIDIA Research
