
Nvidia CEO Jensen Huang recently addressed concerns about the visual quality of the company's DLSS 5 technology in an interview with Lex Fridman. Huang said he understands why people are unhappy with AI-generated content, acknowledging that he also dislikes images that look poorly generated by AI, but explained that DLSS 5 works differently and doesn't produce the same kind of visual issues.
"I agree with their point of view," he explained, "because I don't particularly like the look of a lot of AI-created images either. It all starts to look the same, and while it's technically beautiful, it lacks originality." He expressed empathy for those voicing concerns.
According to Huang, DLSS 5 works by being “3D conditioned and 3D guided.” This means artists maintain full creative control over the game’s shapes and details, and DLSS 5 uses that information to create the images you see.
While demonstrating several examples, he explained that DLSS 5 is different because it is grounded in 3D data that guides its generation process. Every frame accurately reflects the artist's intended geometry, staying true to the original design.
Huang has previously defended DLSS 5 against criticism. When asked about the public's reaction by Tom's Hardware's Paul Alcorn earlier this month, he strongly disagreed with the negative feedback, stating that the critics were "completely wrong."
Pushing back on those claims, Huang explained that DLSS 5 combines control over all aspects of a game's visuals with the power of generative AI. He emphasized that developers retain full artistic control, as they can customize the AI to achieve the specific look they want without altering their creative vision.
He clarified that what they were doing wasn’t simply refining the image after it was created, or tweaking each individual frame. Instead, they were fundamentally controlling the creation process at the level of the 3D shapes themselves.
Developers have a lot of creative freedom with DLSS 5, even being able to create unique visual styles such as cartoon-like shading or entirely glass-based environments. Unlike typical generative AI, the technology puts the power over content creation directly in the hands of the developer, which is why Nvidia refers to it as neural rendering.
In a recent interview, Jacob Freeman explained how DLSS 5 works. He clarified that the technology is trained using a game’s 2D image and its motion vectors. DLSS 5 then uses this information to generate its own improved image, even handling complex rendering effects like physically based materials.
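The pipeline Freeman describes, prior 2D frames plus per-pixel motion vectors feeding a generator, resembles classic temporal reprojection. Below is a minimal illustrative sketch of that idea, not Nvidia's actual implementation; the function names `reproject` and `temporal_blend` and the blend factor are hypothetical.

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame along per-pixel motion vectors.

    prev_frame: (H, W) array of intensities.
    motion_vectors: (H, W, 2) array of (dy, dx) offsets in pixels,
    pointing from each current pixel back to its source in prev_frame.
    """
    h, w = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Look up where each current pixel came from, clamping at the borders.
    src_y = np.clip(ys + motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + motion_vectors[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

def temporal_blend(current, reprojected, alpha=0.1):
    """Exponential blend: mostly reprojected history, a little new sample."""
    return alpha * current + (1 - alpha) * reprojected
```

A real upscaler would replace the fixed blend with a learned network and add occlusion handling, but the core input, history warped by motion vectors, is the same.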
Freeman didn’t share many details about how the technology alters characters, like in the Resident Evil Requiem demo. He clarified that the core 3D models remain the same, and emphasized that the demonstration was a preliminary look at the technology.
To learn more about DLSS 5 and the criticism it’s received, see what reporters and developers are saying. You can also find a report detailing how developers and artists at Capcom and Ubisoft weren’t informed about the technology beforehand.
2026-03-24 16:13