Nvidia Reflex 2 Frame Warp

Chief Blur Buster
Site Admin
Posts: 11944
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Nvidia Reflex 2 Frame Warp

Post by Chief Blur Buster » 23 Jan 2025, 21:09

lann wrote:
23 Jan 2025, 00:35
One question: Reflex 1 can genuinely reduce latency by shrinking the frame queue, for example letting you see enemies earlier while sniping. Reflex 2 only makes the visual feedback after your input faster; it cannot show enemies any earlier than Reflex 1 can. That makes it unhelpful for competitive games, but it should be very effective in AAA games with frame interpolation.
Not yet.

Future GPU generations are currently predicted (by the ~7000 series, possibly the ~6000 series RTX -- and any AMD equivalent) to have scene-editing capabilities (AI and non-AI, framegen and non-framegen), since the compute demands of rendering even one ray-traced photorealistic frame are ginormous.

The art of modifying a pre-existing rendered frame to update positionals will eventually become cheaper than re-rendering from scratch (by 2030) a 4K 1000fps 1000Hz frame or, heaven forbid (by 2040), an 8K 2000fps 2000Hz frame.
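
As a rough illustration of why such a warp is cheap: each pixel gets unprojected with its depth, moved through the new camera transform, and splatted back. Here's a minimal numpy sketch of depth-based reprojection (the pinhole-camera simplification and function names are mine, not NVIDIA's actual Frame Warp pipeline):

import numpy as np

def warp_frame(color, depth, K, T_rel):
    # Forward-warp a rendered frame to a new camera pose.
    # color: (H, W, 3), depth: (H, W) linear depth,
    # K: 3x3 camera intrinsics, T_rel: 4x4 old-to-new camera transform.
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).astype(float)
    # Unproject each pixel into 3D camera space using its depth.
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    pts = np.hstack([pts, np.ones((len(pts), 1))])
    # Move the points into the new camera's frame, then reproject.
    pts_new = (T_rel @ pts.T).T[:, :3]
    proj = (K @ pts_new.T).T
    uv = np.round(proj[:, :2] / np.maximum(proj[:, 2:3], 1e-6)).astype(int)
    out = np.zeros_like(color)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H) & (pts_new[:, 2] > 0)
    # Splat pixels to their new positions; a real warp must also resolve
    # occlusions by depth and inpaint the disocclusion holes left behind.
    out[uv[ok, 1], uv[ok, 0]] = color.reshape(-1, 3)[ok]
    return out

The cheap part is the warp itself; the hard part is filling the disocclusion holes it leaves behind, which is presumably where the AI-assisted hole filling earns its keep.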

We need new paradigms for the 4K 1000fps 1000Hz future, and this may include Reflex 3-style between-frame enemy position updates (etc).

This Pandora's Box is open. We have to deal with the repercussions, both framegen and non-framegen. Even triangles/polys are faking real life, like a painter trying to make a painting of reality. On social media, the holy wars over framegen have begun -- get your popcorn.

Regardless, ALL methods of blur busting are valid (non-framegen, framegen, BFI, CRT, strobe, etc); it's a matter of what compromises we make. CIE 1931 is an imperfect one-size-fits-all color space, and a pixel grid is an imperfect one-size-fits-all for the random scatter of human vision photoreceptors, and... [choose the various poisons we've traditionally chosen for 100 years]. We've optimized things in ways that are easy for electronics, to synthetically present reality to our eyes. But we must acknowledge all the compromises of even organic rendering -- they are what they are -- and we're already picking poisons.

The bigger problem is software-developer mis-optimization & all the "AI! AI! AI!" bandwagon cheerleading. (Like the dotcom hype before the 2000 crash.)

We need to invent new benchmarks for the noise margins (frame differences relative to perfect Ultra frames), whether the artifact is stutter, mipmap blur, hallucinations, or stroboscopic effects (all non-framegen artifacts AND framegen artifacts).
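
To sketch what shape such a benchmark could take (my own illustration, not an existing tool): render a ground-truth sequence offline at perfect Ultra settings, then score any pipeline -- framegen or not -- by its per-frame deviation from that reference:

import numpy as np

def frame_psnr(reference, candidate):
    # Deviation of one candidate frame from the "perfect Ultra" reference,
    # as peak signal-to-noise ratio in dB (higher = closer to ground truth).
    mse = np.mean((reference.astype(np.float64) - candidate.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def noise_margin_report(ref_frames, test_frames):
    # Worst-case and mean deviation over a sequence, so stutter, mipmap blur,
    # hallucinations, and stroboscopic artifacts all land on one scale.
    scores = [frame_psnr(r, t) for r, t in zip(ref_frames, test_frames)]
    return {"worst_frame_dB": min(scores), "mean_dB": sum(scores) / len(scores)}

PSNR here is only a placeholder; a perceptual, temporally weighted metric would be closer to what's actually needed.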

William Sokol Erhard
Posts: 7
Joined: 16 Nov 2024, 00:56
Location: Seattle, Washington

Re: Nvidia Reflex 2 Frame Warp

Post by William Sokol Erhard » 24 Jan 2025, 13:50

Chief Blur Buster wrote:
23 Jan 2025, 21:09

Not yet.

Future GPU generations are currently predicted (by the ~7000 series, possibly the ~6000 series RTX -- and any AMD equivalent) to have scene-editing capabilities (AI and non-AI, framegen and non-framegen), since the compute demands of rendering even one ray-traced photorealistic frame are ginormous.

The art of modifying a pre-existing rendered frame to update positionals will eventually become cheaper than re-rendering from scratch (by 2030) a 4K 1000fps 1000Hz frame or, heaven forbid (by 2040), an 8K 2000fps 2000Hz frame.
Scene editing is going to get very interesting. One option I haven't seen mentioned yet is already partially implemented as RTX Remix: https://www.nvidia.com/en-us/geforce/rtx-remix/

Everyone is talking about fake frames, but RTX Remix shows another option: Nvidia stepping in between the game and the hardware and simply replacing the rendering pipeline with whatever they want. Currently, RTX Remix allows old games without programmable shaders to be path-traced, with things like DLSS and framegen enabled, in addition to replacing assets wholesale.

A very near potential future could allow such a platform to interpolate or extrapolate not the rendered image, but the 3D assets and objects in the game world, entirely after and independent of the CPU game loop. It could even incorporate responses to user input (like changing camera position) well after all CPU tasks are completed, and use that to render a "real" frame at far lower latency, and potentially a higher framerate, than could otherwise be achieved.
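
A toy sketch of that idea (all names hypothetical; a real implementation would live on the GPU): the CPU game loop publishes timestamped object states, and the renderer extrapolates them and samples the camera at the last possible moment, independent of the game's tick rate:

import time

def draw(objects, camera):
    pass  # stand-in for the actual GPU submission path

class LateLatchRenderer:
    def __init__(self, poll_camera):
        self.poll_camera = poll_camera  # reads raw input, bypassing the game loop
        self.snapshot = None            # (timestamp, {obj_id: (pos, vel)})

    def publish(self, states):
        # Called by the CPU game loop each time a simulation tick completes.
        self.snapshot = (time.monotonic(), states)

    def render_frame(self):
        t_snap, states = self.snapshot
        dt = time.monotonic() - t_snap
        # Extrapolate object positions beyond the last completed CPU tick.
        objects = {oid: pos + vel * dt for oid, (pos, vel) in states.items()}
        # Late-latch: sample the camera after all CPU work, just before drawing.
        draw(objects, self.poll_camera())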

While there are mountains of difficulties in creating such a system, Nvidia has good engineers.
This approach has the potential to sidestep many of the issues of current framegen and rendering pipelines, and to offload far more work onto the GPU. The beauty is that it essentially replicates the interpolation that has been used to make game animations smooth for decades, as sketched below. It allows the elimination of most of the artifacts that plague framegen today, while not requiring any change to the game itself.
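
For reference, here is that decades-old pattern as a minimal, self-contained sketch: simulation advances in fixed ticks, and the renderer blends the last two states so motion stays smooth at any display framerate:

import time

def simulate(pos, vel, dt):
    return pos + vel * dt  # trivial stand-in for the game's physics tick

DT = 1.0 / 60.0  # fixed 60 Hz simulation rate
prev_pos = curr_pos = 0.0
vel = 5.0
accumulator, last = 0.0, time.monotonic()
for _ in range(1000):  # stand-in for the real render loop
    now = time.monotonic()
    accumulator += now - last
    last = now
    while accumulator >= DT:  # advance the simulation in fixed steps
        prev_pos, curr_pos = curr_pos, simulate(curr_pos, vel, DT)
        accumulator -= DT
    alpha = accumulator / DT  # how far render time sits between two ticks
    render_pos = prev_pos + (curr_pos - prev_pos) * alpha  # smooth at any Hz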

We're seeing the slow growth of (still rather single-threaded) CPU performance cause numerous issues in simply getting games to run at 60 FPS, let alone at acceptable framerates. This approach could sidestep that issue.



My second remark concerns temporal fidelity and benchmarking/objective measurement.
I have issues with the way performance is being discussed, especially in light of framegen. I want to start by saying that framegen and interpolation have a significant place in the future development of graphics rendering. I use it every day, and I'm working on the technology myself. I think it can bring tremendous value and create far better results when the native input framerate cannot be easily or effectively improved. It is, however, in my opinion, always and inherently inferior to native high framerates. If it's only a little worse, that's still a win, but it can never be as good; it can only get close.
This is to say generated frames can only mitigate the issues of low framerates; they cannot solve them, nor can they replace the need for native high framerates.
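
Some rough, illustrative arithmetic of why (my numbers, assuming interpolation-style framegen, which must hold one real frame before it can blend):

BASE_FPS = 60                        # native rate feeding the framegen
frame_ms = 1000 / BASE_FPS           # ~16.7 ms between real frames

# Framegen 60 -> 120: the displayed stream doubles, but input is still
# sampled only at the 60 Hz base rate, plus roughly one held real frame.
framegen_input_interval = frame_ms   # ~16.7 ms between input samples
framegen_extra_hold = frame_ms       # ~16.7 ms waiting for the next real frame

native_120_interval = 1000 / 120     # ~8.3 ms, and no hold at all

print(f"framegen 60->120: input every {framegen_input_interval:.1f} ms, ~{framegen_extra_hold:.1f} ms extra hold")
print(f"native 120 fps:   input every {native_120_interval:.1f} ms, no hold")

The smoothness is comparable; the responsiveness is not, which is exactly why generated frames mitigate rather than replace.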

While framegen will often produce a result far preferable to native, and the latency penalties are quite effectively mitigated, it seems disingenuous to describe the framegen output as a comparable "framerate" value. Framegen brings an asterisk, and a large one at that, which should be acknowledged.


Temporal performance is getting more and more complex, and the corollary is that it's getting more and more difficult to accurately and adequately describe and measure. Even ignoring framegen, simple framerates or frametimes are woefully inadequate at explaining the temporal performance of modern games. For decades now, games have run things like world physics, cloth physics, and even animations of characters or environmental objects at a lower framerate than the actual rendering of the game. In Halo Infinite, for example, aiming down the sight of a gun triggers a non-interpolated, low-framerate animation that differs in smoothness by an order of magnitude from the rendering of the game itself. Such disparities will only grow as the upper bounds of rendering performance climb.

We often see games like Quantum Break with 24 or 30 FPS cutscenes (more understandable when filmed in live action or prerendered) that entirely disrupt the smoothness of an otherwise high-framerate presentation. The worst offenses are when those cutscenes are rendered in realtime and arbitrarily capped. I remember Quantum Break had one such cutscene that "smoothly" transitioned into user-controlled gameplay, but the framerate remained capped at that terribly low value for a couple of seconds into the gameplay. Obviously that's a bug, but it emphasizes the arbitrary degradation developers inflict on the temporal fidelity of a presentation.

I don't know how, but for temporal fidelity to improve, we need buy-in from the content creators themselves. We can only do so much to mitigate, after the fact, the harm that a bad developer or filmmaker can inflict on temporal fidelity. Part of the problem is that it's a lot easier to advertise the static visuals of a game than its fidelity in motion, but the issues are systemic.
