NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Ask about motion blur reduction in gaming monitors. Includes ULMB (Ultra Low Motion Blur), NVIDIA LightBoost, ASUS ELMB, BenQ/Zowie DyAc, Turbo240, ToastyX Strobelight, etc.
RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 28 Sep 2022, 14:18

Hi

This is just outrageous.
What a disappointment.

[Embedded video]

Also, Digital Foundry "experts" say they don't know how to measure motion clarity, claiming that one bad frame between two good ones cannot be seen.

Maybe someone should tell them that low persistence mode exists?
Jeeesh. This is from the same guy (Alex B.) who said that for 600fps games (an old game) on a high refresh monitor, he recommends... enabling motion blur.

The level of knowledge of those "experts", followed by millions of people on YouTube, is just sad. :(

Anyway. We've been waiting for frame interpolation. But
- it's locked to the 40xx series cards; no other series supports it
- it's tied to DLSS, which is already unfortunate
- it's basically useless for low-persistence modes, since it does NOT work with v-sync. And we all know how many G-SYNC "strobed" monitors that do it well exist right now :|

1000WATT
Posts: 391
Joined: 22 Jul 2018, 05:44

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by 1000WATT » 28 Sep 2022, 17:33

[Attachment: dlss.jpg]
I often do not state my thoughts clearly; Google Translate is far from perfect, and on top of the translator's errors, I make my own. Do not take me seriously.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by Chief Blur Buster » 28 Sep 2022, 18:32

Still, there are some useful nuggets to interpret from the images (rather than the words).

Virtual reality requires mandatory VSYNC, so it's too bad DLSS 3 is not ready yet for VR.

The quality of DLSS definitely needs to massively improve over the long term, with improving AI/neural networks.
RonsonPL wrote:
28 Sep 2022, 14:18
Maybe someone should tell them that low persistence mode exists?
Jeeesh. This is from the same guy (Alex B.) who said that for 600fps games (an old game) on a high refresh monitor, he recommends... enabling motion blur.
There's actually *some* credence to this, but for a different reason...

So here's a small informative piece (for educating readers and reviewers alike):

Useful Info About GPU Blur Effects' Benefit To Refresh Rate Race To Retina Refresh Rates

Some of us hate phantom arrays (The Stroboscopic Effect of Finite Frame Rates) so much that they create motion sickness and nausea, making the GPU blur filter an absolute necessity for some people, unfortunately.

See this person complaining about stroboscopic effects, and they find it much more comfortable to enable GPU motion blur effects. When framerates are extremely high, the blur effect can sometimes help make motion more bearable to those sensitive to stroboscopic effect / phantom arrays.

(It happens to be related to one of the common causes of PWM-dimming headaches -- headaches caused by stroboscopic effects rather than from the flicker itself. But PWM-free backlights do not solve all stroboscopic effects, and motion blur reduction strobe backlights can actually amplify the stroboscopic effect).

It's why I also know retina refresh rates need to be roughly 2x oversampled, so that 1 frametime of GPU blur effect is still below the human-visibility threshold... kind of temporal antialiasing with oversampling (a sort of Nyquist compensation along the temporal domain).

Even a display refresh rate of 100,000 hertz (with a frame rate to match) can still produce stroboscopic phantom array effects for motion going 1 million pixels per second (1 million pixels per second / 100,000 hertz refresh = stroboscopic stepped phantom array every 10 pixels) -- e.g. like an ultrabright 10,000nit HDR magnesium tracer bullet zooming past field of view.
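(For readers who want to reproduce that arithmetic, here's a minimal back-of-envelope sketch in Python -- not Blur Busters test code, just the same numbers as the tracer-bullet example above.)

Code: Select all

# Spacing of the stroboscopic "phantom array" steps for a moving object
# on a sample-and-hold display running framerate = refresh rate,
# viewed with a stationary gaze.
def phantom_array_step_px(speed_px_per_sec, refresh_hz):
    return speed_px_per_sec / refresh_hz

print(phantom_array_step_px(1_000_000, 100_000))  # -> 10.0 pixels between steps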

It should look like a continuous blur rather than a stroboscopic effect, assuming the lumen surge of that single brief refresh cycle is enough for the human to register the brief tracer-bullet streak across the field of vision. So sensitivity to stroboscopic effects is WAY higher than sensitivity to motion blur.

The only way to solve this is to oversample the refresh rate to roughly 2x the retina refresh rate, THEN add intentional GPU blurring to fix the stroboscopic effect.

So, this is my scientific explanation of why we will need GPU motion blur (below human-detection thresholds) to solve the mismatch between a finite frame rate and analog real-life motion (for, say, a Star Trek Holodeck).

As some readers here know, we calculated that a 180-degree-FOV retina-resolution "Holodeck" requires approximately 20,000fps at 20,000Hz. This is to eliminate human-visible motion blur for all realistically eye-trackable motion speeds. This is because of the Vicious Cycle Effect, where increased resolutions amplify sensitivity to framerate & Hz: more resolution per degree of angular vision means more opportunity to notice the difference between static resolution and motion resolution (the "scenery suddenly blurs during a pan" effect). So while ~2,000 pixels/sec takes 1 second to cross a 24" 1080p screen, it takes ~16,000 pixels/sec to cross a 180-degree 16K-resolution field of view in 1 second. So 1000fps 1000Hz sample-and-hold still creates 16 pixels of motion blur (if eye-tracked) or 16-pixel phantom-array gaps (if the motion zooms past a stationary gaze).

A way to fix the latter is to add a GPU blur effect for the stationary-eye moving-object situation. But that turns 16,000 pixels/sec on 1000fps 1000Hz into 32 pixels of motion blur (during eye tracking), just to fix the phantom array during stationary gaze. So the retina refresh rate is much higher than that -- even 20,000 pixels/sec would still have 2 pixels of motion blur at 10,000fps 10,000Hz, likely barely visible in extreme situations when the pixels are stretched wide apart (like on a VR headset of >180 degrees) to the point where individual pixels no longer max out the angular resolution of your vision center...
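(Again, purely as a worked example of the numbers above: a minimal sketch using the simple model blur width = motion speed × persistence, where a full-frametime GPU blur effect adds one extra frametime of smear.)

Code: Select all

# Eye-tracked motion blur on an idealized 0ms-GtG sample-and-hold display.
# blur (px) = speed (px/s) * persistence (s); a GPU blur effect spanning one
# full frametime adds one more frametime of smear on top.
def persistence_blur_px(speed_px_per_sec, framerate_hz, gpu_blur_frametimes=0.0):
    persistence_s = (1.0 + gpu_blur_frametimes) / framerate_hz
    return speed_px_per_sec * persistence_s

print(persistence_blur_px(16_000, 1_000))       # -> 16.0 px (no GPU blur)
print(persistence_blur_px(16_000, 1_000, 1.0))  # -> 32.0 px (plus 1 frametime of GPU blur)
print(persistence_blur_px(20_000, 10_000))      # -> 2.0 px at 10,000fps 10,000Hz

This is the same math behind the camera-shutter comparison below: a persistence of 1/240sec or 1/1000sec blurs like a photo taken at that shutter speed.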

The diametrically opposed compromises of fixing stroboscopics versus fixing persistence blur are very tough to reconcile without ultra-high refresh rates, as explained in Blur Busters Law: The Amazing Journey To Future 1000Hz Displays. Obviously, small refresh rate geometrics like 240Hz vs 360Hz (a 1.5x difference throttled to ~1.1x by jitter & nonzero GtG pixel response) are hard to see, but the blur difference of 240Hz vs 1000Hz is very clear to the average population in moving-text-readability tests. We are finding in early lab tests that >90% of the human population can tell apart 4x blur differences, like a 1/240sec SLR photo versus a 1/1000sec SLR photo (with these scientific variables) -- and framerate=Hz 240Hz and 1000Hz 0ms-GtG displays have the same blur as photos at those respective shutter speeds (see Pixel Response FAQ).

A different solution is an eye-tracker sensor plus eye-tracking-compensated GPU blurring that only executes when eye motion and display motion diverge (i.e. it becomes necessary to blur only the delta between the motion vector of the eye and the motion vectors of moving objects on the display...). An eye-tracker sensor would dramatically lower the retina refresh rate of a display, since you could have stroboscopics-free strobing and still have sharp motion -- but it would make it a single-viewer display (e.g. VR); the flicker would simply need to be well above the flicker fusion threshold, and then you could call it a day.

But this may need to be oversampled to 40,000fps at 40,000Hz if we want to add intentional GPU motion blurring to fix stroboscopic-effect issues from ultrafast motion zooming past our fields of view -- but this is kind of a "final whack-a-mole" before a display temporally passes an A/B blind test between transparent ski goggles and a VR headset (can't tell real life apart from VR), which I informally call the theoretical "Holodeck Turing Test"...

Sorry about the subject sidetrack, but I needed to scientifically explain the genuine utility of the GPU blur effect...

On the opposite end of the spectrum (ultra-low frame rates like 20fps), the stutter is nauseating for some humans, so the GPU blur effect fixes it. The GPU blur effect is not as useful to me at intermediate triple-digit frame rates, but it becomes a useful pick-your-poison choice at both ultra-low frame rates (fixing stutter, the worse evil) and ultra-high frame rates (fixing stroboscopics when you need to pass a reality A/B test).

In the near term, for single-viewer displays (e.g. VR), the useful improvement to the GPU blur effect is zero-latency eye-tracking-compensated GPU blur, so you never see the blur effect EXCEPT when it's being used to fix stroboscopics (stationary eye, moving object).

TL;DR: Eye-tracking-compensated dynamic GPU motion blur is kind of a blurless Holy Grail band-aid for virtual reality headsets and for people who get headaches from stroboscopics/phantom arrays. Basically, GPU-blur the difference between the eye-motion vector and the object-motion vector. That way, zero-difference situations (stationary eye with stationary object, AND tracking eye with moving object) never have an unnecessary GPU blur effect active.
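(To illustrate the TL;DR concretely, a minimal sketch of the idea with hypothetical names; a real implementation would do this per pixel in a motion-blur shader, using per-pixel motion vectors plus the eye tracker's gaze-velocity estimate.)

Code: Select all

# Eye-tracking-compensated GPU blur: smear only the difference between the
# eye's motion vector and the object's on-screen motion vector (pixels/sec).
def compensated_blur_vector(eye_vel, object_vel, framerate_hz):
    # Returns the (x, y) smear length in pixels for one frametime.
    return ((object_vel[0] - eye_vel[0]) / framerate_hz,
            (object_vel[1] - eye_vel[1]) / framerate_hz)

# Eye tracking the moving object: zero delta, so no blur is added.
print(compensated_blur_vector((16_000, 0), (16_000, 0), 1_000))  # -> (0.0, 0.0)
# Stationary gaze, fast object: blur the full 16 px step to hide the phantom array.
print(compensated_blur_vector((0, 0), (16_000, 0), 1_000))       # -> (16.0, 0.0)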

The other "details" metioned by YouTubers are a bit baity/sensationalism to get the views, but as the resident Hz mythbuster -- I needed to shine some angle of truth to why GPU blur effect is a legitimate "Right Tool For Right Job" in the refresh rate to retina refresh rates...
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by thatoneguy » 28 Sep 2022, 20:53

RonsonPL wrote:
28 Sep 2022, 14:18

Also, Digital Foundry "experts" say they don't know how to measure motion clarity, claiming that one bad frame between two good ones cannot be seen.
DF are shills. No wonder they won't be too critical of their Nvidia overlords, lest they stop getting free samples from them.
They don't really know what they're talking about most of the time.
[Embedded video]

Haste
Posts: 326
Joined: 22 Dec 2013, 09:03

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by Haste » 29 Sep 2022, 02:26

The quality of this forum really went downhill since last time I checked.

A huge milestone has been achieved with dlss3. Luddites are the last thing I would have expected here.
Monitor: Gigabyte M27Q X

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 29 Sep 2022, 05:53

RonsonPL wrote:
28 Sep 2022, 14:18
Jeeesh. This is from the same guy (Alex B.) who said that for 600fps games (an old game) on a high refresh monitor, he recommends... enabling motion blur.
So? There is nothing objectively wrong with this. Temporal supersampling, so to speak, is the ground truth for exposure-based motion blur, involving no flaws or compromises seen in reconstruction algorithms.
You may not like the look of motion blur, and I would fully agree with that sentiment, but the fact of the matter remains, it eliminates temporal aliasing.
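(For what it's worth, a minimal sketch of what "ground truth" exposure blur means here, assuming a hypothetical render(t) callback that returns a frame as a NumPy array: averaging many sub-frame samples across the shutter interval converges on the true exposure, with none of the reconstruction artifacts of velocity-buffer approximations.)

Code: Select all

import numpy as np

def exposure_blurred_frame(render, t_open, shutter_s, samples):
    # Temporal supersampling: average `samples` renders spread across the
    # shutter interval [t_open, t_open + shutter_s]. As samples grows, this
    # converges on physically correct exposure blur.
    acc = None
    for i in range(samples):
        t = t_open + shutter_s * (i + 0.5) / samples  # midpoint of each sub-interval
        frame = render(t).astype(np.float64)
        acc = frame if acc is None else acc + frame
    return acc / samples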

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by thatoneguy » 29 Sep 2022, 07:19

Haste wrote:
29 Sep 2022, 02:26
The quality of this forum really went downhill since last time I checked.

A huge milestone has been achieved with dlss3. Luddites are the last thing I would have expected here.
What milestone?
The latency is worse than DLSS 2 and there are more artifacts.
[Attachment: dlss.png]
Few people are willing to shell out a premium for a product that doesn't work well.
Spare us the luddite crap.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 29 Sep 2022, 08:23

thatoneguy wrote:
28 Sep 2022, 20:53
[Embedded video]
So as I begin watching, I am immediately presented with a complete non-sequitur.

Yes, he said "seems", because most of the games they analyze don't come with behind-the-scenes information, so they can only infer based on visual studies -- studies they can perform precisely because of preexisting knowledge of how game rendering works, not a lack of it.
Light probes are ubiquitous and have some telltale characteristics.

Guess what, he was completely right.
https://history.siggraph.org/wp-content ... art-II.pdf

A combination of light-mapping and probes.

RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 29 Sep 2022, 08:27

Chief Blur Buster wrote:
28 Sep 2022, 18:32

There's actually *some* credence to this, but for a different reason...
Sure, I get what you're saying, but this guy does NOT use a proper, motion-capable display for his gaming, so him spreading the "motion blur is good for you" approach is simply bad.
Also, what you're describing about people having issues with the stroboscopic effect is rather rare, according to my observations. As long as the display and content are OK, I don't know a single gamer in person who would complain about it. I acknowledge that you're right and those people exist, but chances are that Alex B. from DF does not suffer from this; he just wanted to get things "more smooth" at the expense of motion clarity.

Haste wrote:
29 Sep 2022, 02:26
The quality of this forum really went downhill since last time I checked.

A huge milestone has been achieved with dlss3. Luddites are the last thing I would have expected here.
Man. We've all, me included, been waiting for proper GPU-accelerated frame interpolation for many years. I do want that. But making it incompatible with v-sync is just horrible, horrible news. I created a forum thread informing about the new thing and provided both my opinion and an opinion that couldn't be more opposite to mine (DF's video). What would I need to do to meet your desired "level"? :ugeek:


silikone wrote:
29 Sep 2022, 05:53

So? There is nothing objectively wrong with this. Temporal supersampling, so to speak, is the ground truth for exposure-based motion blur, involving no flaws or compromises seen in reconstruction algorithms.
You may not like the look of motion blur, and I would fully agree with that sentiment, but the fact of the matter remains, it eliminates temporal aliasing.
It was an ancient game designed for 800x600 res. I'm pretty sure 3840x2160 + MSAAx4 would solve each and every aliasing issue ;)
I would also say I understand the need for TAA in games with engines incompatible with MSAA, or with things which cannot be solved with MSAA anyway. But of the two downsides, I'd rather see pixels than turn my moving image into a blurry, washed-out mess. So, until the AI can truly maintain all the detail and only apply TAA where it's really needed -- where the "naked" pixels are just a real eyesore -- I won't agree that it's any advancement at all. I've played many games with TAA, and I hated how it looked every single time. Whenever community mods to get rid of it were available, once I applied them, the image gained so much! It's like switching from an old, overused 14" CRT monitor from 1990 to a brand new one in 1997. I really don't like the "soft" look of modern games, filtered by one or many post-process filters. To each their own, I guess, but my personal view on this is as stated: TAA is bad, FSR2 is awful, and I'd like frame interpolation to be available with v-sync and separated from DLSS completely. But no.

Let's hope it's just Nvidia deciding that the latency penalty with v-sync would put a wrench into their marketing machine, and that's the reason why they did it. I personally don't see why it would be such a problem, but maybe my knowledge about GPUs is not up to the task and I'm not aware of something. Let's hope not. I want to see it "fixed" and also adopted by AMD. It should always be an option, and never a thing forced upon gamers.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 29 Sep 2022, 08:36

RonsonPL wrote:
29 Sep 2022, 08:27

It was an ancient game designed for 800x600 res. I'm pretty sure 3840x2160 + MSAAx4 would solve each and every aliasing issue ;)
I would also say I understand the need for TAA in games with engines incompatible with MSAA, or with things which cannot be solved with MSAA anyway.
You seem to be confusing completely different topics here. I wasn't talking about TAA or any spatial aliasing. But while on the topic, I recall the game being talked about was Quake, which was in fact designed for 320x200, far from 800x600. I still fail to see the relevance, though.
TAA is bad because it relies on reconstruction, exactly like a lot of bad motion blur implementations do. What Alex talked about in terms of motion blur is equivalent to SSAA, meaning asymptotically perfect anti-aliasing. Maybe you also like the chunky look of vanilla Quake, and I don't blame you, but objectively speaking, the more SSAA, the better.

Anyway, regarding the thread's topic at hand, I too find DLSS 3 to be quite the regression from a Blur Busters perspective. Interpolation was never desirable for games, no matter how good the output was. The leap in accuracy seen from this means nothing when latency is affected this much. Even if I got my hands on an RTX 40 series, I doubt I would ever make use of DLSS 3 outside of ostentation purposes.
Digital Foundry points this out well, exactly like they should. Don't shoot the messenger.
