Still, there are some useful nuggets to interpret from the images (rather than the words).
Virtual reality requires VSYNC, so it's too bad DLSS 3 is not ready for VR yet.
The quality of DLSS definitely needs to improve massively over the long term, as AI/neural networks improve.
RonsonPL wrote: ↑28 Sep 2022, 14:18
Maybe someone should tell them that low persistence mode exists?
Jeeesh. This is from the same guy (Alex B.) who said that for 600fps games (an old game) on a high refresh monitor, he recommends... enabling motion blur.
There's actually *some* credence to this, but for a different reason...
So here's a small informative piece (for educating readers and reviewers alike):
Useful Info About GPU Blur Effects' Benefit To Refresh Rate Race To Retina Refresh Rates
Some of us hate phantom arrays (The Stroboscopic Effect of Finite Frame Rates) so much that they create motion sickness and nausea, making the GPU motion blur filter an absolute necessity for some of us, unfortunately.
See this person complaining about stroboscopic effects, and they find it much more comfortable to enable GPU motion blur effects. When framerates are extremely high, the blur effect can sometimes help make motion more bearable to those sensitive to stroboscopic effects / phantom arrays.
(It happens to be related to one of the common causes of PWM-dimming headaches -- headaches caused by stroboscopic effects rather than from the flicker itself. But PWM-free backlights do not solve all stroboscopic effects, and motion blur reduction strobe backlights can actually amplify the stroboscopic effect).
It's why I also know retina refresh rates need to be roughly 2x oversampled so that 1 frametime of GPU blur effect is still below the human-visibility threshold... kind of a temporal antialiasing with oversampling (kind of a Nyquist compensation along the temporal domain).
Even a display refresh rate of 100,000 hertz (with a frame rate to match) can still produce stroboscopic phantom array effects for motion going 1 million pixels per second (1 million pixels per second / 100,000 hertz refresh = stroboscopic stepped phantom array every 10 pixels) -- e.g. like an ultrabright 10,000nit HDR magnesium tracer bullet zooming past field of view.
It should look like a continuous blur rather than a stroboscopic effect, assuming the lumens surge of that single brief refresh cycle is enough for the human to register the brief tracer-bullet streak across the field of vision. So stroboscopic-effect sensitivity is WAY higher than motion-blur sensitivity.
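For anyone who wants to sanity-check the arithmetic, here is a minimal back-of-the-envelope sketch (the helper name is mine, purely illustrative): the phantom-array step spacing is simply the distance an object travels during one refresh cycle.

```python
# Back-of-the-envelope sketch of the phantom-array arithmetic above.
# Hypothetical helper, not from any existing tool: the stroboscopic step
# spacing is the distance an object travels during one refresh cycle.

def phantom_array_gap_px(speed_px_per_sec: float, refresh_hz: float) -> float:
    """Pixels between consecutive 'copies' of a moving object under stationary gaze."""
    return speed_px_per_sec / refresh_hz

# The ultrabright tracer-bullet example: 1,000,000 px/sec at 100,000 Hz
print(phantom_array_gap_px(1_000_000, 100_000))  # -> 10.0 px steps, still human-visible
# A more ordinary 1,000 px/sec motion at the same refresh rate
print(phantom_array_gap_px(1_000, 100_000))      # -> 0.01 px, effectively a continuous blur
```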
The only way to solve this is to oversample the refresh rate to roughly 2x the retina refresh rate, THEN add intentional GPU blurring to fix the stroboscopic effect.
So, this is my scientific explanation of why we will need GPU motion blur (below human-detection thresholds) to solve the mismatch between finite frame rates and analog real-life motion (for, say, a Star Trek Holodeck).
As some readers here know, we calculated that a 180-degree-FOV retina-resolution "Holodeck" requires approximately 20,000fps at 20,000Hz to eliminate human-visible motion blur for all realistically eye-trackable motion speeds. This is because of the Vicious Cycle Effect, where increased resolutions amplify sensitivity to framerate & Hz: more resolution per degree of angular vision makes it easier to notice the difference between static resolution and motion resolution (the "scenery suddenly blurs during a pan" effect). So while ~2,000 pixels/sec takes 1 second to cross a 24" 1080p screen, it takes ~16,000 pixels/sec to cross a 180-degree 16K-resolution display in 1 second. So 1000fps 1000Hz sample-and-hold still creates 16 pixels of motion blur (if eye-tracked) or a 16-pixel phantom-array gap (if the motion zooms past while gaze is stationary).
A way to fix the latter is to add a GPU blur effect for the stationary-eye, moving-object situation. But that turns 16,000 pixels/sec on 1000fps 1000Hz into 32 pixels of motion blur (during eye tracking), just to fix the phantom array during stationary gaze. So the retina refresh rate is much higher than that -- even 20,000 pixels/sec would still have 2 pixels of motion blur at 10,000fps 10,000Hz, likely barely visible in extreme situations where all the pixels are stretched wide apart (like on a VR headset of >180 degrees) to the point where individual pixels don't maximize the angular resolution of your central vision...
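To put those numbers in one place, here is a minimal sketch (my own illustrative helpers, assuming an idealized 0ms-GtG sample-and-hold display running at framerate = Hz, so persistence equals one full frametime):

```python
# Minimal sketch of the blur-vs-phantom-array trade-off described above.
# Assumes an idealized 0ms-GtG sample-and-hold display with framerate == Hz,
# so each frame persists for exactly one frametime.

def sample_and_hold_blur_px(speed_px_per_sec: float, hz: float) -> float:
    """Eye-tracked motion blur, in pixels, from display persistence alone."""
    return speed_px_per_sec / hz

def blur_with_gpu_blur_px(speed_px_per_sec: float, hz: float) -> float:
    """Adding one extra frametime of intentional GPU blur roughly doubles the trail."""
    return 2 * sample_and_hold_blur_px(speed_px_per_sec, hz)

print(sample_and_hold_blur_px(16_000, 1_000))   # 16 px at 1000fps 1000Hz
print(blur_with_gpu_blur_px(16_000, 1_000))     # 32 px once GPU blur is added to hide phantom arrays
print(sample_and_hold_blur_px(20_000, 10_000))  # 2 px at 10,000fps 10,000Hz
```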
The diametrically opposed compromises of fixing stroboscopics versus fixing persistence blur are very tough without ultra-high refresh rates, as explained in Blur Busters Law: The Amazing Journey To Future 1000Hz Displays. Obviously, small refresh-rate jumps like 240Hz vs 360Hz (a 1.5x difference throttled to ~1.1x due to jitter & nonzero GtG pixel response) are hard to see, but the blur difference of 240Hz vs 1000Hz is very clear to the average population in moving-text-readability tests. We are finding in early lab tests that >90% of the human population can tell apart 4x blur differences, like a 1/240sec SLR photo versus a 1/1000sec SLR photo (with these scientific variables) -- and framerate=Hz 240Hz vs 1000Hz 0ms-GtG displays have the same blur as photos at said shutter speeds (see Pixel Response FAQ).
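As a quick illustration of the camera-shutter analogy (my own framing, not the lab-test methodology): on an idealized framerate=Hz sample-and-hold display, each frame stays on screen for the full refresh period, so its blur behaves like a photo taken at that exposure time.

```python
# Illustrative only: persistence of an idealized framerate=Hz, 0ms-GtG
# sample-and-hold display, expressed as an equivalent camera shutter time.

def persistence_ms(hz: float) -> float:
    """How long each frame remains displayed -- the effective 'shutter speed'."""
    return 1000.0 / hz

print(persistence_ms(240))                         # ~4.17 ms, like a 1/240sec SLR photo
print(persistence_ms(1000))                        # 1.0 ms, like a 1/1000sec SLR photo
print(persistence_ms(240) / persistence_ms(1000))  # ~4.2x, the "4x blur difference"
```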
A different solution is an eye-tracker sensor plus eye-tracking-compensated GPU blurring that only executes when eye motion and display motion diverge (i.e. it only becomes necessary to blur the delta between the motion vector of the tracked eye and the motion vector of moving objects on the display...). An eye-tracker sensor would dramatically lower the retina refresh rate of a display, since you can have stroboscopics-free strobing and still have sharp motion -- but it would make it a single-viewer display (e.g. VR) -- the flicker would simply need to be well above the flicker fusion threshold, and then call it a day.
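Here's a rough sketch of that eye-tracking-compensated logic (hypothetical pseudologic in Python, not an existing engine API): blur only the difference between the eye-motion vector and the object-motion vector.

```python
# Rough sketch of eye-tracking-compensated GPU blur (hypothetical, not an
# existing engine API): blur only the velocity delta between eye and object.

from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

def blur_vector_px(object_velocity: Vec2, eye_velocity: Vec2, frametime_sec: float) -> Vec2:
    """Per-object blur trail for one frame: (object velocity - eye velocity) * frametime."""
    return Vec2(
        (object_velocity.x - eye_velocity.x) * frametime_sec,
        (object_velocity.y - eye_velocity.y) * frametime_sec,
    )

frametime = 1 / 1000  # e.g. a hypothetical 1000fps 1000Hz headset

# Eye tracking the moving object: zero delta, so no GPU blur is applied (stays sharp)
print(blur_vector_px(Vec2(16_000, 0), Vec2(16_000, 0), frametime))  # Vec2(x=0.0, y=0.0)

# Stationary gaze while the object zooms past: blur the full 16 px step to hide the phantom array
print(blur_vector_px(Vec2(16_000, 0), Vec2(0, 0), frametime))       # Vec2(x=16.0, y=0.0)
```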
But this may need to be oversampled to 40,000fps at 40,000Hz if we want to add intentional GPU motion blurring to fix stroboscopic-effect issues of ultrafast motion zooming past our field of view -- but this is kind of a "final whack-a-mole" before a display temporally passes an A/B blind test between transparent ski goggles and a VR headset (can't tell real life and VR apart), which I informally call the theoretical "Holodeck Turing Test"...
Sorry about this subject sidetrack, but I needed to scientifically explain a certain utility of the GPU blur effect...
On the opposite end of the spectrum (ultra-low frame rates like 20fps), the stutter is nauseating for some humans, so the GPU blur effect fixes that too. The GPU blur effect is not as useful to me at intermediate triple-digit frame rates, but it becomes a useful pick-your-poison choice at both ultra-low frame rates (fixing stutter, the worse evil) and ultra-high frame rates (fixing stroboscopics when you need to pass a reality A/B test).
In the near term, for single-viewer displays (e.g. VR), the useful improvement to the GPU blur effect is zero-latency eye-tracking-compensated GPU blur, so you don't see the blur effect EXCEPT when it's being used to fix stroboscopics (stationary eye, moving object).
TL;DR: Eye-tracking-sensor-compensated dynamic GPU motion blur is kind of a blurless Holy Grail Band-Aid for virtual reality headsets and for people who get headaches from stroboscopics/phantom arrays. Basically, GPU-blur the difference between the eye-motion vector and the object-motion vector. That way, zero-difference situations (stationary eye + stationary object, AND tracking eye + moving object) never have an unnecessary GPU blur effect active.
The other "details" mentioned by YouTubers are a bit baity/sensationalist to get the views, but as the resident Hz mythbuster -- I needed to shine some light on why the GPU blur effect is a legitimate "Right Tool For The Right Job" in the refresh rate race to retina refresh rates...