NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 29 Sep 2022, 16:11

Chief Blur Buster wrote:
29 Sep 2022, 14:12

I hate to lose favourite historical forum members like Haste because the general forums are getting diluted.

So I am simply guiding him to a different room in Blur Busters based on my familiarity with his past posts -- he probably visited for the first time in 2 years, only briefly glanced at certain forums, and frowned. I'm simply revealing a different area of Blur Busters.
OK, understood. But I still think he had no reason to complain about this thread. At least, assuming I didn't misunderstand what or who he was complaining about here. If I did, the whole text below can be scratched out (skipping is recommended in that case).
Everyone is free to create their own threads. I won't even mind mine being deleted if something better takes its place. I wanted to bring some news. Unfortunately, DLSS3's v-sync incompatibility is very sad news for me after waiting for so many years. Was it bad to point this out?
A source (the embedded video) was provided - the only info source we've got so far. Apart from that, there was just a short, possibly poorly written comment from me, and everyone is entitled to an opinion. I consider myself a geek, way below the intelligence levels of you guys in the laboratory section, but I do consider myself an expert on the practical side of the topic, and I probably know more than half of the laboratory guys about the non-technical side, in aspects related to gaming vs. motion blur. I do catch all the related news whenever I see it.
I've spent hundreds if not thousands of hours researching the blur phenomenon in practice. I discovered the importance of blurless gaming well before John Carmack did for VR (after the Chief helped them), and I had my first surprise about the stroboscopic effect when I ran tests at 170Hz almost exactly two decades ago. I still remember how intrigued I was that day.
I even remember the exact place in the game where I tested it. :)
Since then, I've spent probably a hundred hours on various forums helping people learn about it; at least 10 people chose monitor models with low persistence because of the knowledge I brought them. I've given a big part of my life to bringing knowledge about this topic to people, and I think I'm the only person who has ever even touched the psychological side of motion blur in gaming - why and how it affects what the "mainstream" thinks.
My writing style is bad, especially in recent years, due to some health issues. My intelligence is not on par with the Lab section guys. I am aware of that. But surely I am not dragging down any quality levels by creating a topic like this one. Nor am I one of the "mainstream noobs" who just lower the quality levels, I think. ;)

BTW.
8K 1000fps+ 1000Hz+ OLED UE5 FTW! :D
Well, not in Europe. The EU wants to ban the biggest climate-change issue in the world - TVs which draw too much power. They will decide what power draw your TV can have. It just so happens that 99% of 8K TVs are about to be axed. The TV industry is appealing. We'll see how it goes, but yeah. I won't comment further, because I would definitely drag the quality of this forum down if I were to express my honest opinion about politics and their brilliant ideas here ;)

jorimt wrote:
29 Sep 2022, 14:21

From what I've seen and experienced with UE5, it's likely going to be a blurry, stuttery, over-processed (albeit somewhat pretty) mess at first. But that's how tech innovations tend to always go; one step forward, two steps back.

The process is often painfully slow and incremental, but for those innovations to eventually reach the oft unrealistic expectations and wildest imaginings of the mainstream audience, it usually means there first has to be enough early adoption to fund what typically appears to be a very compromised, incomplete, overpriced, and generally frustrating first few generations of product.

It happens with displays, game consoles, PCs, VR, game engines, rendering techniques, you name it, but as much as we'd like, nothing can start at the finish line.
Yeah, but if not for VR and its requirements, this engine still wouldn't be optimized for higher framerates, even at version 4. I also partially disagree with "it gets better". Well, to be honest, I prefer the look of other engines and dislike UE4 in most cases, although they made the forward renderer an option for VR, which I was really glad to see. But all the other stuff has stayed since the early days of 4.0. It's still overprocessed, heavy, and often stuttery. It's not bad or anything; I recognize its upsides and where it excels. But from the very first day I saw the presentation explaining how 4.0 works (before games were actually being made on it), I was already thinking "oh, bummer". Mostly the reliance on post-processing, and the way it handles it.
Well, it's a mod-friendly engine at least, so we can enjoy some nice mods for 3D Vision, which as a side effect usually allow getting rid of the post-processing in some cases. It's also popular, so more people are familiar with it, etc.
I just wish an engine focused primarily on HFR, 3D, VR, and low latency had caught on at this level of popularity. But it won't, because devs prefer the overprocessed, pseudo-realistic look for their games. There's no way to force them to create what they don't want to create.
But we've digressed. What I meant is that UE5 means more push toward the computationally heavy look of new games. New technologies put even more strain on CPU performance, the kind that cannot be helped by adding more cores and threads. We mainly have games designed for the PS4 now. If UE5 games designed for the PS5 come out, and devs "overdo" them like GTA V on PS3/X360 or Shadow of the Colossus on PS2, where sub-30fps was common, even hitting 60fps on medium or high settings may be a challenge. This could mean a huge step backward in terms of reaching the holy grail of motion in gaming (ultra-high framerates).

Well, Nvidia says they will improve the artifacting. I still hold some hope both for a huge improvement, at least when interpolating from 100+ fps, and, maybe one day, for the ability to enable this with v-sync ON. If the "OFF" setting is only enforced due to latency issues, then there's hope. This may still be unusable for VR, but VR already has its own techniques, like time warp, etc., and I can see myself playing a joypad-controlled chill-out game even if interpolation adds some latency when interpolating from 100 or even from 60fps.

------------------

About Digital Foundry: when they talk about motion blur, motion quality, or related trade-offs, it's really tough to watch. It grinds my gears. But I can't say I value only their frametime graphs. They do provide interesting information, and some of them at least partially understand the motion issue (after all, one of them plays on his CRT and has said he uses his OLED TV in low-persistence mode when possible). Their retro series is awesome, and apart from the two newest guys, their narration is entertaining and I just like listening to what they say. I couldn't stand how Alex read the script at first; it's much better now. The new guy's reading is horrible. And they do some good for the gaming world by talking to devs, which often results in frame-pacing issues, framerate drops, etc. getting fixed. They're a positive. Just not 100% independent and gamer-focused, and not nearly as knowledgeable about the motion-clarity issue in gaming as I would like them to be.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 29 Sep 2022, 16:31

thatoneguy wrote:
29 Sep 2022, 16:02

Unlike Digital Foundry, I do not claim to be a technical expert so a big nope to your argument.
Even a monkey can get a thing right once or twice. DF has way more goofs than hits.
The only thing they have "expertise" in is to make fools out of their audience.
Do you think it was just a fluke that the terms he used ended up being pretty much exactly right? In the same analysis, he brings the lighting model into context, explaining and demonstrating in detail how it has weaknesses as well as strengths, namely inconsistency in the way things are lit, so it was clearly more than just a pure guess.

And it's easy to only see goofs thanks to confirmation bias. Digital Foundry does sometimes have insider info, or in the case of PC/emulated games, the power of dissection to make empirically verifiable statements.
thatoneguy wrote:
29 Sep 2022, 15:59

Uh, excuse me?
Quake being designed for 320x200?
Have you even played Quake?
It most certainly wasn't designed for 320x200. The textures are far too crisp for that.
It was always designed with 640x480 and higher in mind, and when the Voodoo cards came out soon after, they could run it at that resolution at about 30fps.

I'm guessing you're confusing it with Doom and you're talking out of your ass.
The texel density in Quake and Doom is identical. Using this as an argument for display-resolution targeting is completely arbitrary and moot.
It was designed for 320x200 because that's what Pentiums at the time could manage at playable frame rates. It could theoretically go all the way up to 1280x1024, but that doesn't mean it was usable. Voodoo wasn't even the hardware of choice initially; Verite was. GLQuake was also never an "official" final release - it was left permanently in a beta 0.x state - so that's yet another point against it being the intended design.


RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 30 Sep 2022, 04:17

I wasn't sure which Quake that was, so I played it safe in case it was Quake 2 or 3.
And you still managed to turn it into an argument :D

Either way, 240p or 600p. Makes no difference. No aliasing of any sort at 2160p+MSAA.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 30 Sep 2022, 04:29

RonsonPL wrote:
30 Sep 2022, 04:17
I wasn't sure which Quake that was, so I played it safe in case it was Quake 2 or 3.
And you still managed to turn it into an argument :D

Either way, 240p or 600p. Makes no difference. No aliasing of any sort at 2160p+MSAA.
Quake 2 still carries on the same texel-density legacy; e.g., the buttons at head height are 32x32. It was Quake 3 that pushed beyond it, in part due to being free from the software-rendering shackles.

And indeed, you won't ever see any spatial aliasing with such settings in Quake, but this doesn't consider the dimension of time. With the limited refresh rate of a monitor, fast motion will leave gaps when V-synced. The point of motion blur here would be to blend something like 1000 rendered frames per second down into 120Hz and eliminate such gaps in motion. This is only possible because Quake is a lightweight game, so you can afford to render that many frames. Modern games have to use cheap and flawed approximations of motion blur, which never look as good as the real deal.
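(For illustration, a minimal sketch of what that blending amounts to, assuming you really can render the extra sub-frames; it's just an accumulation-buffer-style average, and the function and array names are made up for the example:)

    import numpy as np

    def blend_subframes(subframes):
        # Average a burst of high-rate sub-frames (e.g. ~8 renders per
        # 120 Hz refresh if the engine can push ~1000 fps) into one
        # output frame. The mean approximates a real camera-shutter
        # blur and fills the gaps a single sample-and-hold frame leaves.
        # Each element of subframes is an HxWx3 float image rendered
        # within the same display refresh.
        return np.mean(np.stack(subframes), axis=0)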

RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 30 Sep 2022, 04:49

OK, but how would the game know what I'm looking at? Without instant eye tracking, it can't. So adding motion blur to a game like this, which looks and feels awesome enough even at 120Hz, and surely better on a properly strobed 240Hz display, is the wrong idea IMO.
I think it's just a matter of approach. If you focus on "do I see the stroboscopic effect?" all the time, then yeah.



That's my main problem with Alex B. from Digital Foundry, although he's one of many.
He's a pro and an expert in graphics and lighting.
He's a total layman in gaming.

He simply doesn't know what he's talking about, and it shows. I've watched all of his videos on DF. I get the impression he's a graphics, photo, and programming geek, but only a casual gamer at best. He clearly lacks experience with gaming. Sometimes, when he makes settings recommendations... ah, let's leave that without comment, maybe. ;)
He has never sat down and asked himself what the process of having fun with games actually looks like. What matters, and how. What the pitfalls, traps, and surprises in this topic are. What is obvious and what is not. What's important among the things we treat as completely unimportant.

If he had, he'd never recommend turning motion blur ON, in my opinion.
So either he's one of the few people the Chief mentioned who just physically cannot stand strobing, or he focuses on this too much and fails at weighing the pros and cons of his choice. Just like people who've spent years learning how light behaves (be it photography, lighting for video and film, drawing, painting, or creating 3D art) tend to get allergic to unrealistic lighting and assume that everyone else shares the same opinion. This is why they are even willing to turn a 120fps game into a 30fps one with the same asset quality, as long as the lighting is significantly better.

This goes in the other direction too. After spending so much time on motion-related tests and research, I, too, am psychologically prone to becoming allergic to flaws in motion. I'm surely more irritated than the average gamer whenever I try to track a moving object with my eyes and can't because of motion blur. It's important to keep questioning your own opinion. That's the scientific approach. You should never be 100% sure you're right.

People ignore the psychological side of gaming and the related technical issues. But it's really more important than it seems at first glance.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 30 Sep 2022, 04:57

Alex specifically mentioned camera motion blur as being detrimental due to its loss in clarity with fast mouse motion. Applying it per-object only would specifically mitigate this, which to me sounds like he favors designing visuals in a way that suits gaming.

The artifacts are hard to ignore, though. This is a screencap from his video where he explains the issue.
This is reason enough for me to disregard it, but that's just my subjective stance.

RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 30 Sep 2022, 05:49

It's good that he prefers object motion blur rather than camera blur, but what if I want to track the moving object with my eyes?
For example, God of War III has almost no camera movement, and yet, I fall asleep while playing on sample and hold mode. When I switch to low persistence, I enjoy the game way more.
I just don't like the blur in general. I don't care for inferior "smoothness". Especially at 120fps it's not a big deal.
And let's not forget that the better the motion blur tech is, the more it costs. Even if not in direct performance, then in GPU die area, etc.
I really think we'd be better off without it.
So I'm not happy when Alex and other ray-tracing fans say "finally we'll get that superb motion blur quality thanks to ray tracing" etc.

About the artifacts, OK. If they say Nvidia promised to improve this, we can wait with our judgement. Personally, I expected interpolating from 120fps to be almost completely free of artifacting if the data from the GPU is used to aid the "old style" frame interpolation methods. Interpolating from 60fps should be OK-ish at slow and medium motion.
I'm most interested in how it looks compared to native without any AA or AI upscaling tech enabled, without motion blur.
BTW, did you notice how Digital Foundry compares the image quality of temporal AA technologies to "native", where their "native" is a frame from the game with TAA enabled? That's as far from native as it gets, in my opinion.

silikone
Posts: 57
Joined: 02 Aug 2014, 12:27

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by silikone » 30 Sep 2022, 06:07

RonsonPL wrote:
30 Sep 2022, 05:49
It's good that he prefers object motion blur rather than camera blur, but what if I want to track the moving object with my eyes?
For example, God of War III has almost no camera movement, and yet, I fall asleep while playing on sample and hold mode. When I switch to low persistence, I enjoy the game way more.
I just don't like the blur in general. I don't care for inferior "smoothness". Especially at 120fps it's not a big deal.
And let's not forget that the better the motion blur tech is, the more it costs. Even if not in direct performance, then in GPU die area, etc.
I really think we'd be better off without it.
So I'm not happy when Alex and other ray-tracing fans say "finally we'll get that superb motion blur quality thanks to ray tracing" etc.

About the artifacts, OK. If they say Nvidia promised to improve this, we can wait with our judgement. Personally, I expected interpolating from 120fps to be almost completely free of artifacting if the data from the GPU is used to aid the "old style" frame interpolation methods. Interpolating from 60fps should be OK-ish at slow and medium motion.
I'm most interested in how it looks compared to native without any AA or AI upscaling tech enabled, without motion blur.
BTW, did you notice how Digital Foundry compares the image quality of temporal AA technologies to "native", where their "native" is a frame from the game with TAA enabled? That's as far from native as it gets, in my opinion.
Actually, native would be the wrong reference too. What you'd ideally want to compare against is the ground truth, i.e. a massive framebuffer downsampled to fit the target.
Some amount of blur is necessary for effective AA, even when supersampling. The old way of doing MSAA amounts to a box filter, which is sharp but isn't as effective as other filters. If this reference is sufficiently blurry, then you could argue that native with TAA is better than DLSS.
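(For illustration, a rough sketch of that kind of "ground truth" reference, assuming a plain box-filter downsample of a supersampled framebuffer; the names and the divisibility of the resolutions are assumptions for the example:)

    import numpy as np

    def box_downsample(ss_frame, factor):
        # ss_frame: HxWxC supersampled render whose dimensions are an
        # exact multiple of `factor`. Averaging each factor x factor
        # block is the sharp-but-simple box filter that classic
        # MSAA/SSAA resolves amount to; a wider filter (Gaussian,
        # Lanczos) would look blurrier but reconstruct more faithfully.
        h, w, c = ss_frame.shape
        blocks = ss_frame.reshape(h // factor, factor, w // factor, factor, c)
        return blocks.mean(axis=(1, 3))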

This is the kind of research I miss seeing from Digital Foundry. Their videos are meant to appeal to layman gamers, not geeks with a passion for theoretical video game fidelity.

RonsonPL
Posts: 122
Joined: 26 Aug 2014, 07:12

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by RonsonPL » 30 Sep 2022, 09:50

I am well aware of that. Yes, MSAA won't help with everything, and yes, to get a 100% antialiased image with all the modern shading techniques, you'll need blur.
But I'd argue we're better off with some aliasing than with blur. I don't like the look of DSR 4:1 even with the "smoothness" value set to 0%. It's just not as... hard to say. The image just doesn't have the same amount of life in it anymore. This is why I often struggle to decide whether I even want to use DSR 4:1 in some older games where I can get away with it without dropping below the required FPS.

I guess when you start from sub-240p (what was the Wolfenstein 3D resolution again? ;) ), you're not that allergic to a few pixels here and there.
But the way the dev designs a game matters. If the aliasing comes from shaders rendered at 12.5% resolution, then not even 16K or 64K res would help. A good example is RDR 2. That game lacks higher-quality assets on PC, so when you disable TAA, which is what the game was designed around, you get awful pixelation in tree leaves, shadows, etc. Judging by a two-second-long part of the Assassin's Creed video on Digital Foundry, the same issue may be present in that game too. It's also often visible in details like hair.

Also, John Carmack often mentions in his talks that for VR you need to design the game to avoid aliasing, rather than relying on fixing the issue later.

It's a matter of design choice. There's no need for such a soft, blurred image in 2022, when games from the past often look cleaner and sharper - not in an era where 4K displays are already widely adopted. You can sacrifice some effects which weren't in games for 30 years without it ever being a problem, like advanced volumetrics. You can switch from deferred rendering to forward rendering. You can use less dynamic lighting (the push toward "all dynamic or we'll die!!!" is ridiculous even in racing games, where static lighting on the track is just fine). You can save all that power and use it to increase the resolution, just like the wiser part of VR game devs does. There's no need to suffer a 1080p TAA image at 30fps instead of 2160p at a stable 120fps just because you want ray tracing and a ton of dynamic lights for realistic lighting... in a game with lightsabers, dragons, magic spells, time machines, and talking flowers.

yuri
Posts: 46
Joined: 09 Jun 2022, 14:19

Re: NVIDIA introduces DLSS3, interpolates frames, but is not v-sync compatible.

Post by yuri » 30 Sep 2022, 17:37

Chief Blur Buster wrote:
28 Sep 2022, 18:32
Still some useful nuggets, to interpret from the images (rather than words)

Virtual reality requires mandatory VSYNC, so it's too bad DLSS 3 is not ready yet for VR.

The quality of DLSS definitely needs to massively improve over the long term, with improving AI/neural networks.
RonsonPL wrote:
28 Sep 2022, 14:18
Maybe someone should tell them that low persistence mode exists?
Jeeesh. This is from the same guy (Alex B.) who said that for 600fps games (an old game) on a high refresh monitor, he recommends... enabling motion blur.
There's actually *some* credence to this, but for a different reason...

So here's a small informative piece (for educating readers and reviewers alike):

Useful Info About the GPU Blur Effect's Benefit in the Refresh Rate Race to Retina Refresh Rates

Some of us hate phantom arrays so much (The Stroboscopic Effect of Finite Frame Rates) to the point where it creates motion sickness and nausea, making the GPU-effect blur filter an absolute necessity for some of us, unfortunately.

See this person complaining about stroboscopic effects, and they find it much more comfortable to enable GPU motion blur effects. When framerates are extremely high, the blur effect can sometimes help make motion more bearable to those sensitive to stroboscopic effect / phantom arrays.

(It happens to be related to one of the common causes of PWM-dimming headaches -- headaches caused by stroboscopic effects rather than from the flicker itself. But PWM-free backlights do not solve all stroboscopic effects, and motion blur reduction strobe backlights can actually amplify the stroboscopic effect).

It's why I also know retina refresh rates need to be roughly 2x oversampled, so that 1 frametime of GPU blur effect is still below the human-visibility threshold... kind of a temporal antialiasing with oversampling (a sort of Nyquist compensation along the temporal domain).

Even a display refresh rate of 100,000 hertz (with a frame rate to match) can still produce stroboscopic phantom array effects for motion going 1 million pixels per second (1,000,000 pixels per second / 100,000 Hz refresh = a stroboscopic step every 10 pixels) -- e.g. an ultrabright 10,000-nit HDR magnesium tracer bullet zooming past the field of view.
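(A back-of-envelope version of that arithmetic, using only the hypothetical numbers above:)

    # Phantom-array spacing with a stationary gaze: one image per refresh,
    # so consecutive images land (motion speed) / (refresh rate) pixels apart.
    speed_px_per_sec = 1_000_000   # the hypothetical tracer-bullet pan
    refresh_hz = 100_000
    gap_px = speed_px_per_sec / refresh_hz
    print(gap_px)  # 10.0 -> a stroboscopic step every 10 pixels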

That tracer bullet should look like a continuous blur rather than a stroboscopic effect, assuming the lumens surge of that single brief refresh cycle is enough for a human to register the brief tracer-bullet blur across the field of vision. So stroboscopic-effect sensitivity is WAY higher than motion-blur sensitivity.

The only way to solve this is to oversample the refresh rate to roughly 2x the retina refresh rate and THEN add intentional GPU blurring to fix the stroboscopic effect.

So, this is my scientific explanation of why we will need GPU motion blur (below human-detection thresholds) to solve the mismatch between a finite frame rate and analog real-life motion (for, say, a Star Trek Holodeck).

As some readers here know, we calculated that a 180-degree-FOV retina-resolution "Holodeck" requires approximately 20,000fps at 20,000Hz. This is to eliminate human-visible motion blur at all realistically eye-trackable motion speeds. It follows from the Vicious Cycle Effect, where increased resolutions amplify sensitivity to framerate & Hz: more resolution per unit of angular vision means more opportunity to notice the difference between static resolution and motion resolution (the "scenery suddenly blurs during a pan" effect). So while ~2000 pixels/sec crosses a 24" 1080p screen in 1 second, it takes ~16,000 pixels/sec to cross a 180-degree 16K-resolution display in 1 second. So 1000fps 1000Hz sample-and-hold still creates 16 pixels of motion blur (if eye-tracked) or 16-pixel gaps in the phantom array (if the motion zooms past a stationary gaze).

A way to fix the latter is to add a GPU blur effect for the stationary-eye, moving-object situation. But that turns 16,000 pix/sec at 1000fps 1000Hz into 32 pixels of motion blur (during eye tracking) just to fix the phantom array during a stationary gaze. So the retina refresh rate is much higher than that; even 20,000 pixels/sec would still produce 2 pixels of motion blur at 10,000fps 10,000Hz, likely barely visible except in extreme situations where the pixels are stretched far apart (like on a VR headset of >180 degrees) to the point where individual pixels don't maximize the angular resolution of your vision center...
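(The same trade-off as a tiny illustrative calculator, using only the example numbers above; the function name is made up:)

    def motion_blur_px(speed_px_per_sec, hz, gpu_blur_frametimes=0):
        # Eye-tracked blur on an ideal 0ms-GtG sample-and-hold display:
        # one frametime of persistence per refresh, plus however many
        # frametimes of intentional GPU blur are layered on top.
        return speed_px_per_sec / hz * (1 + gpu_blur_frametimes)

    print(motion_blur_px(16_000, 1_000))     # 16.0 px, sample-and-hold only
    print(motion_blur_px(16_000, 1_000, 1))  # 32.0 px once 1 frametime of GPU blur is added
    print(motion_blur_px(20_000, 10_000))    # 2.0 px even at 10,000fps 10,000Hz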

The diametrically opposed compromises of fixing stroboscopics versus fixing persistence blur are very tough to reconcile without ultra-high refresh rates, as explained in Blur Busters Law: The Amazing Journey To Future 1000Hz Displays. Obviously, small refresh-rate geometric steps like 240Hz vs 360Hz (a 1.5x difference, throttled to ~1.1x by jitter & nonzero GtG pixel response) are hard to see, but the blur difference of 240Hz vs 1000Hz is very clear to the average population in moving-text readability tests. We are finding in early lab tests that >90% of the human population can tell apart 4x blur differences, like a 1/240sec SLR photo versus a 1/1000sec SLR photo (with these scientific variables), and a framerate=Hz 240Hz vs 1000Hz 0ms-GtG display has the same blur equivalence as photos at those shutter speeds (see Pixel Response FAQ).

A different solution is an eyetracker sensor, using eye-tracking-compensated GPU blurring that only kicks in when eye motion and display motion diverge (i.e. it becomes necessary to blur the delta between the motion vector of the eye and the motion vector of moving objects on the display...). An eyetracker sensor would dramatically lower the retina refresh rate of a display, since you can have stroboscopics-free strobing and still have sharp motion, but it would make it a single-viewer display (e.g. VR); the flicker would simply need to be well above the flicker fusion threshold, and then you can call it a day.
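(A very rough sketch of that idea, assuming per-pixel motion vectors and a gaze velocity from the eyetracker are available; all names here are hypothetical:)

    import numpy as np

    def compensated_blur_vectors(object_motion, eye_motion):
        # object_motion: HxWx2 per-pixel motion vectors in px/frame;
        # eye_motion: length-2 gaze velocity in px/frame.
        # The per-pixel blur vector is the delta between the two. Where
        # they match (tracked object, or both stationary) the delta is
        # ~zero and no GPU blur is applied; where they diverge (object
        # zooming past a fixed gaze) the delta sets the blur length and
        # direction.
        return object_motion - np.asarray(eye_motion)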

But this may need to be oversampled to 40,000fps at 40,000Hz if we want to add intentional GPU motion blurring to fix the stroboscopic-effect issues of ultrafast motion zooming past our field of view -- but this is kind of a "final whac-a-mole" before a display temporally passes an A/B blind test between transparent ski goggles and a VR headset (can't tell real life apart from VR), which I informally call the theoretical "Holodeck Turing Test"...

Sorry about this subject sidetrack, but I needed to scientifically explain a certain utility of the GPU blur effect...

On the opposite end of the spectrum (ultra-low frame rates like 20fps), the stutter is nauseating for some humans, so the GPU blur effect fixes it. The GPU blur effect is not as useful to me at intermediate triple-digit frame rates, but it becomes a useful pick-your-poison choice at both ultra-low frame rates (fixing stutter, the worse evil) and ultra-high frame rates (fixing stroboscopics when you need to pass a reality A/B test).

In the near term, for single-viewer displays (e.g. VR), the way to improve the GPU blur effect is zero-latency eye-tracking-compensated GPU blur, so you never see the GPU blur effect EXCEPT when it's being used to fix stroboscopics (stationary eye, moving object).

TL;DR: Eye-tracking-compensated dynamic GPU motion blur is kind of a blurless Holy Grail band-aid for virtual reality headsets and for people who get headaches from stroboscopics/phantom arrays. Basically, GPU-blur the difference between the eye-motion vector and the object-motion vector. That way, zero-difference situations (stationary eye + stationary object, AND tracking eye + moving object) never have an unnecessary GPU blur effect active.

The other "details" metioned by YouTubers are a bit baity/sensationalism to get the views, but as the resident Hz mythbuster -- I needed to shine some angle of truth to why GPU blur effect is a legitimate "Right Tool For Right Job" in the refresh rate to retina refresh rates...
I'm the person the Chief mentioned about the stroboscopic effect, and yeah, I was hyped about interpolation at high framerates, but DLSS lowers the overall quality of the image, and this looks worse than DLSS 2 :cry:

Nvidia really needs to separate the interpolation feature from DLSS.
The latency is crazy AF with DLSS too.
Sadly, it's locked behind the 4000-series cards,
and it's never gonna be useful with a strobed G-SYNC display.

As for motion blur (Digital Foundry made a video about it), it helps a lot with masking the gaps between frames, but devs don't always provide a slider to adjust the motion blur. For someone like me who hates the stroboscopic effect at high frame rates, it's better to inject 1 frame of blur with ReShade than to do a 180-degree turn with double images everywhere.

In fact, I've tested in BPM: Bullets Per Minute that if you use the lowest motion blur setting at 240fps (the low setting, with the value at 1/10), the strobe effect becomes imperceptible unless you look for it, and the overall blur is not really disgusting to see.
