Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 08:31
by jorimt
hmukos wrote:
17 May 2020, 06:03
Why is there such a huge variation between Min and Max input latency with VSYNC OFF + 2000 FPS @ 60Hz? Shouldn't the game check inputs and show at least something on screen every 0.5ms or so, regardless of refresh rate? Or have I misunderstood what "first on-screen reaction" means?
"First On-screen" reaction means the first update spotted on-screen, be it at the top, bottom, or anywhere in-between, is registered as a single sample.

As for why there is such a "huge" variation at 60Hz, one big factor is "scanout" latency; 60Hz has a relatively slow 16.6ms scanout, so the duration of a single frame scan (16.6ms scanout = top reading: ~0ms, middle reading: 8.3ms, bottom reading: 16.6ms) is much greater than it is at, say, 240Hz (4.2ms scanout = top reading: ~0ms, middle reading: 2.1ms, bottom reading: 4.2ms).

Basically, the slower the scanout, the bigger difference there will be between the min, avg, and max readings with the "First on-screen reaction" measurement method.
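
To put rough numbers on that, here's a quick back-of-the-envelope sketch (Python; an idealized model with no display processing lag, where the measured reaction can land anywhere in the scanout):

Code:

# Scanout-only contribution to the "first on-screen reaction" readings.
# Idealized assumptions: no display processing lag, and the measured
# reaction can land at the top, middle, or bottom of the scanout.

def scanout_readings_ms(refresh_hz):
    scanout_ms = 1000.0 / refresh_hz          # time to scan one full refresh
    return {
        "top (min)": 0.0,                     # reaction lands right at the raster
        "middle (avg)": scanout_ms / 2.0,     # reaction lands mid-scan
        "bottom (max)": scanout_ms,           # reaction lands at the end of the scan
    }

for hz in (60, 240):
    readings = {k: round(v, 1) for k, v in scanout_readings_ms(hz).items()}
    print(hz, "Hz:", readings)
# 60 Hz: {'top (min)': 0.0, 'middle (avg)': 8.3, 'bottom (max)': 16.7}
# 240 Hz: {'top (min)': 0.0, 'middle (avg)': 2.1, 'bottom (max)': 4.2}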

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 12:20
by hmukos
jorimt wrote:
17 May 2020, 08:31
As for why there is such a "huge" variation at 60Hz, one big factor is "scanout" latency; 60Hz has a relatively slow 16.6ms scanout, so the duration of a single frame scan (16.6ms scanout = top reading: ~0ms, middle reading: 8.3ms, bottom reading: 16.6ms) is much greater than it is at, say, 240Hz (4.2ms scanout = top reading: ~0ms, middle reading: 2.1ms, bottom reading: 4.2ms).
That's what I don't understand. Suppose we made an input right before "Frame 2", then it rendered, and its torn part was scanned out. How is that different in terms of input lag than if we did the same for "Frame 7"? Doesn't V-SYNC OFF mean that as soon as our frame is rendered, its torn part is instantly shown at the place where the monitor is currently scanning?
[Image: scanout/tearing diagram]

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 13:05
by Chief Blur Buster
hmukos wrote:
17 May 2020, 12:20
That's what I don't understand. Suppose we made an input right before "Frame 2", then it rendered, and its torn part was scanned out. How is that different in terms of input lag than if we did the same for "Frame 7"? Doesn't V-SYNC OFF mean that as soon as our frame is rendered, its torn part is instantly shown at the place where the monitor is currently scanning?

[Image: scanout/tearing diagram]
Yes, VSYNC OFF frameslices splice into cable scanout (usually within microseconds), and some monitors have panel scanout that is in sync with cable scanout (sub-refresh latency pixel-for-pixel).

What I think Jorim is saying is that higher refresh rates reduce the latency error margin caused by scanout. 60Hz produces a 16.7ms range, while 240Hz produces a 4.2ms range.

At a given frame rate, say 500fps, the display outputs 4x more frameslice pixels per frame at 240Hz than at 60Hz. That's because the scanout is 4x faster, so a frame will spew out a 4x-taller frameslice at 240Hz than at 60Hz for the same frame rate. Each frameslice is a latency sub-gradient within the scanout lag gradient. So at 100fps, a frameslice has a lag of [0ms...10ms] from top to bottom; at 500fps, a frameslice has a lag of [0ms...2ms] from top to bottom. We're assuming a 60Hz display doing a 1/60sec scanout and a 240Hz display doing a 1/240sec scanout, with identical processing lag, and thus identical absolute lag.
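
For concreteness, the same arithmetic as a small Python sketch (an idealized model: exact 1/refresh-rate scan velocity, instant splicing at the raster, no processing lag):

Code:

# Frameslice height and per-frameslice lag gradient for VSYNC OFF.
# Idealized model: the display scans the full screen in exactly
# 1/refresh_hz, and new frames splice in instantly at the raster.

def frameslice(refresh_hz, fps, screen_height_px=1080):
    frame_time_ms = 1000.0 / fps                      # how long a frame stays newest
    scanout_ms = 1000.0 / refresh_hz                  # time to scan the whole screen
    slice_height_px = (frame_time_ms / scanout_ms) * screen_height_px
    lag_gradient_ms = frame_time_ms                   # top of slice ~0ms, bottom ~1/fps
    return slice_height_px, lag_gradient_ms

for hz in (60, 240):
    height, gradient = frameslice(hz, fps=500)
    print(f"{hz}Hz @ 500fps: ~{height:.0f}px frameslice, lag gradient 0..{gradient:.0f}ms")
# 60Hz @ 500fps: ~130px frameslice, lag gradient 0..2ms
# 240Hz @ 500fps: ~518px frameslice, lag gradient 0..2ms  (4x taller slice)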

Even if average absolute lag is identical for a perfect 60Hz monitor and a perfect 240Hz monitor, the lagfeel is lower at 240Hz because more pixel update opportunities are occurring per pixel, allowing earlier visibility per pixel.

Now, if you measured the average lag of all 1920x1080 pixels, from all possible (infinite) gametime offsets to all actual photons (including situations where NO refresh cycles are happening and NO frames are happening), the average visibility lag for a pixel is actually lower at the higher refresh rate, thanks to the more frequent pixel excitations.

Understanding this properly requires decoupling latency from frame rate, and thinking of gametime as an analog, continuous concept instead of a digital value stepping from frame to frame. Once you math this out correctly, it makes sense.
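
A quick way to see the math (again an idealized model: instant scanout, continuous analog gametime): a change arriving at a uniformly random moment waits, on average, half a refresh period before that pixel's next update opportunity.

Code:

# Average extra visibility lag from refresh sampling alone.
# Idealized model: instant scanout, continuous analog gametime; a change
# arriving at a uniformly random moment waits, on average, half a refresh
# period before that pixel's next update opportunity.

def avg_sampling_lag_ms(refresh_hz):
    refresh_period_ms = 1000.0 / refresh_hz
    return refresh_period_ms / 2.0   # expected wait for a uniform random arrival

for hz in (60, 240):
    print(f"{hz}Hz: +{avg_sampling_lag_ms(hz):.1f}ms average wait per pixel")
# 60Hz: +8.3ms average wait per pixel
# 240Hz: +2.1ms average wait per pixel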

Metaphorically, and mathematically (in photon visibility math), it is akin to wearing shutter glasses flickering at 60Hz versus 240Hz while walking around in real life (not staring at a screen). Real life will feel slightly less lagged if the shutters of your shutter glasses are flashing at 240Hz instead of 60Hz. That's because of more frequent retina-excitement opportunities from the real-world photons hitting your eyeballs, reducing the average lag of the apparent photons.

Once you use proper mathematics (treat an infinite-refresh-rate display as the reference) and calculate all lag offsets from a theoretical analog gametime, there's always more lag at lower refresh rates, despite having the same average absolute lag.

Most lag formulas don't factor this in -- not even absolute lag. Nearly all lag test methodologies neglect to consider the more frequent pixel-excitation opportunities, if you're trying to use the real world as the scientific lag reference.

The existing lag numbers are useful for comparing displays, but they neglect to reveal the lag advantage of 240Hz over 60Hz at identical absolute lag, thanks to the more frequent sampling along the analog domain. Correctly mathing this out requires a lag test method that decouples from digital references and calculates against a theoretical infinite-Hz, instant-scanout mathematical reference (i.e. real life as the lag reference). Once the lag formula does this, suddenly the "same absolute lag" number feels like it's hiding many lag secrets that many humans do not understand.

We have a use for this type of lag reference too, as part of a future lag standard, because VR and Holodecks are trying to emulate real life, and thus we need lag standards with a Venn diagram big enough to use real life as the reference.

Lag is a complex science.

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 13:35
by jorimt
hmukos wrote:
17 May 2020, 12:20
Doesn't V-SYNC OFF mean that as soon as our frame is rendered, it's teared part is instantly shown at the place where monitor was scanning right now?
Technically, yes, but to add to what the Chief said, this would be more the case if I had tested by making the camera continually spin in place. Instead, I was testing randomized user input, based on my clicking the mouse approximately every 1 second.

So while the "absolute" input lag should be the same at that framerate regardless of refresh rate, the scene was stationary until the input was registered, and 60Hz has a slower scanout than the refresh rates in the remaining graphs. It therefore has a higher possible min/avg/max range: the "span" between the top, middle, and bottom of the screen occurs over a longer period than at the higher refresh rates, so the appearances of those same randomized inputs occur at less frequent intervals.
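
Modeled as a toy Monte Carlo (my own simplification, not the exact test methodology: inputs at uniformly random moments, near-instant rendering at very high fps, and the reaction becoming visible anywhere from the current raster position to a full scan later), the readings spread out the way the graphs show:

Code:

import random

# Toy Monte Carlo of the min/avg/max "first on-screen reaction" spread.
# Simplifying assumptions (mine, not the exact test methodology): inputs
# arrive at uniformly random moments, the reacting frame renders almost
# instantly (very high fps, VSYNC OFF), and it becomes visible anywhere
# from the current raster position to one full scanout later.

def simulate_spread_ms(refresh_hz, fps=2000, samples=100_000, seed=1):
    rng = random.Random(seed)
    scanout_ms = 1000.0 / refresh_hz
    frame_ms = 1000.0 / fps
    lags = [rng.uniform(0.0, frame_ms) + rng.uniform(0.0, scanout_ms)
            for _ in range(samples)]
    return min(lags), sum(lags) / len(lags), max(lags)

for hz in (60, 240):
    lo, avg, hi = simulate_spread_ms(hz)
    print(f"{hz}Hz: min {lo:.1f} / avg {avg:.1f} / max {hi:.1f} ms")
# Roughly: 60Hz -> min ~0 / avg ~8.6 / max ~17ms; 240Hz -> min ~0 / avg ~2.3 / max ~4.7ms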

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 17:43
by BTRY B 529th FA BN
.

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 19:13
by jorimt
BTRY B 529th FA BN wrote:
17 May 2020, 17:43
If I use Low Latency Mode 'Ultra' I am stuck at 224fps; if I use Low Latency Mode 'On' + the in-game config file limiter (fps_max 237) I can feel input lag.
Hm, not sure what to tell you; unless your GPU is maxed (which is highly unlikely in CS:GO), neither the in-game limiter nor LLM "On" should be creating additional input lag over "Ultra."

Have you tried a lower in-game FPS limit with LLM "On"?

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 19:46
by BTRY B 529th FA BN
.

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 17 May 2020, 20:02
by jorimt
BTRY B 529th FA BN wrote:
17 May 2020, 19:46
Maybe I'm feeling the 1 frame buffer with LLM On vs LLM Ultra, not input lag.
There isn't a "1 frame buffer" of difference between LLM "On" and LLM "Ultra" if your GPU isn't maxed and your FPS is limited by your set cap, though. With G-SYNC, the only known difference between the two is that one auto-caps the FPS (Ultra) and the other doesn't (On).

You should actually be getting up to 1 frame less input lag with the in-game limiter + LLM On vs. LLM Ultra + its auto limiter.

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 18 May 2020, 09:25
by BTRY B 529th FA BN
.

Re: Blur Buster's G-SYNC 101 Series Discussion

Posted: 18 May 2020, 13:21
by jorimt
BTRY B 529th FA BN wrote:
18 May 2020, 09:25
In the NVCP, the LLM description lists typical usage scenarios:

Select On to prioritize Latency by Limiting queued frames to 1.

Select Ultra to prioritize latency by fully minimizing queued frames.
As far as is known...

G-SYNC:

- LLM "On" = MPRF "1"
- LLM "Ultra" = MPRF "1" + auto FPS limit (using the Nvidia V3 limiter)

Non-G-SYNC:

- LLM "On" = MPRF "1"
- LLM "Ultra" = "just in time" frame delivery
BTRY B 529th FA BN wrote:
18 May 2020, 09:25
It seems I'm not understanding frame buffer vs queued frame, or something.
There's the double/triple buffer of V-SYNC, and then there's the pre-rendered frames queue. Both are entirely separate forms of "buffering" so to speak. LLM settings are related to the pre-rendered frames queue only.
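
A rough conceptual sketch of that distinction (not how any driver actually implements it; the names and depths are illustrative only):

Code:

from collections import deque

# Conceptual sketch only -- not any driver's actual implementation.
# Two separate "buffers":
#   1. the pre-rendered frames queue (CPU -> GPU), which MPRF / LLM limits
#   2. the swap chain back buffers (double/triple buffering used by V-SYNC)

class PipelineModel:
    def __init__(self, max_prerendered_frames, backbuffer_count):
        self.max_prerendered = max_prerendered_frames    # LLM "On" caps this at 1
        self.prerender_queue = deque()                   # frames the CPU has queued for the GPU
        self.swap_chain = deque(maxlen=backbuffer_count) # rendered frames awaiting scanout

    def cpu_submit(self, frame):
        # The CPU may only run ahead of the GPU by max_prerendered frames.
        # Every frame waiting here is roughly one extra frame of input lag.
        if len(self.prerender_queue) < self.max_prerendered:
            self.prerender_queue.append(frame)
            return True
        return False  # CPU blocks instead of queuing more latency

    def gpu_finish_frame(self):
        # The GPU drains the pre-render queue into the swap chain, which is
        # a separate buffer governed by the V-SYNC mode, not by LLM.
        if self.prerender_queue and len(self.swap_chain) < self.swap_chain.maxlen:
            self.swap_chain.append(self.prerender_queue.popleft())

llm_on = PipelineModel(max_prerendered_frames=1, backbuffer_count=2)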