Blur Buster's G-SYNC 101 Series Discussion

Talk about NVIDIA G-SYNC, a variable refresh rate (VRR) technology. G-SYNC eliminates stutter and tearing, and reduces input lag. List of G-SYNC Monitors.
hmukos
Posts: 30
Joined: 16 Apr 2020, 15:41

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by hmukos » 01 Jun 2020, 15:44

Getting back to this question:
Also, one person I know has a 165Hz/1440p monitor with G-SYNC (Dell S2417DG). I linked this thread to him and he decided to test his monitor with the 960fps camera on his phone, and he measured that when he enabled 60Hz mode, the panel scanout time was 10-11ms (in both the 60Hz G-SYNC ON and 60Hz G-SYNC OFF cases). So it's quite confusing. Is there a particular reason a manufacturer would configure the scan rate for 60Hz not at the minimum of 1/165s, and not at the maximum of 1/60s, but somewhere in the middle?
The person shared videos. So there are two videos of the same monitor (Dell S2417DG) made by the same camera (Sony Xperia XZ Premium @ 960fps):
1. 165Hz with G-SYNC ON and with a 60fps cap (actually 62.5fps, because the cap was set using a 16ms frametime, but that doesn't matter in this case). As expected, every time the scanout starts, it continues for about 6-7 video frames (6-7ms of real time).
https://drive.google.com/file/d/1ntUoOh ... sp=sharing

2. Now he enabled 60Hz with G-SYNC OFF and V-SYNC ON. Every time the scanout starts, it continues for about 10-11 video frames (10-11ms of real time).
https://drive.google.com/file/d/1SALe2L ... sp=sharing

So it is definitely lower than 16.6ms but higher than 6ms, and it seems the camera has nothing to do with it. So the question remains:
Is there a particular reason a manufacturer would configure the scan rate for 60Hz not at the minimum of 1/165s, and not at the maximum of 1/60s, but somewhere in the middle?
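For reference, here is a back-of-the-envelope comparison of the possible scanout times. This is only a sketch: blanking-interval ("vertical total") overhead is ignored, so real panels deviate slightly from these figures.

```python
# Rough scanout-time comparison for the Dell S2417DG scenario.
# Assumption: the panel sweeps top-to-bottom at a fixed scan rate;
# blanking-interval overhead is ignored for simplicity.

def scanout_ms(scan_rate_hz: float) -> float:
    """Time for one full top-to-bottom sweep, in milliseconds."""
    return 1000.0 / scan_rate_hz

fastest = scanout_ms(165)  # panel scanning at its native 165Hz rate
slowest = scanout_ms(60)   # panel slowed to match a 60Hz signal
measured = 10.5            # midpoint of the 10-11ms camera measurement

print(f"1/165s sweep: {fastest:.1f} ms")  # ~6.1 ms
print(f"1/60s sweep:  {slowest:.1f} ms")  # ~16.7 ms
print(f"measured:     {measured:.1f} ms (between the two extremes)")
```

The measured 10-11ms falls between the two extremes, which is exactly what makes the question interesting.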

jorimt
Posts: 1139
Joined: 04 Nov 2016, 10:44

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by jorimt » 01 Jun 2020, 15:57

Sorry, overlooked these initially (and answering your latest post indirectly with the below replies as well)...
hmukos wrote:
01 Jun 2020, 11:20
His phone was precise enough (true 960fps at low resolution and 200ms max capture time).
Not even my original 1000 FPS test camera was accurate enough to always count the literal 16.6ms (or 4.2ms, or what have you) in a capture. I'd often come up short of that when counting frame by frame as well.

This is exacerbated by the fact that both the capture camera and the display being captured have independent scanouts occurring at the same time, and are not always fully in sync with each other (this can also worsen/improve depending on what orientation your camera is in when capturing). This factor contributes partially to the overall error margin in the test numbers.

You'd need an overkill camera (thousands of FPS) with a particular configuration to achieve such an accurate count in ms, at least consistently. My testing was approximate in this respect, but was accurate enough to discern the differences between the totals in each scenario, as the number totals themselves didn't matter for this purpose, we were only looking for the difference between them.

In fact, take a look at a casual test I did recently with both my original test camera and 960 FPS mode on my Note 9:
viewtopic.php?f=5&t=5903&start=110#p45250

It wasn't quite as sensitive or precise as my 1000 FPS camera in identical test situations, so while I wouldn't claim what you're saying is absolutely out of the question, I wouldn't rule out error margin/camera limitations as an explanation for what he's seeing either.
hmukos wrote:
01 Jun 2020, 11:20
Yes, of course. It was set only for testing purposes.
No worries; just an anecdote for other readers.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Display: Acer Predator XB271HU OS: Windows 10 Pro MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

hmukos
Posts: 30
Joined: 16 Apr 2020, 15:41

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by hmukos » 01 Jun 2020, 16:03

jorimt wrote:
01 Jun 2020, 15:12
A single "sample" with the "first on-screen reaction" method is considered the "first" discernible movement of the "first" frame update that appears at the given vertical position screen-wide due directly to the last user input. The remaining duration of that frame update (and/or any further frame update) is ignored until the screen becomes "static" again, repeat.
But in your test the frame update is global, isn't it? The frame update (the vertical line being shifted to the side) isn't local to some small position; it affects the whole frame from top to bottom. So at 60Hz with a 16.6ms panel scan rate, wherever the scanline was when the frame was updated, the result should show up in the closest frameslice, right?
jorimt wrote:
01 Jun 2020, 15:12
This is why it was so important to pick a static scene and keep it static (no idle character animations, foreground activity, etc) to be able to spot these changes (especially with V-SYNC off) without spotting any possible false-positives.

The goal is to simply see how quick click to on-screen reaction is. It's actually a pretty harsh method, and worst case scenario for all scenarios tested, but the most accurate in this context.

When testing with any syncing method, the first vertical update is always going to appear at the very top of the screen (which is easy to spot), whereas when testing with V-SYNC off, the first vertical update can appear anywhere on the vertical axis (and often does, which can be much less easy to spot, especially where the tear slices become so thin with certain framerate/refresh rate ratios).
I'm curious if it would be easier to use some modded map that fills the whole screen with a different color when input is registered. Or did you want to be as close to a real game as possible?

jorimt
Posts: 1139
Joined: 04 Nov 2016, 10:44

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by jorimt » 01 Jun 2020, 16:16

hmukos wrote:
01 Jun 2020, 16:03
But in your test the frame update is global, isn't it? The frame update (the vertical line being shifted to the side) isn't local to some small position; it affects the whole frame from top to bottom. So at 60Hz with a 16.6ms panel scan rate, wherever the scanline was when the frame was updated, the result should show up in the closest frameslice, right?
But it has to start somewhere; with V-SYNC off, the beginning of the tear can appear anywhere on the vertical axis, within any of the tear slices, per sample. That's it.

We're only looking for the start. The rest is worthless for sampling.

With V-SYNC off, the screen can appear completely still until the middle, or the bottom third of the scanout, what have you (anywhere from top to bottom, it is not a global change that affects the entire screen all at once). E.g. the scanout can perform a full sweep without anything visibly shifting on-screen so long as an input isn't made.
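A toy model may help visualize this: with V-SYNC off, the vertical position of the first visible change depends on where the scanout happens to be when the freshly rendered frame is flipped. This is a sketch under that assumption only (hypothetical 1440p panel with a 165Hz scan rate; rendering and pixel-response delays ignored):

```python
# Toy model of where the first on-screen reaction lands with V-SYNC off.
# Assumption: the new frame begins displaying at whatever scanline the
# panel happens to be drawing at the moment the frame is flipped.

import random

SCANLINES = 1440          # vertical resolution (hypothetical 1440p panel)
SCANOUT_MS = 1000 / 165   # one full sweep at a 165Hz scan rate

def first_reaction_line(flip_time_ms: float) -> int:
    """Scanline where the tear (first visible change) appears."""
    phase = (flip_time_ms % SCANOUT_MS) / SCANOUT_MS  # 0..1 through sweep
    return int(phase * SCANLINES)

random.seed(1)
samples = [first_reaction_line(random.uniform(0, 100)) for _ in range(5)]
print(samples)  # the tear can land anywhere from line 0 to line 1439
```

Because the flip time is effectively random relative to the sweep, the "start" of each sample is uniformly scattered across the vertical axis, which matches the described difficulty of spotting it.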
hmukos wrote:
01 Jun 2020, 16:03
I'm curious if it would be easier to use some modded map that fills the whole screen with a different color when input is registered. Or did you want to be as close to a real game as possible?
A screen-wide "flash" (due to color change, etc) would have done nothing for this method. You need horizontal movement to spot the very beginning of the first on-screen reaction across the entirety of the vertical axis.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Display: Acer Predator XB271HU OS: Windows 10 Pro MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

hmukos
Posts: 30
Joined: 16 Apr 2020, 15:41

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by hmukos » 01 Jun 2020, 16:33

jorimt wrote:
01 Jun 2020, 16:16
A screen-wide "flash" (due to color change, etc) would have done nothing for this method. You need horizontal movement to spot the very beginning of the first on-screen reaction across the entirety of the vertical axis.
But how would a flash differ from horizontal movement? If there were a flash instead of movement, then wouldn't the colored frameslice theoretically appear in the same place where we see the first vertical line shift?

jorimt
Posts: 1139
Joined: 04 Nov 2016, 10:44

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by jorimt » 01 Jun 2020, 17:00

hmukos wrote:
01 Jun 2020, 16:33
But how would a flash differ from horizontal movement? If there were a flash instead of movement, then wouldn't the colored frameslice theoretically appear in the same place where we see the first vertical line shift?
I'm not sure how much longer we can discuss this without you starting to perform some of these tests yourself so you can visualize what I'm describing, because believe it or not, I'm being very clear.

Look, here's a three frame .gif of a single sample captured at 60Hz 2000+ FPS V-SYNC off during one of my original tests:

Image

Blink and you'll miss it. The "start" of the sample appeared near the very bottom of the display in the second frame of the gif, and observe that there were no visible updates on the screen until then.

The movement was triggered by look left in the test map I explained in my article here:
https://blurbusters.com/gsync/gsync101- ... ettings/3/
Image

For CS:GO, a custom map provided by the Blur Busters Forum’s lexlazootin was used, which strips all unnecessary elements (time limits, objectives, assets, viewmodel, etc), and contains a lone white square suspended in a black void, that when positioned just right, allows the slightest reactions to be accurately spotted via the singular vertical black and white separation. Left click was mapped to look left [...].
Take the .gif example, and now envision that reaction occurring at any other position on the vertical axis per sample, and you have the whole picture. It's really as simple (and literal) as it looks.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Display: Acer Predator XB271HU OS: Windows 10 Pro MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

Chief Blur Buster
Site Admin
Posts: 7712
Joined: 05 Dec 2013, 15:44
Location: Toronto, Ontario, Canada

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 01 Jun 2020, 21:59

The funny thing is that 60Hz and 240Hz can have identical absolute latency (pixel for pixel, from a photodiode point of view), but still be laggier than real life for a first-anywhere lag test (camera lag test).

That's because of complexities such as scanout latency, where not all pixels refresh at the same time. But the higher the Hz, the greater the frameslice height for a constant framerate, until the refresh rate is equal to the framerate. So for 2000fps on a theoretical 2000Hz display, frameslices are now full screen height.
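The frameslice-height relationship can be sketched as follows. This is an idealized model (V-SYNC off, perfectly constant framerate, blanking intervals ignored): each frame occupies the fraction of the screen that the scanout sweeps during one frametime.

```python
# Frameslice height sketch for V-SYNC off at a constant framerate.
# Idealized: each frame fills the fraction of the screen the scanout
# covers during one frametime, i.e. refresh_hz / framerate_fps.

def frameslice_fraction(refresh_hz: float, framerate_fps: float) -> float:
    """Fraction of screen height per frameslice (capped at full screen)."""
    return min(1.0, refresh_hz / framerate_fps)

for hz in (60, 240, 2000):
    frac = frameslice_fraction(hz, 2000)  # constant 2000fps
    print(f"{hz:4}Hz @ 2000fps -> slice = {frac:.0%} of screen height")
# 60Hz -> 3%, 240Hz -> 12%, 2000Hz -> 100% (framerate equals refresh rate)
```

At 2000fps on a 2000Hz display the fraction reaches 1.0, matching the full-screen-height frameslices described above.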

Photodiode lag testers -- photodiodes are just single-pixel lag tests, which creates some latency dissonance between the lag number and the real-world reaction number, because of things like scanout latency and whatever sync technology is selected.

They are useful; just be cognizant that latency, alas, is more complex than a single number...
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

jorimt
Posts: 1139
Joined: 04 Nov 2016, 10:44

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by jorimt » 03 Jun 2020, 08:12

Chief Blur Buster wrote:
01 Jun 2020, 21:59
Photodiode lag testers -- photodiodes are just single-pixel lag tests, which creates some latency dissonance between the lag number and the real-world reaction number, because of things like scanout latency and whatever sync technology is selected.

They are useful; just be cognizant that latency, alas, is more complex than a single number...
Yes, they are wholly adequate for testing synced scenarios (a setup I would definitely use instead of my "first on-screen reaction" camera method for any future extensive G-SYNC/sync-only tests), but they're largely ineffective for fully testing V-SYNC off scenarios, unfortunately.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Display: Acer Predator XB271HU OS: Windows 10 Pro MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

Chief Blur Buster
Site Admin
Posts: 7712
Joined: 05 Dec 2013, 15:44
Location: Toronto, Ontario, Canada

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 03 Jun 2020, 13:33

Yes, the latency gradient problem, and gradients can invert.

VRR/VSYNC ON + Nonstrobed creates TOP < CENTER < BOTTOM
(smooth linear latency gradient that continually increases from top to bottom, Leo Bodnar style. The latency gradient is consistent with VRR too; the scanout starts when the frame is presented by software. Capped VRR will have less lag than VSYNC ON because no frame queue buffers up. However, the scanout latency is the same)

VSYNC ON + Strobed creates TOP = CENTER = BOTTOM
(fixed latency for whole screen surface, unless strobe phase puts the crosstalk bar mid-screen rather than in the VBI)

VSYNC OFF + Nonstrobed creates TOP = CENTER = BOTTOM
(Average. The first pixel row below the tearline has the same latency as zero-queue-depth VSYNC ON + Nonstrobed, no matter where the pixel row is. So BOTTOM can be as low-lag as TOP. That said, each frameslice is a latency sub-gradient itself, so it's actually a sawtooth gradient that goes finer at higher VSYNC OFF framerates)

VSYNC OFF + Strobed creates TOP > CENTER > BOTTOM
(An inverted latency gradient! That's because the global-flash strobe backlight has the least time-differential to the bottom edge of the scanout. It's a sawtooth that slopes downwards towards the bottom, and the sawtooth becomes finer, approaching a diagonal line, the higher the VSYNC OFF framerate)
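The four gradients above can be sketched numerically as a function of vertical screen position. This is an illustrative model only (60Hz scanout assumed; strobe timing, VBI, pixel response, and the sawtooth sub-gradients are simplified away):

```python
# Sketch of the four latency gradients, as a function of vertical
# position (0.0 = top of screen, 1.0 = bottom). Illustrative only;
# a 60Hz scanout (~16.7ms sweep) is assumed.

SCANOUT_MS = 1000 / 60

def vsync_on_nonstrobed(pos: float) -> float:
    # Linear gradient: bottom pixels are drawn later than top pixels.
    return pos * SCANOUT_MS

def vsync_on_strobed(pos: float) -> float:
    # Global backlight flash reveals the whole refresh at once:
    # uniform latency (the flash waits for the full scanout to finish).
    return SCANOUT_MS

def vsync_off_nonstrobed(pos: float) -> float:
    # Average case: fresh frames splice in anywhere, so the first row
    # below a tearline has near-zero latency at any vertical position.
    return 0.0  # per-frameslice sawtooth sub-gradients omitted

def vsync_off_strobed(pos: float) -> float:
    # Inverted gradient: bottom rows are drawn closest in time to the flash.
    return (1.0 - pos) * SCANOUT_MS

top, bottom = 0.0, 1.0
assert vsync_on_nonstrobed(top) < vsync_on_nonstrobed(bottom)  # TOP < BOTTOM
assert vsync_on_strobed(top) == vsync_on_strobed(bottom)       # TOP = BOTTOM
assert vsync_off_strobed(top) > vsync_off_strobed(bottom)      # TOP > BOTTOM
```

The assertions mirror the TOP/CENTER/BOTTOM orderings listed above for each mode.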

Opposing goals
-To reduce the latency gradient, use a higher refresh rate. 1/60sec = 16.7ms max scanout lag, and 1/240sec = 4.2ms max scanout lag.
-To get minimum absolute latency, use VSYNC OFF + framerates far beyond the Hz
-For the lowest-lag "non-VSYNC-OFF" sync technology, use capped VRR.
-To eliminate latency gradient without VSYNC OFF, use strobed combined with framerate=Hz (VSYNC ON, perhaps with NULL)

Those latency goals are often mutually opposing. The only way to satisfy all goals simultaneously is retina refresh rates (aka 1000fps+ at 1000Hz+). Then you have:
(A) zero blur of strobing
(B) near elimination of latency gradient if 1/1000sec scanout
(C) consistent lag across the whole panel surface
(D) essentially per-pixel VRR behavior where multiple frame rates can be stutterless simultaneously on the same screen
(E) no tearing visibility when tearlines are displayed for only 1/1000sec
(F) All sync technologies look identical and feel identical, VSYNC ON same as VSYNC OFF same as VRR
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

tinami
Posts: 1
Joined: 14 Jun 2020, 23:18

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by tinami » 14 Jun 2020, 23:29

Hi all,

I'm not sure if this is the right thread since the discussion I want to bring up is from an article that links to this thread from years ago (https://blurbusters.com/gsync/gsync101- ... ttings/10/).

My understanding is that by enabling G-SYNC, it's possible to remove the DWM-related additional input lag for windowed programs (which I think is caused by a forced V-SYNC with DWM?), which on Win 7 could be removed by disabling desktop composition. This is great, as it sounds like you can remove the negatives of DWM without breaking Explorer by killing the process manually.

Does anyone know if, as of now (I understand a few new versions of Windows 10 have come out since that thread), the fix still works, as in it achieves the same input lag as you would see if you were to kill DWM? And I was wondering if anyone knew whether the same fix can be achieved with FreeSync.

Thanks in advance.
