Aldagar wrote: ↑31 Mar 2020, 10:23
If I'm not mistaken, VSync buffers frames and sends them when it receives a VBI signal
Correct.
Default VSYNC ON behaviour for most graphics drivers is that the frame buffer flip will occur right at the end of the current refresh cycle's scanout, at the beginning of the VBI.
During VSYNC ON at full frame rates, most game software will "block" (the software pauses for a few milliseconds = input lag) whenever the frame presentation call has to wait for the next refresh.
Note: Computer programmers use commands like Present() or glXSwapBuffers() to transfer the frame from the GPU's memory to the front buffer (the buffer to be delivered to the GPU output).
The behaviour of VSYNC ON is just like
pause-inputread-render-display-
pause-inputread-render-display-
pause-etc.
So the game is effectively pausing 60 times a second at 60fps. You don't see those micro-pauses (input lag) caused by the blocking behaviour of frame presentation, but they can add up to a frame of input latency. Knowing this will help you understand the concept of "input delay", which is what end-of-VBI presentation aims to achieve.
In most game engine workflows, this is kind of what it does, but some game engines may behave differently or not co-operate well with this.
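To make that pause-inputread-render-display loop concrete, here is a minimal C++ sketch of the blocking behaviour. The readInput/renderFrame/presentVsyncOn stubs are hypothetical stand-ins; a real engine would call something like Present() or glXSwapBuffers() with VSYNC ON, which blocks until the driver can flip:

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical stubs standing in for a real game/graphics API:
void readInput()   { /* poll mouse/keyboard here */ }
void renderFrame() { /* issue GPU rendering commands here */ }
void presentVsyncOn()
{
    // A real Present()/SwapBuffers() call with VSYNC ON blocks until the
    // driver can schedule the flip; simulated here with a fixed sleep.
    std::this_thread::sleep_for(std::chrono::milliseconds(16));
}

int main()
{
    using Clock = std::chrono::steady_clock;
    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = Clock::now();   // stopwatch start: input read
        readInput();
        renderFrame();
        presentVsyncOn();         // the blocking micro-pause (input lag) is here
        auto t1 = Clock::now();   // stopwatch stop: frame presented
        std::printf("frametime %.2f ms\n",
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
}
```

The stopwatch placement in this sketch is also where the "frametime" numbers described below come from.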
In some statistics displays, the stopwatch starts when the input read occurs, and the stopwatch stops when the frame presents. That's your "frametime", because it's dominantly render time, and render time VARIES a lot. VSYNC ON can sometimes block, sometimes block longer, and sometimes not block at all (no pause), depending on how much the frame buffer has queued up. If there's a lineup of frames waiting to be delivered to the display, the software is forced to do a micro-pause (which can add input lag) -- often between the input read and the display of the frame. So there can be a variable-length micro-pause there.
Aldagar wrote: ↑31 Mar 2020, 10:23
Enhanced Sync is a triple buffering method that doesn't limit frame rate. Interestingly, Enhanced Sync does not work with a 60 fps cap in my 60Hz monitor (actually 59.949Hz) but it does with Scanline Sync.
Enhanced Sync has algorithms that sometimes have nasty interactions unless you pre-calibrate with VSYNC OFF before you re-enable Enhanced Sync. Even so, it's not always perfect.
VSYNC OFF and VSYNC ON are easy to explain, while algorithms like Enhanced Sync are sometimes difficult to explain.
Aldagar wrote: ↑31 Mar 2020, 10:23
The point is, in demanding games (or maybe unoptimized) in which the tearline tends to jump, Enhanced Sync causes REALLY BAD stuttering the closer the Scanline Sync index is to the VBI range. If I understand this correctly, it is because when the tearline falls inside the VBI it adds an extra frame delay, thus the erratic stuttering.
Yes, because if the figurative "tearline" is inside the VBI, it is already too late for the existing driver VSYNC ON behaviour to flip, since it always flips at the beginning of the VBI. This is an annoying behaviour of graphics drivers, and fewer than 10% of people at NVIDIA and AMD understand this problem (think of interns and great engineers who understand a lot but aren't familiar with the "raster interrupt" or "beam racing" science that Blur Busters understands). People like Blur Busters sometimes have to explain to them how Quick Frame Transport works.
Aldagar wrote: ↑31 Mar 2020, 10:23
Would setting the scanline in the middle of the screen ensure stability at the cost of slightly more input lag?
Yup. About half a refresh cycle of added input lag (roughly 8.3ms at 60Hz).
But you can custom-optimize this, although the optimization can sometimes be game-dependent (GPU-load-dependent); a worked example of the arithmetic follows the steps below:
1. Use VSYNC OFF initially
2. Determine how big your tearline jitter amplitude is.
3. Suppose your tearline jitter amplitude is about 10% of screen height.
4. Set your RTSS Scanline Sync number to roughly twice the jitter amplitude above the bottom edge of the screen (20%)
5. Now re-enable your preferred non-VSYNC-OFF sync tech (such as VSYNC ON or Enhanced Sync or Fast Sync)
6. Your stutters SHOULD be gone.
7. Your latency will be that margin (approximately 20% of a refresh cycle, i.e. about 3.3ms of added lag at 60Hz, but still at least 13ms less lag than ordinary VSYNC ON).
8. Your safety jitter-margin is your latency. Bigger jitter margin, slightly higher latency.
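Here is that worked example in a short C++ sketch, assuming a 60Hz display with 1080 visible lines and the ~10% jitter figure from step 3 (the numbers are illustrative, not universal):

```cpp
#include <cstdio>

int main()
{
    const double refreshMs    = 1000.0 / 60.0;    // ~16.67 ms per refresh
    const int    visibleLines = 1080;
    const double jitterFrac   = 0.10;             // step 3: ~10% screen height
    const double marginFrac   = 2.0 * jitterFrac; // step 4: twice the jitter

    // RTSS Scanline Sync index: this many lines down from the top,
    // leaving the 20% safety margin above the bottom edge.
    int syncLine = (int)(visibleLines * (1.0 - marginFrac));   // 864

    double addedLagMs = refreshMs * marginFrac;   // ~3.3 ms (step 7)
    double savedLagMs = refreshMs - addedLagMs;   // ~13.3 ms less than a full
                                                  // refresh of VSYNC ON queueing
    std::printf("sync line %d, added lag %.1f ms, saved %.1f ms\n",
                syncLine, addedLagMs, savedLagMs);
}
```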
Aldagar wrote: ↑31 Mar 2020, 10:23
With Vsync I can't seem to reproduce this behaviour. The image feels smooth with no stuttering no matter the scanline index, but the frame times stop fluctuating with Scanline Sync ON. Is it because it buffers frames, so it has padding?
Glassfloor frametimes with Scanline Sync are caused by creating a fixed time between the input read and the pageflip, regardless of GPU rendertime.
VSYNC ON:
Some games manage to do glassfloor with VSYNC ON, but not all of them. The varying rendertime creates varying latency between input read and frame presentation, occasionally creating inconsistent latency despite smooth motion. Other times, the latency is actually consistent to the human but the logging is out of sync (not recording numbers that are close to input-to-photons). Even RTSS is unable to accurately record inputread-to-photontime without a photodiode sensor. There's only so much knowledge RTSS is able to pull in; it can't "see" beyond the confines of the computer.
RTSS Scanline Sync
Frame presentation becomes re-timed to RTSS timing, and RTSS has 100% full knowledge of frame presentation time during Scanline Sync (unlike the black-box nature of a graphics driver's VSYNC ON algorithm, which can add lag not measurable by RTSS), so you get a potentially more exact time between input read and frame presentation. Other times, it is just an artifact of how RTSS stopwatching behaves during Scanline Sync versus letting the graphics driver use its own sync technology.
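For intuition, here is a self-contained C++ sketch of the scanline-sync idea: wait until the raster reaches a chosen sync line, then flip with VSYNC OFF so the tearline lands in the hidden margin. It estimates the raster from a clock purely for illustration; a real tool like RTSS queries the actual GPU raster position (e.g. via D3D9 GetRasterStatus or D3DKMTGetScanLine), and the 60Hz / 1125-total-scanline numbers are assumptions:

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

constexpr double kRefreshHz  = 60.0;   // assumed refresh rate
constexpr int    kTotalLines = 1125;   // visible + VBI scanlines (assumed)

// Estimate the current scanline from elapsed time; a real implementation
// queries the GPU raster position instead of trusting a clock.
int estimatedScanline(Clock::time_point origin)
{
    double t = std::chrono::duration<double>(Clock::now() - origin).count();
    double phase = t * kRefreshHz;     // refreshes elapsed
    phase -= std::floor(phase);        // fractional refresh [0,1)
    return (int)(phase * kTotalLines);
}

int main()
{
    auto origin   = Clock::now();      // pretend a refresh cycle started now
    int  syncLine = 900;               // ~80% down: jitter margin below it

    for (int frame = 0; frame < 5; ++frame) {
        // ...input read + render would happen here...
        while (estimatedScanline(origin) >= syncLine)
            std::this_thread::yield(); // raster still past the sync line from
                                       // the last flip: wait for wrap-around
        while (estimatedScanline(origin) < syncLine)
            std::this_thread::yield(); // wait for raster to reach sync line
        std::printf("frame %d: flip near scanline %d\n",
                    frame, estimatedScanline(origin));
        // presentNoVsync();  // hypothetical VSYNC OFF flip goes here; the
        // tearline lands below the sync line / inside the VBI, off-screen
    }
}
```

Because the flip is always re-timed to the same raster position, the time between input read and pageflip stays fixed, which is where the glassfloor comes from.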
The bottom line is RTSS frametimes are not always equal to:
-- frametime:photontime relative sync
-- inputread:photontime relative sync
These can be 3 completely independent volatilities, unfortunately, because RTSS can't always see deeper into the graphics driver pipeline, nor always know when the game did its own input read. So RTSS glassfloorness may still hide inputread non-glassfloorness, and RTSS non-glassfloorness may still mean inputread glassfloorness. Some sync technologies de-jitter for you (e.g. VSYNC ON), other sync technologies amplify jitters (e.g. VSYNC OFF), and changing modes on the display (VRR, strobing) may reduce or amplify the dissonances between these three volatilities.
But, Wait! There's One More Thing (Apple Style)
-- Latency can feel different for the top edge of the screen than the bottom edge -- or vice versa -- because of gametime:photontime differences on the top edge versus the bottom edge:
High Speed Videos of Scanout. It's VERY weird.
-- This is sometimes noticed during strobed VSYNC OFF at lower refresh rates, which is why I prefer strobed VSYNC ON (glassfloor top-to-bottom), or with RTSS Scanline-Sync.
-- Single number RTSS data will not tell you information about the input lag of each individual pixel on the screen and/or their lag offsets relative to screen center.
The input lag of different pixels in different parts of the screen can be different due to scanout:
www.blurbusters.com/scanout
The latency gradients of sync technology combinations are as follows:
VSYNC ON non-strobed creates TOP > CENTER > BOTTOM
VSYNC OFF non-strobed creates TOP = CENTER = BOTTOM (easy glassfloor input-to-photons)
VSYNC ON strobed creates TOP = CENTER = BOTTOM (easy glassfloor input-to-photons, but with fixed higher lag)
VSYNC OFF strobed creates TOP < CENTER < BOTTOM (but highly volatile, requires ~100 measurement samples to notice)
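A small C++ sketch of the scanout arithmetic behind those gradients (assuming 60Hz and 1080 visible lines; VBI time ignored for simplicity):

```cpp
#include <cstdio>

int main()
{
    const double refreshMs    = 1000.0 / 60.0;   // ~16.67 ms per refresh
    const int    visibleLines = 1080;
    const int    rows[]       = {0, 540, 1079};  // top, center, bottom

    // Non-strobed: each row's photons appear when the raster reaches it,
    // so rows further down are painted later within the refresh cycle.
    for (int row : rows) {
        double scanoutLagMs = refreshMs * row / visibleLines;
        std::printf("row %4d: photons +%.1f ms after the flip\n",
                    row, scanoutLagMs);
    }
    // With VSYNC ON (flip at the VBI) this produces the top-to-bottom
    // gradient above; strobed backlights flash the whole refresh at once,
    // flattening the gradient into a single fixed offset.
}
```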
Also, on certain panels (like BenQ ZOWIE monitors), if you adjust strobe phase to screen middle, you can "split" the latency gradients, as seen in this forum thread:
BenQ Strobe Calibration & Input Lag Gradient Behaviours
This allows you to zero-out the lag of the screen middle during strobed modes, at the cost of higher lag for certain parts of the screen, and a bit of a crosstalk bar underneath your low-lag area.
Aldagar wrote: ↑31 Mar 2020, 10:23
Sorry if I'm asking too many questions. I've been quarantined for more than two weeks now due to COVID-19 and I decided to investigate about Sync technologies.
It's quite a fascinating science.
I may actually split this thread to cover advanced "Area 51: Display Science, Research and Engineering" aspects of inputreadtime:photontime dissonances not measurable by RTSS.
Even things like power management can interact with VRR to create volatility, and we've seen 0.5ms stutters (dissonances in gametime:photontime) become human-visible; see this thread:
0.5ms Stutters Are Human Visible In Some Situations In The Refresh Rate Race To Retina Refresh Rates
In the Blur Busters "Milliseconds Matter" science, we're very fascinated by how sub-milliseconds slowly begin to reveal themselves unexpectedly as refresh rates go up (144Hz -> 240Hz -> 360Hz -> 480Hz), in the Vicious Cycle Effect. This is why ASUS now has a roadmap to 1000Hz displays.
P.S. We're veering into advanced concepts, so we might split off this thread.