jorimt wrote: ↑20 Aug 2025, 09:58
G-SYNC was originally created to prevent tearing for fluctuating framerates within the physical refresh rate without adding the latency or stutter standalone V-SYNC does in the same scenario, and is responsible for one thing and one thing only: dynamically steering the tearline off-screen into the VBLANK by adjusting the number of times the screen refreshes per second to match the current average framerate.
If I may interject with a question about your explanation: I've been curious about G-SYNC and, by extension, FreeSync.
I haven't been able to find information on what the precision of VRR is (in microseconds/nanoseconds), nor which specific metric in the pipeline it hooks onto.
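To frame the question, here's my rough mental model of what the quoted description implies, as a minimal Python sketch (the constants, the function name, and the LFC comment are my own simplifications, not anything from NVIDIA's implementation):

```python
# Rough mental model of VRR timing (my own simplification, not the actual hardware):
# the scanout duration is fixed by the panel's maximum refresh rate; within the VRR
# range the display simply holds the vertical blanking interval open until the next
# frame is presented, so the refresh interval tracks the frametime.

MAX_REFRESH_HZ = 144
MIN_REFRESH_HZ = 48
SCANOUT_MS = 1000.0 / MAX_REFRESH_HZ       # time to draw one frame top to bottom (~6.94 ms)
MAX_INTERVAL_MS = 1000.0 / MIN_REFRESH_HZ  # longest the panel can wait before refreshing anyway

def effective_refresh_interval(frametime_ms: float) -> float:
    """Interval between refreshes when the GPU delivers a frame every `frametime_ms`."""
    if frametime_ms <= SCANOUT_MS:
        # Frames arrive faster than the panel can scan: refresh back-to-back at max rate.
        return SCANOUT_MS
    if frametime_ms >= MAX_INTERVAL_MS:
        # Below the VRR floor: the panel refreshes on its own (LFC would repeat frames here).
        return MAX_INTERVAL_MS
    # Inside the VRR range: VBLANK is stretched so the next refresh starts when the frame is ready.
    return frametime_ms

print(effective_refresh_interval(13.3))  # ~75 fps content -> the display refreshes every ~13.3 ms
```

If that model is roughly right, my question is about the granularity of that "stretching" step and about which timestamp in the present pipeline actually triggers it.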
To be specific, regarding the pipeline here:
https://www.nvidia.com/en-us/geforce/ne ... ed-section
To which particular PresentMon metric does the monitor synchronize itself when G-SYNC is enabled?
(Note: PresentMon 1.x and 2.x differ in what they consider the frame time to be. There's also the MsBetweenSimulationStart metric (an NVIDIA Reflex-specific metric), which, according to other sources, can be considered the "true" frame time of a game.)
What is the precision (in ms/µs/ns) of this procedure?
Do NVIDIA's G-SYNC, AMD's FreeSync, VESA's Adaptive-Sync, and HDMI's VRR differ in precision?
This is important to me because refresh rates (or their time-domain equivalent, the refresh interval) are never integers; they're always fractional in nature. I'd like to know how many decimal places of precision VRR works with (or the time-domain equivalent), as I haven't been able to find this information anywhere online.
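As a concrete illustration of that fractional nature (using the standard CTA-861 1080p timing purely as an example; this isn't about VRR internals):

```python
# Refresh rate is derived from a pixel clock: refresh_hz = pixel_clock / (h_total * v_total).
# Because those totals rarely divide evenly, the result is usually fractional.
# Figures below are the common CTA-861 1080p timing, used here only as an illustration.

h_total, v_total = 2200, 1125              # active pixels/lines plus blanking

pixel_clock_60   = 148_500_000             # Hz -> exactly 60.000 Hz
pixel_clock_5994 = 148_500_000 / 1.001     # the "NTSC-friendly" 59.94 Hz variant

for clk in (pixel_clock_60, pixel_clock_5994):
    refresh_hz = clk / (h_total * v_total)
    print(f"{refresh_hz:.6f} Hz -> {1000 / refresh_hz:.6f} ms per refresh")

# 60.000000 Hz -> 16.666667 ms per refresh
# 59.940060 Hz -> 16.683333 ms per refresh
```

So even before VRR enters the picture, a nominal "60 Hz" already hides several decimal places, which is why I'm asking how finely the VRR mechanism itself can resolve frametimes.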
Another question I have is: why does G-SYNC On + V-SYNC Off not behave, out of the box, like G-SYNC On + V-SYNC On?
From your guide:
G-SYNC + V-SYNC “Off” disables the G-SYNC module’s ability to compensate for sudden frametime variances, meaning, instead of aligning the next frame scan to the next scanout (the process that physically draws each frame, pixel by pixel, left to right, top to bottom on-screen), G-SYNC + V-SYNC “Off” will opt to start the next frame scan in the current scanout instead. This results in simultaneous delivery of more than one frame in a single scanout (tearing).
G-SYNC + V-SYNC “On”:
This is how G-SYNC was originally intended to function. Unlike G-SYNC + V-SYNC “Off,” G-SYNC + V-SYNC “On” allows the G-SYNC module to compensate for sudden frametime variances by adhering to the scanout, which ensures the affected frame scan will complete in the current scanout before the next frame scan and scanout begin. This eliminates tearing within the G-SYNC range, in spite of the frametime variances encountered.
Frametime compensation with V-SYNC “On” is performed during the vertical blanking interval (the span between the previous and next frame scan), and, as such, does not delay single frame delivery within the G-SYNC range and is recommended for a tear-free experience (see G-SYNC 101: Optimal Settings & Conclusion).
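To check that I'm reading that passage correctly, here's the difference as I understand it, written out as a small sketch (the function and the return strings are mine, not anything from the guide or the driver):

```python
# My reading of the quoted passage (a paraphrase in code, not the actual driver logic):
# the V-SYNC toggle only matters when a new frame completes while the previous
# scanout is still in progress.

def handle_new_frame(scanout_in_progress: bool, vsync_on: bool) -> str:
    if not scanout_in_progress:
        # Normal G-SYNC case: the display is idle in VBLANK, so scanout starts immediately.
        return "start scanning the new frame now"
    if vsync_on:
        # G-SYNC + V-SYNC On: hold the frame until the current scanout completes,
        # then start the next scanout from the top of the screen (no tearing).
        return "wait for the current scanout to finish, then scan the new frame"
    # G-SYNC + V-SYNC Off: switch to the new frame partway down the current scanout,
    # so two frames share one scanout and a tear line appears at the switch point.
    return "splice the new frame into the current scanout (tearing)"
```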
Even so, this explanation doesn't help me understand why the difference exists.
May I ask what your source is for how it was "originally intended to function"?
I believe there's some ambiguity in your text regarding this topic, or maybe something in your explanation is escaping me.