Re: RTSS now has new automatic Low-Lag VSYNC ON (raster base
Posted: 20 Feb 2019, 20:49
andrelip wrote: What is the behavior of s-sync when fps are inconsistent and the frametime gets longer than the refresh? Let's say a random drop occurs into the 220-240fps range at 240Hz. Will it start rendering the new frame immediately? Will it add a single long delay and return to the desired position as fast as possible? Or will it spread that delay across n frames to smooth the transition?

It depends on which sync technology, which GPU, which graphics driver version, and what behaviour it imparts on a sync mode.
VSYNC ON will simply round to the next refresh cycle (a sudden +16.7ms frametime spike for a delayed frame at 60Hz).
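As a hedged sketch (not actual driver code), the rounding behaviour above can be modeled as "delivery time = rendering-finish time rounded up to the next refresh boundary":

```python
# Sketch only: models VSYNC ON rounding frame delivery up to the next
# refresh boundary, so a late frame slips a whole refresh cycle.
import math

REFRESH_HZ = 60
CYCLE_MS = 1000 / REFRESH_HZ  # ~16.7 ms per refresh cycle at 60Hz

def vsync_on_delivery_ms(render_done_ms: float) -> float:
    """Time the frame reaches the display: rendering-finish time
    rounded up to the next refresh boundary."""
    return math.ceil(render_done_ms / CYCLE_MS) * CYCLE_MS
```

A frame finishing at 17ms just misses the ~16.7ms boundary, so it waits until the ~33.3ms boundary: a full extra refresh cycle of latency plus a visible stutter.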
Now, Scanline Sync is usually used with VSYNC OFF, so it is complicated to explain.
See VSYNC OFF tearlines are just rasters (Warning: Technological "Pandora's Box" topic)
The input lag is relative to the location of the tearline. Using the horizontal scan rate shown in a Custom Resolution Utility, the relative lag increase of a pixel row below a tearline is ([pixel rows below tearline] / [horizontal scan rate]) seconds.
If your screen is displaying 160KHz horizontal scan rate (160000 scanlines per second), the input lag of a pixel row 10 rows below the tearline is 10/160,000sec of input lag.
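The formula above, with the same numbers as the example (160KHz scan rate, 10 rows below the tearline), works out like this in a short illustrative sketch:

```python
# Illustration of the per-row scanout lag formula from the text:
# lag = rows below tearline / horizontal scan rate.

def row_lag_seconds(rows_below_tearline: int, hscan_rate_hz: float) -> float:
    """Scanout lag of a pixel row relative to the tearline above it."""
    return rows_below_tearline / hscan_rate_hz

lag = row_lag_seconds(10, 160_000)      # 10/160,000 sec
print(f"{lag * 1e6:.1f} microseconds")  # 62.5 microseconds
```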
That's latency at the GPU output, subject to any micropacketization latency (which might bundle 2, 4, 6, or 8 pixel rows together simultaneously) and other nit-picky microsecond-league variables.
But needless to say, the point I am making is that the latency of a specific pixel row is actually relative to the location of its preceding tearline.
So in the case of RTSS Scanline Sync, the usual test case is to calibrate the tearline to just above the top edge of the screen (aka hiding the tearline in the blanking interval between refresh cycles). Note that this excludes the lag of other parts of the chain....
With that complexity explained, let's try to answer your question...
In addition to the increased frametime latency (the rendertime latency -- e.g. becoming 1/220sec instead of 1/240sec on a 240Hz monitor), your latency gradient during RTSS Scanline Sync will vertically shift if the tearline is in a new location. Scanout lag is always relative to the tearline immediately above.
If you have a tearline in the middle of the screen, that means you can have a weird wraparound lag gradient [2ms...4ms][0ms...2ms] for the screen instead of [0...4ms]. For a middle-of-screen tearline at 240Hz, the top edge basically has +2ms lag, just above the tearline has +4ms lag, just below the tearline has 0ms lag, and the bottom edge has +2ms lag that then wraps around back to the top for the next refresh cycle (right after what is usually a sub-millisecond VBI when you're not using a Large Vertical Total).
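That wraparound gradient can be sketched numerically. This is a simplified model (assuming a 1080-row screen, ~4.17ms scanout at 240Hz, and ignoring the VBI), not a measurement tool:

```python
# Sketch of the wraparound lag gradient for a mid-screen tearline at 240Hz.
# Assumptions: 1080 visible rows, whole refresh period spent scanning
# (VBI ignored), tearline exactly at mid-screen.

TOTAL_ROWS = 1080
SCANOUT_MS = 1000 / 240          # ~4.17 ms per refresh cycle at 240Hz
MS_PER_ROW = SCANOUT_MS / TOTAL_ROWS
TEARLINE_ROW = TOTAL_ROWS // 2   # tearline in the middle of the screen

def row_scanout_lag_ms(row: int) -> float:
    """Lag of a row relative to the tearline immediately above it,
    wrapping around to the previous refresh cycle above the tearline."""
    rows_below = (row - TEARLINE_ROW) % TOTAL_ROWS
    return rows_below * MS_PER_ROW

top    = row_scanout_lag_ms(0)                 # ~2 ms
above  = row_scanout_lag_ms(TEARLINE_ROW - 1)  # ~4 ms, just above tearline
below  = row_scanout_lag_ms(TEARLINE_ROW)      # 0 ms, just below tearline
bottom = row_scanout_lag_ms(TOTAL_ROWS - 1)    # ~2 ms
```

This reproduces the [2ms...4ms][0ms...2ms] gradient described above: the rows above the tearline belong to the previous refresh cycle, so they carry the largest lag.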
Note that this is only the scanout latency, not other parts of the lag chain (e.g. rendering lag, mouse lag, display processing lag, GtG pixel response lag, etc).
Fortunately, if you usually keep full framerate with Scanline Sync, framedrops only create minor reappearances of tearing that disrupt latency consistency briefly. In fact, if it's only a very slight downward movement of the tearline from the top edge (e.g. it stays near the top edge), then your latency gradient has only shifted slightly, possibly adding just a sub-millisecond momentary inconsistency for those specific refresh cycles that have an aberrant tearline location different from its stable location. (Keep in mind 1ms = one quarter of the screen height's scanout at 240Hz).
In reality, frametime latency variances will be your bigger latency error margin, especially on a 240Hz monitor. At 60Hz, the slow scanout (16.7ms) definitely can give you major latency inconsistency, so perfect RTSS Scanline Sync is more important at 60Hz than at 240Hz -- it's okay to be more tolerant of RTSS Scanline Sync aberrations at 240Hz.
Now, if you're one of those who prefer a brief stutter instead of a brief tearline, you may have already calibrated RTSS Scanline Sync with VSYNC ON / Enhanced Sync / Fast Sync. In this event, the lag increase will usually be a sudden binary jump of one refresh cycle (with an accompanying microstutter) lasting only a single refresh cycle. Such as [16ms...16ms...16ms...16ms...33ms...16ms...16ms] at 60Hz. However, this method can be useful when you're much more bothered by tearing than by stutters, as long as those framerate slowdowns are the exception to the rule -- RTSS Scanline Sync is mainly recommended for games that can almost always run at full framerate, where drops are exceptions rather than the rule.