Chief Blur Buster wrote: ↑
23 May 2022, 20:26
Discorz wrote: ↑
23 May 2022, 03:53
This also holds true for perfectly flat frametimes. Wouldn't this mean we need to test e.g. 120 vs 480 fps or 60 vs 1000 fps at the same GPU utilization? I'm not even sure if there is a way to go around this if we want to keep the same resolution scale. We could exclude the interactive device from the equation, but we don't want that. On top of this, the nature of fluctuating GPU usage (even with flat frame pacing and times) may still affect it. Or I might be turning in the wrong direction here.
I forgot to mention...
GPU rendering times don't affect VSYNC ON because gametimes are keyed to the VBIs of VSYNC ON.
(It can be as simple as grabbing a timestamp immediately upon return from a blocking Present() on VSYNC ON -- the blocking API call returns when it hits the display's own VSYNC in the signal; that's why it's called VSYNC ON.)
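A minimal sketch of that idea, assuming a D3D11/DXGI swapchain created elsewhere (illustrative only, not code from the actual tests):

Code:

#include <chrono>
#include <dxgi.h>

// Returns a gametime timestamp keyed to the VBI: with SyncInterval = 1
// (VSYNC ON), Present() blocks until the display's next VSYNC, so the
// timestamp taken right after it returns is aligned to a refresh cycle
// (plus a small amount of driver overhead).
double GametimeSeconds(IDXGISwapChain* swapchain)
{
    swapchain->Present(1 /* SyncInterval */, 0 /* Flags */);

    using namespace std::chrono;
    return duration<double>(steady_clock::now().time_since_epoch()).count();
}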
VSYNC stands for Vertical Synchronization, a component of a display signal that acts as a de facto comma separator between refresh cycles (from a digital point of view). Back in the analog days, it was an analog signal that vertically moved the electron beam back to the top edge of the screen, to begin scanning a new refresh cycle.
Today it's still in the signal, but it functions more like a comma-separator signal plus a time delay (to give the display motherboard's processing more time to prepare a new refresh cycle -- though many display processors are so fast that you only need a tiny 1-scanline VSYNC).
That's why it's called "VSYNC ON" in graphics drivers: it's named after the video signal's VSYNC, something that has existed for about 100 years, since analog TV broadcasts had a VSYNC in them too.
(Another name is "blanking interval" or "vertical blanking interval", but that's porches/overscan + VSYNC totalled. Porches were used as overscan area on a CRT tube: tubes weren't perfectly rectangular like digital monitors, so a rectangular broadcast had to be overscanned onto an odd-shaped tube, and extra overscan was also added to prevent the electron beam from going too far beyond the edges of the tube.)
So this is very old technology. VSYNC existed in the 1920s Baird and Farnsworth TV broadcast experiments! They didn't call it VSYNC yet, but it's exactly the same signal still used on 2020s HDMI and DisplayPort cables. During the analog to digital transition we kept a 1:1 pixel clock.
A 1080p analog signal and a 1080p digital signal can be flawlessly adapted to each other via an unbuffered realtime HDMI-to-VGA or VGA-to-HDMI adaptor. You can even mirror the same 1080p HDMI output to both a 1080p digital monitor and a Sony FW900 CRT tube concurrently -- from the same digital GPU output on cards that don't have VGA, even a current RTX 3080. As long as both displays support the resolution, refresh rate, and timings (the ATSC HDTV standard vertical total of 1125, used for both 1080i and 1080p), it works fine.
Digital signals are simply digital versions of the analog signal with 1:1 pixel symmetry. Everything was preserved during the analog-to-digital transition, including the embedded VSYNC signal.
Now you understand the history of what "VSYNC" really means.
Now back to the modern nomenclature of GPU-based "VSYNC ON", which tells the drivers to synchronize frames to the signal. From a programmer perspective:
Windows waits for the GPU output to be aligned to a new refresh cycle before returning from the Present() API.
Even with NULL (NVIDIA Ultra Low Latency, which avoids buffering up extra frames), these waits are one big reason why VSYNC ON still has more latency than VSYNC OFF -- but it does solve a big stutter weak link.
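For illustration, here's a hedged sketch of that distinction at the DXGI level (PresentFrame is an assumed name; the swapchain setup is assumed elsewhere):

Code:

#include <dxgi1_5.h>

void PresentFrame(IDXGISwapChain* swapchain, bool vsyncOn)
{
    if (vsyncOn)
    {
        // VSYNC ON: SyncInterval = 1 makes Present() wait for the next VBI,
        // so the call blocks until the frame aligns to a refresh cycle.
        swapchain->Present(1, 0);
    }
    else
    {
        // VSYNC OFF: SyncInterval = 0 returns immediately; new frames splice
        // into the scanout mid-refresh (tearing). On flip-model DXGI this
        // needs a swapchain created with DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING.
        swapchain->Present(0, DXGI_PRESENT_ALLOW_TEARING);
    }
}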
The magical thing is that at 1000Hz, VSYNC ON latency can become negligible (~1ms). The higher the Hz, the smaller the latency difference between VSYNC ON and VSYNC OFF! So even if this visual test also became a latency test, the problem automagically solves itself, because you're forced into ultrabrief frametimes with ultratiny latency differences between sync technologies.
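To put rough numbers on it: with framerate=Hz VSYNC ON, the worst-case wait for the next refresh is about one refresh period, i.e. 1000/Hz milliseconds -- roughly 16.7ms at 60Hz, 4.2ms at 240Hz, and just 1ms at 1000Hz.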
For an experimental test, perfect framerate=Hz (ala VSYNC ON) is used to avoid a microstutter weak link that diminishes the difference between refresh rates. It's easier to tell apart 144Hz versus 180Hz if the content isn't microstuttering: 144fps at 144Hz versus 180fps at 180Hz is generally easier to tell apart than the same below-Hz framerate at 144Hz versus at 180Hz, so we're removing various weak links that may lower the "retina refresh rate" measured in an experiment.
Because the timestamp lands at exactly the same location in every refresh cycle, this keeps gametimes synchronous with refresh cycles:
1. Gametime clocks increase monotonically during VSYNC ON, keeping gametime:refreshtime in sync.
2. Object positions moving at constant speed always advance in exact steps, despite varying GPU render times.
3. Gametime does get internally jittered by the varying GPU rendertime...
4. ...but VSYNC ON does the equivalent of a 1-dimensional snap-to-grid, putting refreshtime back in sync with gametime.
5. Gametimes and refreshtimes stay in perfect sync.
Problem & weak link solved (see the sketch below).
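Here's a minimal sketch of that loop, assuming a D3D11/DXGI swapchain and a hypothetical RenderFrame() helper (illustrative only, not the actual test code):

Code:

#include <dxgi.h>

void RenderFrame(double objectX);  // hypothetical renderer, defined elsewhere

void RunTestLoop(IDXGISwapChain* swapchain)
{
    const double refreshtime = 1.0 / 240.0;  // example: a 240Hz display
    const double speed = 960.0;              // constant motion, pixels/second
    double gametime = 0.0;
    double objectX = 0.0;

    for (;;)
    {
        RenderFrame(objectX);  // rendertime varies, but stays < refreshtime

        // Blocking VSYNC ON Present(): returns at the next VBI, snapping the
        // loop back onto the refresh-cycle grid (the 1-D snap-to-grid).
        swapchain->Present(1, 0);

        // Gametime advances by exactly one refresh per refresh cycle, so the
        // object moves in exact steps: 960 * (1/240) = 4.0 px per refresh,
        // regardless of how long the GPU took to render each frame.
        gametime += refreshtime;
        objectX = speed * gametime;
    }
}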
Thanks to VSYNC ON, the retina refresh rate threshold measurable in a test increases. VSYNC ON isn't perfect (lag), but it's important for a researcher measuring a retina refresh rate threshold from a vision perspective. The sheer nature of retina refresh rates also diminishes latency, since VSYNC ON latency can be optimized to refreshtime instead of frametime, causing major latency drops as you increase refresh rates while trying to test humankind's retina refresh rate.
So multiple birds are hit with one stone. The intent was visual smoothness / blur / stroboscopic testing like The Stroboscopic Effect of Finite Frame Rates
as well as 1000Hz Journey
Thus, varying GPU rendertimes have no effect on this test, as long as gametime:refreshtime stays in sync (thanks to rendertimes staying shorter than refreshtime).