Adaptive VSYNC VS VSYNC input lag

jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Adaptive VSYNC VS VSYNC input lag

Post by jorimt » 13 Nov 2018, 12:26

petrakeas wrote:it was a nice explanation nevertheless (better than the one presented in your article ;) ).
Granted, it's in the name ("G-SYNC 101"); my article sacrifices a more complex explanation of V-SYNC to cover the "101" basics of why G-SYNC is better than traditional sync in the ways that it is. There are plenty of other sources available (as you've noted) that break V-SYNC functionality down in more detail.

Glad my post's last description was clearer for you though.
petrakeas wrote:The reason I suspected (not tested yet, but will test it when I find some time) that Adaptive VSync may be faster than NVCP VSync is that I think the latter may add one extra frame buffer to the queue and thus add 1 extra frame of latency.
As far as I know, in exclusive fullscreen mode, Adaptive V-SYNC uses whatever V-SYNC the game implements (double or triple buffer) when the framerate is above the refresh rate.

I think what you're suggesting is that Adaptive V-SYNC forces double buffer V-SYNC when above the refresh rate, no matter what.

Is that the case? I don't think so, but I can't be sure for all cases.

Even if it did, that would simply mean that, at best, Adaptive V-SYNC with the framerate above the refresh rate is only as low-lag as standalone double buffer V-SYNC without an FPS limit. Standard V-SYNC with an FPS limit set below the refresh rate would still have lower lag than Adaptive V-SYNC at framerates above the refresh rate (the only time V-SYNC is actually active with Adaptive), double or triple buffer either way.
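
If it helps, here's a rough sketch (C++, with made-up names; this is not NVIDIA's actual driver code) of the one decision Adaptive V-SYNC adds on top of whatever buffering the game already uses:

Code:
// Hypothetical sketch; "AdaptiveVsync" and its members are invented
// for illustration, not NVIDIA driver internals.
struct AdaptiveVsync {
    double refreshIntervalMs; // e.g. 16.7 at 60Hz, 6.9 at 144Hz

    // V-SYNC stays on only while the game keeps up with the refresh rate;
    // below it, frames tear instead of snapping to 30/20/15 FPS multiples.
    bool syncThisFrame(double lastFrametimeMs) const {
        return lastFrametimeMs <= refreshIntervalMs;
    }
};
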
petrakeas wrote:What I haven't figured out though (slightly off-topic) is how maximum pre-rendered frames play along with frame buffers of VSync.
From what I know, frames are only rendered ahead when a set frametime can't be determined (fluctuating, uncapped framerate).

For instance, if the system can sustain a constant 144+ FPS on a 144Hz monitor, and you limit the FPS to 141 with G-SYNC, the engine is now guaranteed a static, dependable frametime target, making the pre-rendered frames queue effectively 0. That holds until the framerate drops below the limit, at which point pre-rendered frame creation resumes.
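
To illustrate the principle (a toy sketch, not RTSS's actual code), a fixed frametime target boils down to stalling the game before each frame is submitted, so the CPU never runs ahead of the GPU and the queue stays empty:

Code:
#include <chrono>

// Minimal sketch of an RTSS-style external limiter (illustrative only).
// Call once per frame, e.g. limitFrame(1000.0 / 141.0) for a 141 FPS cap.
void limitFrame(double targetFrametimeMs)
{
    using clock = std::chrono::steady_clock;
    static auto nextDeadline = clock::now();

    nextDeadline += std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double, std::milli>(targetFrametimeMs));

    // Busy-wait for precision; this stall is what keeps the
    // pre-rendered frames queue at effectively 0.
    while (clock::now() < nextDeadline) { /* spin */ }
}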

I have numbers from some of my previous off-the-record high-speed camera tests to back this up, with discussion:
viewtopic.php?f=5&t=3441&start=110#p26975

The tests in the above link show that an RTSS cap (with G-SYNC in this instance) effectively cancels out the pre-rendered queue in that scenario.

Not sure if it's exactly the same with V-SYNC, as I'm not really an expert on (or too interested in) the whole pre-rendered frames subject.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

RealNC
Site Admin
Posts: 3757
Joined: 24 Dec 2013, 18:32

Re: Adaptive VSYNC VS VSYNC input lag

Post by RealNC » 14 Nov 2018, 00:46

petrakeas wrote:What I haven't figured out though (slightly off-topic) is how maximum pre-rendered frames play along with frame buffers of VSync. I have read some threads, but we can't be sure how Nvidia has implemented this. For example, in double-buffered VSync and 1 maximum pre-rendered frame, what is the maximum number of queued frames possible (if no framerate limit is engaged by the game or an external tool)? It could be 2: 1 fully rendered frame buffer waiting to be flipped with the other buffer, and 1 pre-rendered frame (commands waiting to be executed). Or it could be just 1: 1 fully rendered frame (GPU blocks until the buffer is flipped).
It's still double buffered vsync. One front, one back. Nothing changes in that respect. However, I suspect the reason you often don't see FPS halving with double buffering is that frame presentation is asynchronous on "modern" systems ("modern" meaning not a DirectX 7-era system or something like that). The introduction of the pre-render queue (a result of multi-core CPUs becoming mainstream) meant that vsync does not always block the game, as that would prevent it from pre-rendering. So in many cases, double buffering will look like triple buffering. I say "in many cases" because it also depends on how the game's rendering engine works. Some games will still suffer double buffer FPS halving, while others won't.

As you already know, the reason FPS halving happens is that the game is prevented from doing any further work prior to flipping the two buffers. It just waits there, doing nothing, so if the 16.7ms deadline is missed, the game is blocked until the next 16.7ms interval. With the pre-render system, however, the game is not blocked. Even though only two frame buffers are available, the game is allowed to run. The present API call is asynchronous and returns to the caller rather than blocking. This means no FPS halving, as long as the buffers get flipped before the game actually tries to use the back buffer. The game will first start preparing the next frame and put it in the pre-render queue. While the game does that, the video driver might flip the front and back buffers because the next vsync was reached.
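
A toy model of that (my simplification; real driver queueing is more involved) looks something like this:

Code:
#include <cstddef>
#include <deque>

// Toy model, not actual driver code: asynchronous presentation with a
// pre-render queue layered on top of plain double buffering.
struct FakeDriver {
    std::deque<int> preRenderQueue;       // frames the game got "ahead" by
    std::size_t maxPreRenderedFrames = 1; // the driver's queue limit

    // Present() returns immediately instead of blocking on the flip.
    // The game only stalls once it is a full queue ahead, so narrowly
    // missing one 16.7ms deadline no longer idles it until the next vblank.
    bool present(int frameId) {
        if (preRenderQueue.size() >= maxPreRenderedFrames)
            return false;                 // the game would block here
        preRenderQueue.push_back(frameId);
        return true;                      // the game keeps running
    }

    // At each vblank the driver flips front/back and drains one queued frame.
    void onVblank() {
        if (!preRenderQueue.empty())
            preRenderQueue.pop_front();
    }
};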

I've noticed over the years that double buffer vsync would not halve my FPS in most cases when the game dropped from 60FPS to, say, 55FPS or even 50FPS. However, if it dropped further, then where you'd expect 45FPS or 40FPS, you instead get a 30FPS lock. So it seems asynchronous frame presentation gives most games some headroom, instead of a hard 16.7ms limit (or whatever one refresh cycle is at the display's Hz).
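
One way to put illustrative numbers on that headroom (my own reading, assuming a frame's cost splits between CPU work and GPU work): the pre-render queue lets the CPU prepare frame N+1 while the GPU draws frame N, so the frame period tends toward the larger of the two costs instead of their sum:

Code:
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const double vblank = 16.7;        // 60Hz refresh interval in ms
    const double cpu = 9.0, gpu = 9.0; // illustrative per-frame costs

    // Blocking present: CPU and GPU work serialize, and a missed deadline
    // waits out a full vblank, so 18 ms of work rounds up to 33.4 ms (30 FPS).
    double blocking = std::ceil((cpu + gpu) / vblank) * vblank;

    // With a pre-render queue, CPU and GPU overlap across frames, so the
    // period is the larger of the two (but never below one vblank): 60 FPS.
    double queued = std::max({cpu, gpu, vblank});

    std::printf("blocking: %.1f ms/frame (~%.0f FPS)\n", blocking, 1000.0 / blocking);
    std::printf("queued:   %.1f ms/frame (~%.0f FPS)\n", queued, 1000.0 / queued);
    return 0;
}

Once the GPU cost alone exceeds one refresh cycle, the overlap can't help anymore, which would match the sudden snap to a 30FPS lock.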

If we could completely disable the pre-render system, I think we would always get FPS halving again in all games, even with just a 1FPS drop below 60. However, that's no longer possible; you could do it many years ago, but in modern drivers I doubt there's even a code path left that has no pre-render buffer.
Steam · GitHub · Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

petrakeas
Posts: 15
Joined: 27 Apr 2018, 06:34

Re: Adaptive VSYNC VS VSYNC input lag

Post by petrakeas » 14 Nov 2018, 08:23

@RealNC nice explanation. Some questions :)
The present API call is asynchronous and returns to the caller rather than blocking
Will it block when the pre-rendered frames queue is filled?
The game will first start preparing the next frame and put it in the pre-render queue. While the game does that, the video driver might flip the front and back buffers because the next vsync was reached.
I guess the GPU-intensive part is when the pre-rendered frame gets rendered into a frame buffer. And it seems that if we only have 1 frame buffer available to work on, we won't be able to avoid FPS halving (if we are GPU-bound). I'll explain what I mean with an example. Let's imagine that the GPU needs 18 ms to render a frame in a particular scene (16.7 ms per sync). It starts rendering into the second buffer and misses one sync. It finishes rendering (about 1.3 ms after the sync), and the game starts preparing the next pre-rendered frame and queues it. However, that pre-rendered frame will not be rendered yet, because there is no frame buffer to render into. It will have to wait until the next sync, when the buffers get flipped, to start rendering into the freed frame buffer. Thus, the GPU will not be able to take full advantage of the time left until the next sync (the CPU and the game itself will gain some time to do some stuff though ;) ). Am I right, or did I miss something?
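
To make the timeline explicit (illustrative numbers, and a simplified model of the flip logic):

Code:
#include <cmath>
#include <cstdio>

int main() {
    const double vblank = 16.7; // 60Hz sync interval in ms
    const double render = 18.0; // GPU render time per frame in this scene

    double bufferFree = 0.0;    // when the back buffer becomes writable
    for (int frame = 0; frame < 4; ++frame) {
        double done = bufferFree + render;               // render finishes
        double flip = std::ceil(done / vblank) * vblank; // next sync after
        std::printf("frame %d: GPU busy %.1f-%.1f ms, flip at %.1f ms\n",
                    frame, bufferFree, done, flip);
        bufferFree = flip; // the buffer is only freed by the flip
    }
    return 0; // flips land every 33.4 ms, i.e. a 30 FPS lock
}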
