About triple buffering

Talk about NVIDIA G-SYNC, a variable refresh rate (VRR) technology. G-SYNC eliminates stutter and tearing, and reduces input lag. List of G-SYNC Monitors.
Khysarth
Posts: 8
Joined: 02 Dec 2018, 18:56

About triple buffering

Post by Khysarth » 12 Dec 2018, 15:51

If I'm using the optimal G-SYNC settings, should I have the NVCP triple buffering off or on? Or does it not matter?

Chief Blur Buster
Site Admin
Posts: 8289
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: About triple buffering

Post by Chief Blur Buster » 12 Dec 2018, 18:10

If you're using optimal settings and capping framerate just below max Hz, it doesn't matter. The buffering behaviour doesn't activate except momentarily, for those specific frametimes shorter than the duration of a max-Hz refresh cycle. So if you have a perfect 237fps cap on a 240Hz monitor, the triple buffer behaviour never activates.

However, there can potentially be transient frametime variations (e.g. one frametime is 1/235sec and another is 1/241sec) -- fluctuating frametimes can trigger the buffering behaviour. That's why we often recommend capping a few frames per second below max Hz, to give those fluctuations some breathing room.
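That arithmetic can be sketched as a tiny simulation (all names here are illustrative, not any real API): count how many frametimes dip under one max-Hz refresh cycle, since those are the frames that can momentarily trigger the buffering behaviour.

```python
# Hypothetical sketch: why capping a few fps below max Hz gives frametime
# fluctuations breathing room. buffering_risk is an illustrative name.

def buffering_risk(frametimes_s, max_hz):
    """Count frames whose frametime is shorter than one max-Hz refresh
    cycle -- the frames that can momentarily trigger the extra
    buffering behaviour under G-SYNC."""
    refresh_period = 1.0 / max_hz
    return sum(1 for ft in frametimes_s if ft < refresh_period)

# A "perfect" 237 fps cap on a 240 Hz panel never dips below the
# refresh period, but a real cap fluctuates a little:
steady = [1 / 237] * 100
fluctuating = [1 / 235, 1 / 241] * 50  # some frames faster than 1/240 s

print(buffering_risk(steady, 240))       # 0  -- never activates
print(buffering_risk(fluctuating, 240))  # 50 -- the 1/241 s frames trigger it
```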
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

jorimt
Posts: 1400
Joined: 04 Nov 2016, 10:44
Location: USA

Re: About triple buffering

Post by jorimt » 12 Dec 2018, 21:54

Khysarth wrote:If I'm using the optimal G-SYNC settings, should I have the NVCP triple buffering off or on? Or does it not matter?
To add to what the Chief said...

Short answer:
Off.

Long answer:
G-SYNC functionality is technically based on a double buffer, so, at best, with a proper FPS limit in place, triple buffer will have zero benefit (let alone an effect). At worst, it could be a detriment in edge cases where buffer behavior kicks in with G-SYNC enabled (as seen with framerates near the max refresh rate when G-SYNC is paired with Fast Sync, for instance; though Fast Sync is a form of "true" triple buffer, while the "Triple buffering" option in the control panel you're speaking of is an alternate form).

And not to rub this in, but if you've read my optimal G-SYNC settings, I've already given the answer there...
https://www.blurbusters.com/gsync/gsync ... ttings/14/

The second item under the "In-game settings" section of that page states:
Disable all available “Vertical Sync,” “V-SYNC” and “Triple Buffering” options [in-game].
And the "Nvidia Control Panel V-SYNC vs. In-game V-SYNC" section further down that page states, in part:
some in-game V-SYNC solutions may introduce their own frame buffer or frame pacing behaviors, enable triple buffer V-SYNC automatically (not optimal for the native double buffer of G-SYNC), or simply not function at all, and, thus, NVCP V-SYNC [double buffer] is the safest bet.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: Acer Predator XB271HU / LG 48CX OS: Windows 10 MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

RealNC
Site Admin
Posts: 3073
Joined: 24 Dec 2013, 18:32

Re: About triple buffering

Post by RealNC » 12 Dec 2018, 23:11

Also, the setting only applies to OpenGL games. For DirectX games, it has no effect.
Steam | GitHub | Stack Overflow | Twitter
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

jorimt
Posts: 1400
Joined: 04 Nov 2016, 10:44
Location: USA

Re: About triple buffering

Post by jorimt » 13 Dec 2018, 13:50

^ In 99% of cases, that is correct, though I have heard it can actually have an effect in certain DX games (I don't know which ones myself; maybe those that have an existing in-game triple buffer option or specific flip queue behavior?).

Sparky
Posts: 666
Joined: 15 Jan 2014, 02:29

Re: About triple buffering

Post by Sparky » 13 Dec 2018, 21:39

Off. All the problems that triple buffering solves are already fixed by G-SYNC. It would only come into play in some weird edge cases (dramatic changes in framerate), and to no benefit.

drameloide
Posts: 2
Joined: 29 Sep 2020, 11:56

Re: About triple buffering

Post by drameloide » 19 Oct 2020, 20:55

Chief Blur Buster wrote:
12 Dec 2018, 18:10
If you're using optimal settings and capping framerate just below max Hz, it doesn't matter. The buffering behaviour doesn't activate except momentarily, for those specific frametimes shorter than the duration of a max-Hz refresh cycle. So if you have a perfect 237fps cap on a 240Hz monitor, the triple buffer behaviour never activates.

However, there can potentially be transient frametime variations (e.g. one frametime is 1/235sec and another is 1/241sec) -- fluctuating frametimes can trigger the buffering behaviour. That's why we often recommend capping a few frames per second below max Hz, to give those fluctuations some breathing room.
Double buffering and triple buffering are static buffers, not dynamic like CPU pre-rendered frames.

Chief Blur Buster
Site Admin
Posts: 8289
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: About triple buffering

Post by Chief Blur Buster » 19 Oct 2020, 22:17

drameloide wrote:
19 Oct 2020, 20:55
Chief Blur Buster wrote:
12 Dec 2018, 18:10
If you're using optimal settings and capping framerate just below max Hz, it doesn't matter. The buffering behaviour doesn't activate except momentarily, for those specific frametimes shorter than the duration of a max-Hz refresh cycle. So if you have a perfect 237fps cap on a 240Hz monitor, the triple buffer behaviour never activates.

However, there can potentially be transient frametime variations (e.g. one frametime is 1/235sec and another is 1/241sec) -- fluctuating frametimes can trigger the buffering behaviour. That's why we often recommend capping a few frames per second below max Hz, to give those fluctuations some breathing room.
Double buffering and triple buffering are static buffers, not dynamic like CPU pre-rendered frames.
1. Sometimes, but not necessarily (nowadays, in modern Windows 10 GPU drivers).
2. The Venn diagram overlaps.

There can be multiple buffer responsibilities because of how the swapchain works -- for example, DWM can behave as an additional buffer, turning double-buffer behavior into triple-buffer behavior.

Also, there are two kinds of triple buffering in the abused terminology -- triple buffering as a max prerendered queue of 2 (the laggy kind of "triple buffer" in terminology abuse), and triple buffering as the low-lag new-buffer-replaces-old-buffer kind (whether in the drivers, or via an intermediary app-supplied or OS-supplied buffer, such as DWM acting as the pre-emptible buffer).
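The latency difference between those two kinds can be shown with a toy simulation (all names hypothetical). The renderer completes two frames per refresh (fps well above Hz), and the display scans out one buffered frame per refresh. Note that a real queued swapchain caps its depth and blocks the renderer instead of growing; the sketch omits that to keep the latency difference visible.

```python
# Toy sketch contrasting the two "triple buffer" behaviours described
# above: a FIFO frame queue (the laggy kind) vs. newest-frame-replaces-
# waiting-frame (the low-lag kind). Names are illustrative only.

from collections import deque

def simulate(policy, refreshes=6):
    """Return which frame the display shows on each refresh."""
    queue = deque()
    shown = []
    next_frame = 0
    for _ in range(refreshes):
        for _ in range(2):  # two frames finish per refresh cycle
            if policy == "fifo":
                queue.append(next_frame)     # laggy kind: every frame queues up
            else:
                queue = deque([next_frame])  # low-lag kind: newest replaces waiting
            next_frame += 1
        shown.append(queue.popleft())        # display takes one frame per refresh
    return shown

print(simulate("fifo"))     # [0, 1, 2, 3, 4, 5]  -- shows ever-staler frames
print(simulate("replace"))  # [1, 3, 5, 7, 9, 11] -- always the newest frame
```

The FIFO policy keeps serving frames rendered further and further in the past, while the replace policy always presents the most recently completed frame, which is why the latter is the low-lag variant.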

Nonetheless, the way modern GPU drivers do it now, the "1" of Max Prerendered Frames in NVIDIA drivers can pretty much be the back buffer of a double buffer system (the buffer other than the front buffer) -- at least this is how it works in the NVIDIA driver implementation, with the fastest swapchain implementation in the application; when presented, it also instantly gets flagged as the waiting back buffer in many workflows. Specifically under Windows 10, in full screen exclusive mode (excluding DWM's role as a de facto third buffer).

The buffers can be in the app (application buffers), in Windows (DWM), or in the drivers (Max Prerendered Frames / front buffer). The drivers nowadays generally integrate it all into a unified buffer stack, with just metadata attached internally in the drivers (the front buffer, the waiting back buffer, and all the tertiary additional buffers). All these buffers are stored in GPU memory in the same data format, so they just have to be kept in order and flagged as to which is the front buffer, etc. The two most important buffers, obviously, are the front buffer and the immediate-in-waiting back buffer -- but those are just flags attached to buffers (that used to be distant buffers in a frame queue) in a unified buffer stack, where one buffer is currently flagged as the front buffer being scanned out one pixel row at a time through the GPU output.

Also, if your application implements Direct3D + waitable swapchains, and max prerender 1, then you've got just a back buffer and a front buffer, which is fairly deterministic if you're keeping good track of time: you're pretty much close to the latency metal, with Present() displaying on screen right after the next VBI (aka the next refresh cycle), with no other buffers in the way. That's how emulators historically tend to do it for minimum lag.
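A sketch of that low-lag loop, using a toy stand-in for the real DXGI waitable object (none of the names below are real D3D/DXGI calls; in practice this is `IDXGISwapChain2`'s frame-latency waitable with a maximum frame latency of 1):

```python
# Hypothetical sketch of the waitable-swapchain loop described above:
# block until the swapchain has room (1 frame deep), render as late as
# possible, then present. The class is a toy stand-in, not a real API.

import time

class WaitableSwapchain:
    """Toy stand-in: the 'waitable' unblocks once per refresh period,
    which is what keeps the render loop from running ahead of the
    display and piling up extra buffers."""
    def __init__(self, hz):
        self.period = 1.0 / hz
        self.next_vblank = time.monotonic() + self.period

    def wait(self):
        # Real code: WaitForSingleObjectEx on the latency waitable.
        delay = self.next_vblank - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self.next_vblank += self.period

    def present(self, frame):
        pass  # real code flips the back buffer here (Present())

chain = WaitableSwapchain(hz=240)
for frame in range(3):
    chain.wait()          # block until the 1-deep queue has room
    # ... render the frame as late as possible for freshest input ...
    chain.present(frame)  # shown on the next refresh, no extra buffers
```

The key design point the sketch illustrates is ordering: waiting *before* rendering means input is sampled as close to the flip as possible, which is the emulator-style minimum-lag pattern described above.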

Various settings in NULL (NVIDIA Ultra Low Latency) can influence the buffer handling (kind of like yesterday's Max Prerendered Frames 0, which was a somewhat controversial discussion back in the day), but software can influence the behavior too, to an extent, turning a Max Prerendered Frames of 1 into either a double buffer or a (laggy) triple buffer, so the latency can vary depending on how the application handles the swap chain. For precision latency control, applications use the waitable features (waitable swapchains), and then Max Prerendered Frames 1 is the immediate back buffer in this particular workflow.

You're talking to someone who wrote Tearline Jedi -- the mastery of timing-precise tearline steering (to raster interrupt-like precision). I think I know enough about double buffering and triple buffering -- not everything, mind you -- but enough to be a bit dangerous...

Sometimes the terminology "double buffer" and "triple buffer" is kind of archaic, because there are a lot of custom buffering workflows that can be piggybacked/daisychained: by the application (its own application-level in-memory frame buffers, outside the GPU drivers' frame queue), by the drivers (in NVIDIA's implementation, the front buffer/back buffers, including the prerendered queue as part of the existing buffering system that is also part of double buffering -- mostly semantics, schemantics that blur into each other nowadays), or by Present() hooks such as RTSS / SweetFX / Kaledian / Freestyle / etc. (which may or may not add their own frame queue).

As you know, I was the one who helped Guru3D add RTSS Scanline Sync, and I was the one who came up with the idea of Lagless VSYNC for Emulators.

The world is vastly more complex than just "double buffer" and "triple buffer", and it is not siloable with year-2020 drivers on year-2020 operating systems, unlike what one may have been taught in classical GPU driver workflows. Some of what happens in the world does deviate from what I say, but fundamentally the world isn't really monolithic "double buffer" or "triple buffer".

Blur Busters specializes in the temporal Pandora black box of Present()-to-photons...
