Question about Vsync in Nvidia Control Panel

Hello everyone.
So I set up Gsync as per the Blur Busters articles and set Vsync to on in the NVCP global settings and off in games. I'm pretty sure that when I had Gsync and Vsync enabled and looked into the per-program settings of a game, Vsync always said "Use the 3D application setting". I just checked again and now it says "Use global setting (On)". Did something change in the NVCP? I'm pretty sure it should say "Use the 3D application setting" even if I have it on in the global settings, no? Wouldn't it force me to have Vsync on in-game as well like this?
Attachments: Screenshot 2025-04-04 211524.jpg
Re: Question about Vsync in Nvidia Control Panel
Don't force vsync globally in NVCP. If you want to force it, do it per-profile.
With that being said, the per-profile settings will default to the global setting (unless Nvidia specifically changed the default for a specific game); otherwise, what's the point? If you set something in the global profile, then all per-application profiles will default to that. If you set vsync ON globally, then all profiles will default to vsync ON.
Note that Nvidia uses custom per-profile settings for some games/apps, so sometimes you will see a setting not set to "use global setting" by default. One example is the profile for Doom 2016 having power management set to "maximum performance" by default, ignoring the global setting unless you manually set it to "use global setting." It may be that, for some reason or other, Nvidia defaulted vsync to "use 3D application setting" for some games.
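For the scripting-inclined, here's roughly what that inheritance looks like through NVAPI's driver settings (DRS) interface: the base profile is the "global" profile, and anything not explicitly set in a per-app profile falls back to it. This is only a sketch, not a recommendation: the profile name "Doom" is a made-up example, and the VSYNCMODE_* identifiers are from NvApiDriverSettings.h as I recall them, so verify against your SDK headers before relying on any of it.

```cpp
// Sketch: force vsync on in ONE game's profile instead of the base (global)
// profile, via NVAPI's DRS interface. Error handling omitted; link nvapi64.lib.
// The profile name "Doom" and the VSYNCMODE_* values are assumptions --
// check NvApiDriverSettings.h in your NVAPI SDK.
#include "nvapi.h"
#include "NvApiDriverSettings.h"

int main() {
    NvAPI_Initialize();

    NvDRSSessionHandle session = nullptr;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);          // read the current driver settings

    // Per-application profile; any setting left untouched here falls back to
    // whatever the base (global) profile says -- the behavior described above.
    NvDRSProfileHandle profile = nullptr;
    NvAPI_DRS_FindProfileByName(session, (NvU16*)L"Doom", &profile);

    NVDRS_SETTING setting = {};
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = VSYNCMODE_ID;         // the NVCP "Vertical sync" setting
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = VSYNCMODE_FORCEON;
    NvAPI_DRS_SetSetting(session, profile, &setting);

    NvAPI_DRS_SaveSettings(session);          // persist to the driver store
    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
}
```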
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.
Re: Question about Vsync in Nvidia Control Panel
But isn't the Blur Busters recommendation literally to enable Vsync globally? That's what the articles said. Did something change?
If I have Vsync on globally in NVCP but off in game the ingame setting should override it anyway, right?
Re: Question about Vsync in Nvidia Control Panel
global driver vsync on may cause vrr to trigger on apps that weren't designed for that... and cause stutter and/or flicker, on displays with MPOs at least.
i don't think blurbusters' gsync101 guide takes MPOs into account... (and i generally wouldn't recommend disabling MPOs... having them on is generally good for gaming, to ensure that your game uses an independent flip presentation in more scenarios, which results in lower latency and better performance compared to running with the subpar "composed: flip" presentation...)
nvidia didn't have support for MPOs when blurbusters' gsync101 guide was written tbf
intel's gpu drivers have had support for MPOs at least as far back as 2016... but nvidia's official MPO support via drivers came in 2021
so i'd suggest trying in-game vsync on and/or driver vsync on in a game-specific driver profile; not the global driver profile. i keep the global one set to "use the 3d application setting"
(that and/or you could also force vsync on from special k. this software is generally not meant to be used with games that use some anti-cheat though)
in-game vsync on is generally fine to combine with gsync, unless it enables fractional vsync (fractional vsync would break vrr)
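if you're curious whether your display path even reports overlay support to apps, here's a rough dxgi sketch... i'm assuming IDXGIOutput2::SupportsOverlays is a good-enough proxy for MPO availability here; the finer-grained plane/format caps would need the IDXGIOutput3/IDXGIOutput4 check methods instead

```cpp
// sketch: ask each output of the first adapter whether it supports
// hardware overlays (the dxgi-visible side of MPOs). link with dxgi.lib.
// note: SupportsOverlays is a coarse yes/no; treat this as a rough probe.
#include <dxgi1_2.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    factory->EnumAdapters1(0, &adapter);      // first (usually primary) gpu

    ComPtr<IDXGIOutput> output;
    for (UINT i = 0; adapter->EnumOutputs(i, &output) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<IDXGIOutput2> output2;
        if (SUCCEEDED(output.As(&output2)))
            std::printf("output %u overlay support: %s\n", i,
                        output2->SupportsOverlays() ? "yes" : "no");
        output.Reset();
    }
}
```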
Re: Question about Vsync in Nvidia Control Panel
If you're referring to https://blurbusters.com/gsync/gsync101- ... ttings/14/, my recommendations never specified whether NVCP V-SYNC should be enabled globally, only that it be enabled over in-game V-SYNC where possible, and really only to rule out any per-game issues (99% of the time, it's acceptable to use an in-game double buffer V-SYNC option with G-SYNC over NVCP V-SYNC as well).
Correct. And I can't practically offer a single recommendation for MPO due to it being somewhat config-dependent.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series
Displays: ASUS PG27AQN, LG 48C4 Scaler: RetroTINK 4k Consoles: Dreamcast, PS2, PS3, PS5, Switch 2, Wii, Xbox, Analogue Pocket + Dock VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)
Re: Question about Vsync in Nvidia Control Panel
triple buffering normally wouldn't really make a difference when vrr is enabled with fps capped enough below the display's max refresh rate (or when nvidia's reflex low latency mode is doing that for us when using gsync + vsync on). hm i believe the gsync101 guide basically mentions that somewhere also.
however, for what it's worth, i like when games have a triple buffering option (or when the game is a d3d11 game, so i can safely adjust its buffer count with special k). d3d's "render ahead" triple buffering vsync (dxgi sync interval 1 with 3 buffers) can be beneficial to smooth out framerate some when/if a game falls into a "composed: flip" presentation (such as when the windows volume overlay is on top of the game while we're adjusting the windows volume on a display without MPOs), because the gpu can start working on a buffer sooner...
keep in mind also that vrr/gsync normally becomes inactive while the app/game is using "composed: flip" for presentation. further, while using "composed: flip", the fps may be halved in certain games with vsync on when/if the fps falls below the target, despite the dwm being active for composition. using triple buffering may prevent that fps-halving situation, as may the game using deferred rendering...
(the impact of double vs triple buffering on framerate with vsync enabled is coupled to how much rendering is done after the point in your frame where you touch the back buffer, since windows uses gpu synchronization for that. if you wait until the end of the frame and only copy into the back buffer, the impact is minimal and you could approach 60fps even with double buffering while going over 16.6ms frame budgets. triple buffering isn't the only thing that could prevent a game's fps from halving when using vsync on with fixed refresh, but yeah, it can depend on the game...)
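for reference, the "render ahead" triple buffering i'm describing is just a 3-buffer flip-model swapchain presented with sync interval 1... a minimal sketch (device/window creation omitted; treat it as illustrative, not as how any particular game does it):

```cpp
// sketch: "render ahead" triple buffering = 3 buffers + Present(1, 0)
// on a flip-model swapchain. assumes you already have a device and hwnd.
#include <dxgi1_2.h>
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<IDXGISwapChain1> MakeTripleBufferedSwapchain(ID3D11Device* device, HWND hwnd) {
    ComPtr<IDXGIDevice> dxgiDevice;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    ComPtr<IDXGIAdapter> adapter;
    dxgiDevice->GetAdapter(&adapter);
    ComPtr<IDXGIFactory2> factory;
    adapter->GetParent(IID_PPV_ARGS(&factory));

    DXGI_SWAP_CHAIN_DESC1 desc = {};               // width/height 0 = use window size
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 3;                     // the "triple" part
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD; // flip model
    ComPtr<IDXGISwapChain1> swapchain;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapchain);
    return swapchain;
}

// per frame: sync interval 1 = the sequential/"render ahead" vsync method
//   swapchain->Present(1, 0);
```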
and there are some differences between the older windowed bitblt model "composed: copy with gpu gdi" presentation and the newer windowed flip model "composed: flip" presentation. the older bitblt model in windowed mode would get the app's swapchain copied into an offscreen intermediate surface shared with gdi, and that surface is not available with "composed: flip" -- though an app that's using "composed: flip" for presentation has the ability (if certain criteria are met) to engage directflip/independent flip optimizations, which effectively bypass the dwm for presentation... thus allowing for lower latency and better performance... (plus windowed/borderless games using the dxgi flip model can use waitable swapchains with an independent flip presentation and even with a composed flip presentation, which allows even a composed flip presentation to result in lower latency than the older windowed bitblt "composed: copy with gpu gdi" presentation in certain cases. ideally you'd want your game to be using an independent flip presentation though...)
also, in some cases adding an extra buffer or more can increase total fps. and if the app/game is using DXGI_PRESENT_RESTART, or using sync interval 0 with the dxgi tearing flags disabled... we could then even have the app/game request 3 or more buffers, and the extra buffers basically wouldn't incur the typical extra latency (they'd normally consume some extra vram though), since those modes cancel the remaining time on the previously presented frame and discard a queued frame if a newer one arrives. oh, and an app/game may recover faster from some glitches this way. (a present-call sketch follows the links below)
https://learn.microsoft.com/en-us/windo ... in-present
https://learn.microsoft.com/en-us/windo ... m-glitches
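those two present variants would look roughly like this in code... a sketch only; the behavior notes in the comments are paraphrased from the msdn pages above:

```cpp
// sketch: the two queue-skipping present variants mentioned above,
// against a flip-model swapchain.
#include <dxgi1_2.h>

void PresentFrame(IDXGISwapChain1* swapchain, bool vsync) {
    if (vsync) {
        // vsync'd present; DXGI_PRESENT_RESTART cancels the remaining time
        // on the previously presented frame and discards this frame if a
        // newer frame is queued
        swapchain->Present(1, DXGI_PRESENT_RESTART);
    } else {
        // sync interval 0 with the tearing flag NOT set: still no tearing in
        // flip model; a queued-but-unshown frame can be replaced by a newer one
        swapchain->Present(0, 0);
    }
}
```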
note that there is a loose relationship between buffer count and requested max device latency/pre-rendered frames. with d3d's "deep queueing" support, the cpu does not need to wait for a buffer to be available before submitting rendering work, but the gpu needs to wait for it to be available before beginning its job on the buffers...
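(the "requested max device latency" knob, for reference... a sketch; for non-waitable swapchains it's set per-device via IDXGIDevice1:)

```cpp
#include <dxgi.h>
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// sketch: cap how many frames the cpu may queue ahead of the gpu,
// independently of the swapchain's buffer count. the dxgi default is 3.
void CapDeviceLatency(ID3D11Device* device, UINT maxLatency) {
    ComPtr<IDXGIDevice1> dxgiDevice;
    device->QueryInterface(IID_PPV_ARGS(&dxgiDevice));
    dxgiDevice->SetMaximumFrameLatency(maxLatency);
}
```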
waitable swapchains in windowed/borderless games using a dxgi flip swap effect can also reduce the latency incurred in certain scenarios where your game would've queued frames, such as when using the sequential/"render ahead" vsync method (sync interval 1). even fractional vsync can incur less latency if the game is using a properly configured waitable swapchain (though i still generally wouldn't recommend fractional vsync, especially if you were wanting to use vrr... since fractional vsync breaks vrr). there are quite a few d3d12 and d3d11 games using waitable swapchains these days btw. i keep seeing new unity games using this. maybe the unity engine started enabling the waitable object by default at some point. some version of unity did start enabling dxgi flip model by default for windowed/borderless... unity's current docs do mention that unity defaults to using the dxgi flip model for the direct3d 11 graphics api. (a minimal sketch of the waitable pattern follows these links)
https://docs.unity3d.com/6000.1/Documen ... chain.html
https://learn.microsoft.com/en-us/windo ... ableobject
https://learn.microsoft.com/en-us/windo ... wap-chains
https://learn.microsoft.com/en-us/windo ... chain_flag
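and a minimal sketch of the waitable-swapchain frame loop from those docs (creation boilerplate trimmed; the 1000ms timeout is arbitrary):

```cpp
// sketch: waitable swapchain pattern per the msdn pages linked above.
// at creation time, the swapchain needs the frame-latency waitable flag
// (windowed flip model only):
//   DXGI_SWAP_CHAIN_DESC1 desc = {};
//   desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
//   desc.Flags      = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;
#include <windows.h>
#include <dxgi1_3.h>

void RunFrameLoop(IDXGISwapChain2* swapchain) {
    swapchain->SetMaximumFrameLatency(1);     // 1 = lowest latency
    HANDLE waitable = swapchain->GetFrameLatencyWaitableObject();

    for (;;) {
        // block BEFORE input sampling/rendering until dxgi is ready to accept
        // the next frame, instead of queueing it -- this is where the latency
        // reduction comes from
        WaitForSingleObjectEx(waitable, 1000, TRUE);

        // ... sample input, render ...
        swapchain->Present(1, 0);
    }
}
```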
granted, waitable swapchains aren't supported in FSE/legacy flip, and the composed flip scenario won't happen if you're in FSE/legacy flip (so we wouldn't really have to worry about a game falling into the subpar composed flip presentation if a game is using FSE/legacy flip... though games running in FSE/legacy flip with a bitblt swap effect may fall back into the subpar bitblt "composed: copy with gpu gdi" presentation when you try to place something external on top of them. i believe most games in FSE/legacy flip will minimize, though, when you try to place something external on top, or the external window would be blocked from showing). also, sync interval 0 without tearing isn't possible in FSE/legacy flip... but tbh i imagine most people aren't actually using FSE/legacy flip nowadays.
anyway, i suppose next time someone says that the gsync101 guide recommends global driver vsync on, i can at least remind them that the guide doesn't actually specify that...
Re: Question about Vsync in Nvidia Control Panel
Gias wrote: ↑06 May 2025, 09:10
triple buffering normally wouldn't really make a difference when vrr is enabled with fps capped enough below the display's max refresh rate (or when nvidia's reflex low latency mode is doing that for us when using gsync + vsync on). hm i believe the gsync101 guide basically mentions that somewhere also.

Yes, I think you're referring to the last paragraph in:
https://blurbusters.com/gsync/gsync101- ... ettings/8/
Correct, I never strictly specified it should be set globally. Mandela Effect at work, I guess.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series
Displays: ASUS PG27AQN, LG 48C4 Scaler: RetroTINK 4k Consoles: Dreamcast, PS2, PS3, PS5, Switch 2, Wii, Xbox, Analogue Pocket + Dock VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)
