RealNC wrote:
mello wrote:
Chief Blur Buster wrote:
And what about triple buffering? How does it affect input lag in the best-case scenario (VSYNC ON) that you just described?
Triple buffering will always increase lag by at least one frame. The kind of triple buffering the Chief is talking about, which actually reduces lag, is not used anymore in games. It was last used about 15 years ago or so. These days, triple buffering just means one more render-ahead buffer compared to double buffering.
This isn't true. What happened is that it became non-obvious how to turn it on; today the option is simply more obscure and more confusing to find.
This is the lag-reducing triple buffering: http://www.anandtech.com/show/2794
It is still possible to do today if you use utilities to configure the drivers, though it can sometimes be tricky to confirm. An easier route is windowed mode. The old style of triple buffering occurs when you use windowed + VSYNC OFF, since the compositing layer grabs only the latest completed frame, once per refresh. You see no tearing during VSYNC OFF, yet input lag is reduced in windowed mode! However, the compositing layer sometimes adds two frames of lag on some GPUs, so it behaves like a double buffer stacked in series with 'proper' triple buffering, and you get 1 frame of extra lag (but that is still better than 2 frames of lag). On GPUs and Windows drivers that add only 1 frame for compositing, it becomes equal to old-fashioned proper triple buffering (no penalty over the true, intentional 'proper' triple buffers of yesteryear). In all cases, you get less lag than VSYNC ON windowed mode, but no tearing!
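For the curious, the difference between the modern render-ahead "triple buffering" and the classic lag-reducing kind can be sketched in a toy simulation. This is hypothetical illustration code, not any real swap-chain API; the queue depth and render rate are assumptions for the example:

```python
def simulate(renders_per_refresh, refreshes, mode, depth=3):
    """Return how 'old' (in rendered frames) the displayed frame is
    at each refresh.

    mode="fifo":   render-ahead queue, what many modern 'triple buffer'
                   settings actually do. Frames wait in line, adding lag.
    mode="latest": classic lag-reducing triple buffering. The scanout
                   always flips to the most recently completed frame.
    """
    queue = []
    ages = []
    next_id = 0
    for _ in range(refreshes):
        for _ in range(renders_per_refresh):
            if mode == "fifo" and len(queue) >= depth:
                break  # renderer blocks: the swap chain is full
            queue.append(next_id)
            next_id += 1
        if mode == "latest":
            shown = queue[-1]  # newest completed frame wins
            queue = []         # older completed frames are simply dropped
        else:
            shown = queue.pop(0)  # oldest queued frame is shown first
        ages.append(next_id - 1 - shown)
    return ages

# Rendering twice per refresh: the FIFO queue settles at 2 frames of
# extra lag, while latest-frame-wins always shows the newest frame.
print(simulate(2, 6, "fifo"))    # → [1, 2, 2, 2, 2, 2]
print(simulate(2, 6, "latest"))  # → [0, 0, 0, 0, 0, 0]
```

The "latest" branch is what the compositing layer effectively does in windowed + VSYNC OFF: grab the newest finished frame at each refresh and discard the rest.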
For games that have a "full screen windowed mode": turn that on, and you see lag increase. Now turn off VSYNC: you see lag decrease, but with no tearing.
Also, I wish all GPUs would keep compositing fast and at minimum depth, only 1 frame deep where possible, by doing just-in-time compositing in-GPU during VSYNC. Then it carries no lag penalty over proper triple buffering. Memory bandwidth is fast enough to finish compositing in just a few hundred microseconds, so compositing can be done just-in-time on the GPU. Manufacturers that do not should get on that ball (it is already being done in some GPUs) and optimize compositing to be as shallow as possible. Then only the double-buffer style lag penalty remains, which VSYNC OFF removes, and compositing becomes the de facto proper triple buffer. Some say it is 'emulated', but it is a duck: it looks like a duck, it quacks like one, it reduces lag without tearing. A real proper triple buffer is occurring, just potentially tape-delayed a bit by inefficient compositing! (But still less lag than double buffering in windowed mode on the same GPU!)
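To back up that "few hundred microseconds" figure, here is a back-of-envelope calculation. The bandwidth number is an illustrative assumption, not a measurement of any specific GPU:

```python
# Back-of-envelope: time to composite one frame, limited by GPU memory
# bandwidth. All numbers below are illustrative assumptions.
width, height, bytes_per_pixel = 1920, 1080, 4
frame_bytes = width * height * bytes_per_pixel   # ~8.3 MB per 1080p frame
bandwidth = 100e9                                # assume 100 GB/s, a modest GPU

# Compositing reads the source frame and writes the output: ~2x the traffic.
composite_seconds = (2 * frame_bytes) / bandwidth
print(round(composite_seconds * 1e6), "microseconds")  # → 166 microseconds
```

Even at this modest assumed bandwidth, compositing fits comfortably inside a fraction of a 16.7 ms refresh at 60 Hz, so there is no bandwidth reason to buffer an extra frame.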
Mind you, double buffering is better for strobe motion quality in the best-case scenario (perfect framerate-refreshrate sync). But if you hate input lag and/or get stutters or framerate slowdowns, 'proper' triple buffering is better.
This may be an interesting test to do at some point.