BenQ XL2720Z VSYNC OFF Optimize input lag?

Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

Re: BenQ XL2720Z VSYNC OFF Optimize input lag?

Post by Trip » 04 Jul 2014, 08:27

An often overlooked part of the input lag chain is multicore rendering. Parallel processing can be done in a multitude of ways. Some work is truly separate, with no dependency on other threads, like sound versus video. But things like AI, physics, player input and networking are often said to run in parallel to the rendering thread, which is true in a certain sense; the rendering thread still depends on all of that being updated before it can show a correct picture of the game world. Input latency is mostly felt when the rendering thread is using player input that is already older than it needs to be.
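To illustrate that dependency, here is a deliberately simplified sketch (my own illustration, not code from any specific engine) of a pipelined loop where the renderer always draws the previous frame's game state, so the input sampled for frame N only reaches the screen with frame N+1:

[code]
// Simplified pipelined game loop (my own illustration, not a real engine).
// While frame N is being simulated, the renderer draws frame N-1, so the
// input sampled for frame N is already one frame old when it appears on screen.
struct GameState { float playerYaw = 0.0f; };

float pollMouse() { return 0.1f; }              // stand-in for real input polling

GameState simulate(const GameState& prev) {
    GameState next = prev;
    next.playerYaw += pollMouse();              // frame N uses freshly polled input
    return next;
}

void render(const GameState&) { /* submit draw calls for this state */ }

int main() {
    GameState previous{}, current{};
    for (int frame = 0; frame < 3; ++frame) {
        current = simulate(previous);           // game thread works on frame N...
        render(previous);                       // ...while the renderer draws N-1
        previous = current;
    }
}
[/code]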

Unreal Engine 3 has a variable called OneFrameThreadLag, which allows the model to start its next update straight after it has finished the previous one. What happens is that the player input is essentially buffered, so during the next frame the render thread can start working straight away instead of waiting for the model. This can increase frame rates, but it can also introduce input latency.
If you own any Unreal Engine 3 game that allows you to change this variable, you should try it out. In Task Manager you should see the load go from mainly two threads at high usage to one thread at high usage. What I do not know is whether the engine lets the render thread poll the model thread, or whether the model just creates an update and stores it until the render thread grabs it (pre-processing). In the first case the input latency shouldn't be that bad, but the latter can be really bad.
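For reference, in most UE3 games this setting lives in one of the engine .ini files; the exact file name and section vary per game, so treat this as a typical example rather than a universal recipe:

[code]
; <GameName>Engine.ini -- file name and section can differ per UE3 title
[Engine.Engine]
OneFrameThreadLag=False ; False = render thread waits for the current model update
[/code]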

Of course user input is not the only thing that can lag behind. Something that can be even worse is networking: a ping to a server over the internet lies around 20-30 ms in the best case, but 50-100 ms isn't uncommon. The worse the ping, the more your own world lags behind the other players' worlds. This problem is somewhat countered by interpolation and extrapolation algorithms that smooth or predict player positions, but these are in no way perfect and can introduce side effects like failing hit registration, rubber banding and other issues.
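As a rough sketch of how such interpolation usually works (hypothetical names, not from any particular engine): the client renders remote players a fixed delay in the past and blends between the two snapshots that bracket that moment, which hides network jitter at the cost of extra latency:

[code]
// Client-side entity interpolation sketch (hypothetical names).
// Remote players are rendered ~100 ms in the past, blended between the
// two received snapshots around that time. Assumes history is non-empty.
#include <deque>

struct Snapshot { double time; float x, y, z; };

float lerp(float a, float b, float t) { return a + (b - a) * t; }

Snapshot interpolate(const std::deque<Snapshot>& history, double now)
{
    const double kInterpDelay = 0.1;            // render 100 ms in the past
    const double renderTime = now - kInterpDelay;

    for (size_t i = 1; i < history.size(); ++i) {
        const Snapshot& a = history[i - 1];
        const Snapshot& b = history[i];
        if (a.time <= renderTime && renderTime <= b.time) {
            float t = float((renderTime - a.time) / (b.time - a.time));
            return { renderTime, lerp(a.x, b.x, t),
                     lerp(a.y, b.y, t), lerp(a.z, b.z, t) };
        }
    }
    return history.back();                      // no bracket: fall back to newest data
}
[/code]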

Other things, like physics, are also difficult to implement, especially in multiplayer. You want that piece of rock to fall the same way for every player, so the obvious thing would be to just offload it to the server, but that adds at least the ping time to the server. That is why a client usually also does its own processing: the rock then does not visually lag behind, but it effectively still does, because it has not been calculated on the server side yet. What could also happen is the rock updating at irregular intervals, first showing you the local calculation and then the server's; if the pattern is too irregular it could bounce back and forth, which looks like stuttering.
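A sketch of that client-side approach (hypothetical names again): the client simulates the rock itself every frame, and when the authoritative server state arrives a ping later, it corrects toward it a little at a time; snapping straight to the server value is exactly what produces the back-and-forth bouncing described above:

[code]
// Client-side physics prediction with soft server correction (hypothetical).
struct RockState { float pos; float vel; };

void simulateLocally(RockState& rock, float dt)
{
    const float kGravity = -9.81f;
    rock.vel += kGravity * dt;                  // predict locally, no waiting on the server
    rock.pos += rock.vel * dt;
}

void applyServerUpdate(RockState& rock, const RockState& server)
{
    const float kBlend = 0.1f;                  // small per-frame correction; kBlend = 1
                                                // (snapping) makes the bouncing visible
    rock.pos += (server.pos - rock.pos) * kBlend;
    rock.vel += (server.vel - rock.vel) * kBlend;
}
[/code]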

As you can see, the nicely multithreaded engine of Battlefield 4 actually has its drawbacks, and it shows in actual input latency testing, like in the Blur Busters article about G-SYNC input lag:
http://www.blurbusters.com/gsync/preview2/
I believe it also enables triple buffering by default, which is of course another variable that introduces latency if it is implemented as just another buffer that pre-renders an extra frame, rather than throwing away the old frame and showing only the newest one.

Counter-Strike: Global Offensive's behavior of high input lag when G-SYNC is enabled at 143 fps might also be explained by the model only being updated 128 times per second (128 tick). There could be frames that hold the same information because the model hasn't been updated in between. With G-SYNC off the problem isn't there, since even if it shows two identical consecutive frames, the following frames will still update. Although I am not sure this is the cause, it seems really likely, since at 120 fps the problem is fixed. As a side note, CS:GO might also have a multicore rendering option (CSS at least did), and disabling it might also decrease latency.
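Some quick arithmetic backs this up: 143 frames per second sampling a 128 Hz simulation leaves 143 - 128 = 15 frames per second with no new data, while at 120 fps every frame gets a fresh tick. A little test program (my own, just to check the numbers):

[code]
// Counts how many frames per second would show a simulation tick that was
// already shown, for a given fps and tickrate.
#include <cmath>
#include <cstdio>

int duplicateFramesPerSecond(double fps, double tickrate)
{
    int duplicates = 0;
    long lastTick = -1;
    for (int frame = 0; frame < (int)fps; ++frame) {
        long tick = (long)std::floor(frame / fps * tickrate); // newest completed tick
        if (tick == lastTick) ++duplicates;
        lastTick = tick;
    }
    return duplicates;
}

int main()
{
    std::printf("143 fps @ 128 tick: %d stale frames/s\n",
                duplicateFramesPerSecond(143.0, 128.0));      // prints 15
    std::printf("120 fps @ 128 tick: %d stale frames/s\n",
                duplicateFramesPerSecond(120.0, 128.0));      // prints 0
}
[/code]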

I would hope game developers would just start independent threads that update without any fixed timing interval and only store their latest data, and then poll those threads from a main thread at a set interval (fps cap), or with VSYNC at no fixed interval at all. That way the latest information is always available and the CPUs are used to their maximum potential. Although this 'wastes' CPU processing power, I think it enables the smoothest experience. The icing on the cake would be a proper triple buffering implementation with VSYNC, like in OpenGL, or of course G-SYNC driven by the game engine's own fps cap.
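A rough sketch of that 'only store the latest data' idea (my own, not taken from a real engine): the worker thread free-runs and keeps overwriting a shared slot, while the consumer grabs whatever is newest without ever waiting:

[code]
// "Latest value wins" mailbox between a free-running worker thread and the
// render thread (my own sketch). The reader never blocks and never sees
// anything older than the last completed update.
#include <atomic>

struct InputState { float mouseX, mouseY; };

class LatestValue {
    std::atomic<InputState*> slot_{nullptr};
public:
    void publish(const InputState& s) {
        delete slot_.exchange(new InputState(s));  // swap in the new state, drop the stale one
    }
    bool take(InputState& out) {
        InputState* p = slot_.exchange(nullptr);   // grab the newest state, if any
        if (!p) return false;
        out = *p;
        delete p;
        return true;
    }
};
[/code]

The input thread would just loop publish(poll()) as fast as it can, and the render thread calls take() once per frame, reusing the previous state if nothing new has arrived.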

So yeah, even though this story is quite huge: you could try to search for multicore rendering options and see if changing them improves input latency. Be warned though, it might decrease frame rates, or if the engine is 'properly' coded it might even increase input lag.
I like OneFrameThreadLag=False in Unreal; I do not know whether other games have these options or whether disabling them improves anything for you. Some people like smoothness more than responsiveness and vice versa, so just try out some stuff. Also, setting your pre-rendered frames/flip queue size to 1 in the driver helps with latency if your CPU can handle it.
