flood's input lag measurements

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
lexlazootin
Posts: 1251
Joined: 16 Dec 2014, 02:57

Re: flood's input lag measurements

Post by lexlazootin » 28 Feb 2015, 14:16

flood wrote:note to self: pick more random numbers for fps_max
Try not to go too random. CSGO (on my computer, anyway) likes certain fps values more than others and will try to "stick" to fps caps that are 1000 divided by a whole number, plus 1.

Here are ten screenshot results from an fps_max that follows that rule...

Low Settings: fps_max 126 (1000 / 8 + 1)

Avg FPS 124, Frame MS 8.0
Avg FPS 124, Frame MS 8.0
Avg FPS 124, Frame MS 8.1
Avg FPS 124, Frame MS 7.9
Avg FPS 125, Frame MS 7.9
Avg FPS 124, Frame MS 8.1
Avg FPS 124, Frame MS 8.1
Avg FPS 124, Frame MS 8.0
Avg FPS 124, Frame MS 8.0
Avg FPS 124, Frame MS 8.0

and ten results that don't...

Low Settings: fps_max 135

Avg FPS 130, Frame MS 7.4
Avg FPS 129, Frame MS 7.7
Avg FPS 125, Frame MS 8.0
Avg FPS 127, Frame MS 7.4
Avg FPS 128, Frame MS 8.2
Avg FPS 127, Frame MS 8.0
Avg FPS 126, Frame MS 8.1
Avg FPS 130, Frame MS 7.6
Avg FPS 127, Frame MS 7.8
Avg FPS 128, Frame MS 7.8

I have no idea why, but these numbers work best for me, anyway. And by the way, the Source engine accepts decimal places, so 1000 / 7 + 1 = fps_max 143.857142 works fine.
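For the curious, the caps that follow this pattern are easy to enumerate. A throwaway sketch (the helper name is mine, not a Source engine command):

```python
# Enumerate fps_max values of the form 1000 / n + 1, the pattern CS:GO is
# described above as "sticking" to. Helper name is made up for illustration.
def sticky_fps_caps(max_divisor):
    return [1000.0 / n + 1 for n in range(1, max_divisor + 1)]

caps = sticky_fps_caps(10)
# n = 8 gives the fps_max 126 used above; n = 7 gives ~143.857142
```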

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: flood's input lag measurements

Post by flood » 28 Feb 2015, 14:56

stirner wrote:Cool.

Any chance you can be bothered to throw different pre-render values into the mix? I.e. pre-render 1-3 uncapped with mqm 0-2 vs. pre-render 1-3 capped with mqm 0-2. The pre-render setting "use application setting" would be interesting as well, with differing queue modes.

I find it strange that there's more variance to the single-threaded input lag, because frame rendering times get more consistent with it (something you should maybe mention - even if the difference in input lag is insignificant, single-threaded rendering is less prone to produce microstuttering).
well, last time I tried, prerender didn't affect anything unless vsync was on

variance in input lag is dominated by fps. see my MSPaint pictures. yeah, there's the question of microstuttering but I haven't noticed it. probably if I'm actually moving around the map there would be a little bit of it. maybe it's possible to check with a Fraps benchmark

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: flood's input lag measurements

Post by flood » 28 Feb 2015, 15:01

btw I'm travelling for a week... so nothing's going to happen until I'm back
lexlazootin wrote:
flood wrote:note to self: pick more random numbers for fps_max
Try not to go too random. CSGO (on my computer, anyway) likes certain fps values more than others and will try to "stick" to fps caps that are 1000 divided by a whole number, plus 1.

I have no idea why, but these numbers work best for me, anyway. And by the way, the Source engine accepts decimal places, so 1000 / 7 + 1 = fps_max 143.857142 works fine.
I notice this too, but I don't think it matters. I just wanted to make sure that the frames are out of sync with the USB polling.

stirner
Posts: 74
Joined: 07 Aug 2014, 04:55

Re: flood's input lag measurements

Post by stirner » 28 Feb 2015, 15:10

Just saying that when frame rendering times are more consistent, the lag results should be as well.

I'm actually most interested in external frame limiters. What Chief said about that makes sense: the internal cap is synced with the overall rendering process, while an external limiter potentially forces a sleep after input has already been collected. In my cam test it did show up as lag-inducing, but frametimes still look much more stable with external limiters. In FRAPS frametimes, that is.
Would be interesting to know at what stage of the process FRAPS and the Source in-game monitor measure FPS, though.

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: flood's input lag measurements

Post by spacediver » 28 Feb 2015, 20:12

flood wrote:oh, in that picture the frame rendering time means from the start of CPU processing of input to when the GPU finishes drawing

I'm considering the uncapped case, where that time is equal to 1/fps
Trying to put this all together but am struggling. For now, can you just tell me what GPU-limited means? Does it have something to do with capped vs. uncapped?

Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: flood's input lag measurements

Post by Sparky » 28 Feb 2015, 20:58

spacediver wrote:
flood wrote:oh, in that picture the frame rendering time means from the start of CPU processing of input to when the GPU finishes drawing

I'm considering the uncapped case, where that time is equal to 1/fps
Trying to put this all together but am struggling. For now, can you just tell me what GPU-limited means? Does it have something to do with capped vs. uncapped?
First the CPU works on a frame, then the GPU, then the monitor. Say the CPU is fast enough to finish working on a frame in 5 ms, the GPU takes 10 ms, and the monitor takes 16.6 ms (v-sync on a 60 Hz monitor). In this case, the GPU finishes the frame 6.6 ms before the monitor is ready to display it, so it just waits with the finished frame, and that 6.6 ms gets tacked on to the input latency. The same thing happens between the CPU and the GPU: in the example above, the CPU would be done with the frame a full 11.6 ms before the GPU was ready to accept it, adding a lot of latency.

If you removed the monitor as a bottleneck (variable refresh or v-sync off), you would be limited by the GPU, because it takes 10 ms on each frame. This means the CPU is sitting on a frame it's done with for 5 ms before handing it off to the GPU. (The framerate would be 100 fps in this example.)

Framerate caps, implemented correctly, add a delay before the CPU starts working on a frame. So instead of finishing a frame and then waiting 5 ms before sending it to the GPU, you wait before you start working on the frame, so that you can include fresher input data. If you cap the framerate at 95 fps, the CPU waits 5.5 ms before it starts working on the frame, so the frame takes 10.5 ms in total. Now when the CPU finishes working on that frame, the GPU is already done with the previous one, so it can get to work immediately, lowering the input lag by about 4.5 ms despite the slightly lower framerate.
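The 20 ms vs 15 ms numbers in this example can be sanity-checked with a little arithmetic. A sketch under the same assumptions (5 ms CPU, 10 ms GPU, v-sync off; variable names are mine):

```python
# Back-of-the-envelope check of the GPU-limited example above.
CPU_MS = 5.0    # CPU simulation + draw-call time per frame
GPU_MS = 10.0   # GPU render time per frame (the bottleneck)

# Uncapped: the CPU finishes early and its frame queues until the GPU frees up.
queue_ms = GPU_MS - CPU_MS                    # 5 ms sitting on a finished frame
uncapped_lag_ms = CPU_MS + queue_ms + GPU_MS  # 20 ms input-to-frame-done

# Capped just under the GPU limit: the wait moves *before* input sampling,
# so the queue time disappears at the cost of a slightly longer frame period.
capped_lag_ms = CPU_MS + GPU_MS               # 15 ms
capped_period_ms = 1000.0 / 95.0              # ~10.5 ms per frame at fps_max 95
```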

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: flood's input lag measurements

Post by spacediver » 28 Feb 2015, 21:32

Thanks for the detailed reply. It's gonna take me a while to understand it (my brain finds this stuff incredibly hard to process for some reason!). But something's tripping me up at the beginning:

Sparky wrote:First the CPU works on a frame, then the GPU, then the monitor. Say the CPU is fast enough to finish working on a frame in 5 ms, the GPU takes 10 ms, and the monitor takes 16.6 ms (v-sync on a 60 Hz monitor).
Just so I understand, in the first 5 ms, the CPU has only worked on part of the frame, right? It hasn't calculated all the details yet. And then the GPU takes over, and takes an additional 10 ms to complete the frame. So 15 ms total for frame calculation time. Have I got this part right?

Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: flood's input lag measurements

Post by Sparky » 28 Feb 2015, 21:36

Yes, 15 ms of frame calculation, but if you're GPU-limited there's 20 ms of input lag (in these two steps, anyway). An analogy would be a checkout line: the framerate is the number of customers served in an hour, the calculation time is how long it takes to check out if nobody is in front of you, and the input latency is the time you spend waiting in line plus the time it takes to check out.
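The checkout analogy also shows why framerate (throughput) and input lag (latency) are distinct quantities. A toy version using the numbers from this example (variable names invented):

```python
# Checkout-line version of the GPU-limited pipeline: throughput is set by the
# slowest stage alone; latency is service time plus time spent waiting in line.
service_ms = 15.0   # CPU 5 ms + GPU 10 ms: checkout time with nobody ahead
wait_ms = 5.0       # time the finished CPU frame waits for the GPU

latency_ms = service_ms + wait_ms   # 20 ms experienced by each "customer"
throughput_fps = 1000.0 / 10.0      # 100 fps, set by the 10 ms GPU stage
```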

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: flood's input lag measurements

Post by spacediver » 28 Feb 2015, 22:27

Sparky wrote:First the CPU works on a frame, then the GPU, then the monitor. Say the CPU is fast enough to finish working on a frame in 5 ms, the GPU takes 10 ms, and the monitor takes 16.6 ms (v-sync on a 60 Hz monitor). In this case, the GPU finishes the frame 6.6 ms before the monitor is ready to display it, so it just waits with the finished frame, and that 6.6 ms gets tacked on to the input latency.
So at time=0, CPU begins working on frame
At time = 5, GPU begins working on frame
At time = 15 (when the GPU has finished working on the frame), the monitor begins drawing the frame (in the vsync condition).
At time = 31.6, the last line of the frame is complete.

Where did that 6.6 delay come from?

I know I'm making an error here, since frame time is not 31.6 ms, but it has to be 16.7 ms.


edit: ok I thought about it some more, the CPU and GPU are working on the next frame while the monitor is scanning the current frame. I think with this understanding I'll be able to tackle the issue now :)

however, I still don't see where the 6.6 ms figure comes in. Shouldn't it be 1.6 ms?

Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: flood's input lag measurements

Post by Sparky » 28 Feb 2015, 23:04

spacediver wrote:
however, I still don't see where the 6.6 ms figure comes in. Shouldn't it be 1.6 ms?
With v-sync on (60 Hz display), the display, CPU, and GPU all start working on a different frame at the same time, dictated by the monitor. So there would be 50 ms between when the CPU started working on a frame and when its last line is displayed. 31.66 ms of that is processing time (16.66 ms from the monitor, 10 ms from the GPU, and 5 ms from the CPU), and the rest is queue time (6.66 ms of the GPU waiting on the display, and 11.66 ms of the CPU waiting on the GPU). The 6.6 ms figure comes from subtracting the GPU render time from the display time.
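Put as arithmetic (a sketch of the breakdown above, same 5 / 10 / 16.66 ms figures):

```python
# Worked numbers for the three-stage v-sync pipeline described above.
CPU_MS, GPU_MS = 5.0, 10.0
REFRESH_MS = 1000.0 / 60.0                     # ~16.66 ms per refresh at 60 Hz

total_ms = 3 * REFRESH_MS                      # 50 ms: each stage gets a full refresh
processing_ms = CPU_MS + GPU_MS + REFRESH_MS   # ~31.66 ms of actual work
gpu_wait_ms = REFRESH_MS - GPU_MS              # ~6.66 ms GPU holds the finished frame
cpu_wait_ms = REFRESH_MS - CPU_MS              # ~11.66 ms CPU waits on the GPU
queue_ms = gpu_wait_ms + cpu_wait_ms           # ~18.33 ms of waiting in total
```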
