[programming] input lag in a barebones program

Talk to software developers and aspiring geeks. Programming tips. Improve motion fluidity. Reduce input lag. Come Present() yourself!
flood
Posts: 929
Joined: 21 Dec 2013, 01:25

[programming] input lag in a barebones program

Post by flood » 24 Dec 2013, 03:42

How much input lag do you expect to see in this seemingly innocuous SDL2 program with VSYNC enabled? All it does is draw a pattern that follows the cursor.

I will post some interesting results in the coming days.

Code: Select all

all:
	gcc -O2 -mconsole -o test test.c `sdl2-config --cflags --libs`

Code: Select all

#include <SDL.h>

int main(int argc, char *argv[])
{
	/* borderless fullscreen at the desktop resolution */
	SDL_Window *win	= SDL_CreateWindow("input lag test", 0, 0, 1024, 768, SDL_WINDOW_FULLSCREEN_DESKTOP);
	/* VSYNC renderer: SDL_RenderPresent() waits for the vertical blank */
	SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

	int mx = 0, my = 0;
	int running = 1;
	while (running) {
		/* drain pending events, keeping only the latest cursor position */
		SDL_Event e;
		while (SDL_PollEvent(&e)) {
			switch (e.type) {
			case SDL_MOUSEMOTION:
				mx = e.motion.x;
				my = e.motion.y;
				break;
			case SDL_MOUSEBUTTONDOWN:	/* any click (or quit request) exits */
			case SDL_QUIT:
				running = 0;
			}
		}

		/* gray background, green crosshair at the last-known cursor position */
		SDL_SetRenderDrawColor(ren, 150, 150, 150, 255);
		SDL_RenderClear(ren);
		SDL_SetRenderDrawColor(ren, 0, 255, 0, 255);
		SDL_RenderDrawLine(ren, mx, my - 10, mx, my + 50);
		SDL_RenderDrawLine(ren, mx - 10, my, mx + 50, my);

		SDL_RenderPresent(ren);
	}

	SDL_DestroyRenderer(ren);
	SDL_DestroyWindow(win);
	SDL_Quit();
	return 0;
}

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: [programming] input lag in a barebones program

Post by Chief Blur Buster » 24 Dec 2013, 13:21

Even though I may not know game programming as well as the pros do, I know the inner workings of displays better than most game programmers, so I look at things from a display perspective. With that knowledge, I can follow the whole-chain latency from code all the way to photons hitting eyeballs...

Your code is really simple. Its simplicity suggests approximately one refresh cycle of input latency, relative to the start of scan-out.

-- This is because the rendering runs virtually instantaneously (very rudimentary rendering), so Present() has near-zero render overhead.
-- As in the typical VSYNC ON situation for most APIs, I assume Present() renders, then waits for VSYNC, then returns practically right at the page flip (during the blanking interval). That means Present() returns at the beginning of a refresh.
-- This means the next input read occurs very early in the next refresh cycle (since your loop repeats early in the refresh cycle).
-- Therefore, the results of that input read won't be presented until after the current refresh cycle (the call to Present() waits out the current full refresh cycle before the new frame can be displayed).
-- When the next refresh cycle finally begins, scan-out starts at the top of the screen. That's a full frame cycle after the input read.
-- The bottom edge of the screen has even more lag: yet another full frame cycle of input lag on top of that.
-- In short, the display is scanning out the frame from the previous Present() call while the next call to Present() waits for that scan-out to finish. Present() then returns at the blanking interval, and the frame it submitted starts scanning out just as your while loop begins its next iteration (a quick way to verify this timing is sketched below).
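
A quick way to sanity-check this on your own system is to time how long Present() blocks. Here's a minimal SDL2 sketch along those lines (the loop count and the ~16ms expectation assume 60Hz and trivial rendering; exact numbers will vary by driver):

Code: Select all

#include <SDL.h>
#include <stdio.h>

/* Times how long SDL_RenderPresent() blocks with VSYNC on.
   With trivial rendering it should block for nearly a whole
   refresh (~16ms at 60Hz) and return right at the page flip. */
int main(int argc, char *argv[])
{
	SDL_Init(SDL_INIT_VIDEO);
	SDL_Window *win = SDL_CreateWindow("present timing", 0, 0, 1024, 768, SDL_WINDOW_FULLSCREEN_DESKTOP);
	SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
	double freq = (double)SDL_GetPerformanceFrequency();

	for (int i = 0; i < 300; i++) {	/* roughly 5 seconds at 60Hz */
		SDL_PumpEvents();	/* keep the window responsive */
		SDL_SetRenderDrawColor(ren, 150, 150, 150, 255);
		SDL_RenderClear(ren);

		Uint64 t0 = SDL_GetPerformanceCounter();
		SDL_RenderPresent(ren);
		Uint64 t1 = SDL_GetPerformanceCounter();
		printf("Present() blocked for %.2f ms\n", (double)(t1 - t0) * 1000.0 / freq);
	}

	SDL_DestroyRenderer(ren);
	SDL_DestroyWindow(win);
	SDL_Quit();
	return 0;
}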

So assuming 60Hz refresh (1/60sec = 16.7ms), and using a CRT, with a zero-latency analog connection (VGA), your input lag would be 16.7ms (1/60sec) for the top edge of the screen and 33.3ms (2/60sec) for the bottom edge of the screen. If you're using one of the fastest LCDs on the market (modern ASUS/BENQ 120Hz/144Hz TN panel, realtime scanout without frame buffering, and 1-2ms transition time), add about two to three milliseconds extra on top of this.

If your code can execute a blocking busy-wait (e.g. a roughly 15ms pause after Present() returns, so that you are within about 2ms of the next VSYNC) you can reduce your input latency by practically a full frame. Realistically, you can easily get down to approximately 1/10th of a refresh cycle of input latency before the start of scan-out. Basically, you wait until the raster is near the bottom of the current refresh, then read input, quickly render the next frame, and present it to the display (with consequently fresher input reads). In that situation, the absolute minimum input lag falls to almost 0ms for the top edge of the screen and almost 16.7ms for the bottom edge. If Present() returns almost instantly, you made it in time for VSYNC and successfully reduced input lag via the "wait-till-right-before-VSYNC-before-reading-input" technique. If Present() takes a full refresh cycle to return, you've missed VSYNC.

The problem with waiting until right before VSYNC to read input and render is that you might miss VSYNC. But if your render time is predictable (e.g. in certain emulators), this is a great technique for massively reducing VSYNC ON input latency (without getting a GSYNC monitor). Certain emulators such as WinUAE have a command-line option to pull this feat off and reduce the input latency of VSYNC ON. It is far less practical for games with highly variable frame rates, though. "Just-in-time" rendering right before VSYNC is one way to reduce VSYNC ON input latency, but without a forgiving variable-refresh-rate monitor, missing VSYNC gives you an instant penalty of one full refresh cycle of input lag. That's the fine line -- rendering at the very last minute before VSYNC is a gamble.
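
Here's a minimal sketch of the just-in-time idea, in the same SDL2 style as the program above. The 14ms budget is just an assumed starting point for a 60Hz display with a near-instant render; in practice you would calibrate it against your measured render time and refresh period, and back it off whenever Present() starts blocking for a full refresh (i.e. you missed VSYNC):

Code: Select all

#include <SDL.h>

/* "Just-in-time" rendering under VSYNC ON: after Present() returns at the
   blanking interval, wait out most of the refresh period, THEN read input
   and render, so the input sample is as fresh as possible when the next
   scan-out begins. budget_ms is an assumption to be tuned per system. */
int main(int argc, char *argv[])
{
	SDL_Init(SDL_INIT_VIDEO);
	SDL_Window *win = SDL_CreateWindow("jit vsync sketch", 0, 0, 1024, 768, SDL_WINDOW_FULLSCREEN_DESKTOP);
	SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

	const double budget_ms = 14.0;	/* assumed: ~16.7ms refresh minus render time and safety margin */
	double freq = (double)SDL_GetPerformanceFrequency();

	int mx = 0, my = 0;
	int running = 1;
	Uint64 flip = SDL_GetPerformanceCounter();	/* Present() return time ~= last page flip */

	while (running) {
		/* burn most of the refresh period before touching input */
		while ((double)(SDL_GetPerformanceCounter() - flip) * 1000.0 / freq < budget_ms)
			SDL_Delay(0);	/* yield; use a tighter busy-wait for sub-ms precision */

		SDL_Event e;
		while (SDL_PollEvent(&e)) {
			if (e.type == SDL_MOUSEMOTION) { mx = e.motion.x; my = e.motion.y; }
			if (e.type == SDL_MOUSEBUTTONDOWN || e.type == SDL_QUIT) running = 0;
		}

		SDL_SetRenderDrawColor(ren, 150, 150, 150, 255);
		SDL_RenderClear(ren);
		SDL_SetRenderDrawColor(ren, 0, 255, 0, 255);
		SDL_RenderDrawLine(ren, mx, my - 10, mx, my + 50);
		SDL_RenderDrawLine(ren, mx - 10, my, mx + 50, my);

		SDL_RenderPresent(ren);	/* should block only briefly if we made it before VSYNC */
		flip = SDL_GetPerformanceCounter();	/* a long block in Present() means we missed VSYNC */
	}

	SDL_DestroyRenderer(ren);
	SDL_DestroyWindow(win);
	SDL_Quit();
	return 0;
}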

1980's History Note: Did you know? Old arcade games and 8-bit computer games used to read input at the very last minute. They had very low VSYNC ON input latency because they could read and process input during the blanking interval, pre-position sprites and scrolling registers, or write a small amount of character-set-based data (e.g. a few bytes of new graphics data at the edge of the screen during scrolling). You could even COUNT the number of machine-language instruction cycles and successfully fit everything into the vertical blanking interval (about 1 millisecond on NTSC televisions), so input reads were ultra-fresh at the beginning of display scan-out. That was back in the 8-bit Atari and Nintendo era, the golden days of arcade games, and all of them ran VSYNC ON. So the input lag of Super Mario Brothers was never bad, even though it essentially always ran VSYNC ON. In fact, input was sometimes read inside raster interrupts, and sprites were pre-positioned just before the scan-out reached them. Input lag was sometimes only a few hundred microseconds in some ultra-highly-optimized 1980's games! Today, 3D graphics have difficult-to-predict render times, so we've gone to buffering schemes, which unfortunately add input lag, often a full frame's worth. In the 21st century, VSYNC ON has a bad reputation among competitive gamers because of that input lag. The few remaining programmers who still closely understand rasters (e.g. from Atari 2600 programming or 1980s raster-interrupt programming) will be better positioned to understand whole-chain input latency issues than the average 21st-century 3D game programmer who never played on CRTs and has no concept of how displays are refreshed.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: [programming] input lag in a barebones program

Post by flood » 04 Jan 2014, 18:12

Yeah, 16.7ms at the top of the frame and 33.3ms at the bottom is the input lag one would expect from a double-buffered system. Reaching those values in practice... eh, that's a different story. For example, on my laptop I'm getting 4 frames of lag!
When I get the time, I'll post some data for various combinations of gfx hardware (nvidia/intel), gfx drivers, OS (Windows, Linux + X), and various other settings.
For now, I will determine input lag by seeing how far the pattern lags behind the OS cursor. In the past I've checked with a 60fps camera and a CRT screen that the cursor (in Windows or Linux+X) has an input lag of 0 at the top of the frame and 16.7ms at the bottom.
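
To turn that spatial gap into a time, divide by the cursor's sweep speed (assuming I move the mouse at a roughly constant rate). A hypothetical back-of-the-envelope helper:

Code: Select all

#include <stdio.h>

/* Hypothetical conversion: the pixel gap between the OS cursor and the
   rendered pattern, divided by the sweep speed, gives a lag estimate. */
static double lag_ms(double gap_pixels, double sweep_speed_px_per_s)
{
	return 1000.0 * gap_pixels / sweep_speed_px_per_s;
}

int main(void)
{
	/* e.g. a ~67 pixel gap while sweeping at ~1000 px/s */
	printf("%.1f ms (about 4 refresh cycles at 60Hz)\n", lag_ms(67.0, 1000.0));
	return 0;
}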

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: [programming] input lag in a barebones program

Post by Chief Blur Buster » 04 Jan 2014, 19:26

flood wrote:Yeah, 16.7ms at the top of the frame and 33.3ms at the bottom is the input lag one would expect from a double-buffered system. Reaching those values in practice... eh, that's a different story. For example, on my laptop I'm getting 4 frames of lag!
When I get the time, I'll post some data for various combinations of gfx hardware (nvidia/intel), gfx drivers, OS (Windows, Linux + X), and various other settings.
For now, I will determine input lag by seeing how far the pattern lags behind the OS cursor. In the past I've checked with a 60fps camera and a CRT screen that the cursor (in Windows or Linux+X) has an input lag of 0 at the top of the frame and 16.7ms at the bottom.
I have developed a new method of measuring input lag from button to photons, involving a hardwired LED (a gaming mouse modified with the LED hardwired to the left mouse button) and a 1000fps high-speed camera. The number of frames (milliseconds) between the LED reaction and the screen reaction is the whole-chain input latency. This is going to be featured in the upcoming GSYNC Article Part #2, currently about 80% complete. Stay tuned.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Haste
Posts: 326
Joined: 22 Dec 2013, 09:03

Re: [programming] input lag in a barebones program

Post by Haste » 04 Jan 2014, 19:31

Chief Blur Buster wrote:I have developed a new method of measuring input lag from button to photons, involving a hardwired LED (a gaming mouse modified with the LED hardwired to the left mouse button) and a 1000fps high-speed camera. The number of frames (milliseconds) between the LED reaction and the screen reaction is the whole-chain input latency.
Nice! That's gonna be very handy. Great work!
Monitor: Gigabyte M27Q X

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: [programming] input lag in a barebones program

Post by flood » 05 Jan 2014, 00:00

Chief Blur Buster wrote:
flood wrote:Yeah, 16.7ms at the top of the frame and 33.3ms at the bottom is the input lag one would expect from a double-buffered system. Reaching those values in practice... eh, that's a different story. For example, on my laptop I'm getting 4 frames of lag!
When I get the time, I'll post some data for various combinations of gfx hardware (nvidia/intel), gfx drivers, OS (Windows, Linux + X), and various other settings.
For now, I will determine input lag by seeing how far the pattern lags behind the OS cursor. In the past I've checked with a 60fps camera and a CRT screen that the cursor (in Windows or Linux+X) has an input lag of 0 at the top of the frame and 16.7ms at the bottom.
I have developed a new method of measuring input lag from button to photons, involving a hardwired LED (a gaming mouse modified with the LED hardwired to the left mouse button) and a 1000fps high-speed camera. The number of frames (milliseconds) between the LED reaction and the screen reaction is the whole-chain input latency. This is going to be featured in the upcoming GSYNC Article Part #2, currently about 80% complete. Stay tuned.
Sounds pretty cool.

I've tried to do the same using the sound of button clicks in video recordings, but I was limited by my iPhone 5's 60fps recording. Which camera are you using for 1000fps?

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: [programming] input lag in a barebones program

Post by Chief Blur Buster » 05 Jan 2014, 02:01

flood wrote:I've tried to do the same using the sound of button clicks in video recordings, but I was limited by my iPhone 5's 60fps recording. Which camera are you using for 1000fps?
I am using the Casio EX-ZR200, purchased off eBay for under $250. You can also use the GoPro Hero3 (240fps), Casio EX-FC200S (1000fps), Casio EX-ZR200 (1000fps), Casio EX-F1 (1000fps), Fuji HS10 (1000fps), Nikon 1 J1 (1200fps), Nikon 1 J2 (1200fps), or Nikon 1 V2 (1200fps).

Cheap high-speed video (in the $100-$500 range) is very low resolution (postage-stamp sized), but it's perfectly fine for these kinds of tests, like the high-speed video in the AnandTech input lag article. The LED is a more reliable start indicator than video of a human pressing the button, and more reliable than audio, which can be out of sync with the video.

With high-speed video now cheap, I highly recommend game developers begin doing high-speed video tests of their game engines. GSYNC Article Part #2 will illustrate why. I found some engines have high variability (gunshot-to-gunshot latency varying by +/-20ms in some engines, while varying by only +/-5ms in others). I would imagine that real-life guns have random lag (e.g. some bullets may fire a millisecond later, others a millisecond sooner, because of mechanical/chemical variances), and some games do simulate that behavior, but most don't deliberately insert random input lag for gunshots -- yet the random input lag variability exists anyway.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


HeLLoWorld
Posts: 33
Joined: 07 Jan 2014, 18:44

Re: [programming] input lag in a barebones program

Post by HeLLoWorld » 07 Jan 2014, 21:40

On topic, SDL has (or had) multiple back-end renderers on Windows.
I wouldn't say an SDL hello-world is the leanest display loop you could do on Windows; does it use GDI, old or new DirectX (9 or 10/11), or DirectDraw? I think TinyPTC even had something related to Win32 multimedia.
Plus, even if it uses the best DirectX path, the implementation is maybe not the leanest... maybe threads to read inputs, things like that, who knows.
Anyway, I'd rather use a recent DirectX SDK hello-world stripped of every unnecessary bit, read the mouse in raw HID mode, check the nvcpl settings for max render-ahead frames / threaded optimization / maybe AA (I think I read it could matter... :) ), check for all possible resident annoyances (oh, and run fullscreen exclusive of course; windowed apparently will always be far less realtime), and only then conclude that the platform (OS/runtime/drivers) adds 3 frames of delay. But maybe you did all this, in which case I said nothing :)

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: [programming] input lag in a barebones program

Post by flood » 08 Jan 2014, 17:42

HeLLoWorld wrote:On topic, SDL has (or had) multiple back-end renderers on Windows.
I wouldn't say an SDL hello-world is the leanest display loop you could do on Windows; does it use GDI, old or new DirectX (9 or 10/11), or DirectDraw? I think TinyPTC even had something related to Win32 multimedia.
Plus, even if it uses the best DirectX path, the implementation is maybe not the leanest... maybe threads to read inputs, things like that, who knows.
Anyway, I'd rather use a recent DirectX SDK hello-world stripped of every unnecessary bit, read the mouse in raw HID mode, check the nvcpl settings for max render-ahead frames / threaded optimization / maybe AA (I think I read it could matter... :) ), check for all possible resident annoyances (oh, and run fullscreen exclusive of course; windowed apparently will always be far less realtime), and only then conclude that the platform (OS/runtime/drivers) adds 3 frames of delay. But maybe you did all this, in which case I said nothing :)
Well damn, I think you're right that there is significant overhead with SDL, because I'm getting about 4 frames of delay...
Unfortunately I am not familiar with Windows programming and don't have the time right now to figure out how to make a barebones Windows program. But I will do tests on Linux with both SDL and raw OpenGL/GLX/X programs.
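
In the meantime, here's a small sketch of how to check (or force) which back-end SDL_CreateRenderer actually picked, using SDL2's render-driver hint; the available names (e.g. "direct3d", "opengl", "software") depend on the platform and how SDL was built:

Code: Select all

#include <SDL.h>
#include <stdio.h>

/* Ask SDL which render back-end it picked, and optionally force one
   via SDL_HINT_RENDER_DRIVER (must be set before creating the renderer). */
int main(int argc, char *argv[])
{
	SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");	/* assumption: whichever back-end you want to test */

	SDL_Init(SDL_INIT_VIDEO);
	SDL_Window *win = SDL_CreateWindow("renderer check", 0, 0, 640, 480, 0);
	SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

	SDL_RendererInfo info;
	if (SDL_GetRendererInfo(ren, &info) == 0)
		printf("SDL picked renderer: %s\n", info.name);

	/* list every back-end compiled into this SDL build */
	int n = SDL_GetNumRenderDrivers();
	for (int i = 0; i < n; i++) {
		SDL_RendererInfo di;
		SDL_GetRenderDriverInfo(i, &di);
		printf("available driver %d: %s\n", i, di.name);
	}

	SDL_DestroyRenderer(ren);
	SDL_DestroyWindow(win);
	SDL_Quit();
	return 0;
}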

HeLLoWorld
Posts: 33
Joined: 07 Jan 2014, 18:44

Re: [programming] input lag in a barebones program

Post by HeLLoWorld » 15 Jan 2014, 17:33

Well, wait, I didn't say that was the case, I just said it was possible :)
I think some developers attain less lag, so I'm just thinking about what could make a difference.
More testing would be needed.
You test the lag between the mouse cursor and the application by filming the screen at 60Hz and counting the 60Hz frames between the cursor moving and the application responding, if I understand correctly? That's not a bad idea, assuming the mouse cursor under modern Windows with Aero still has 0 lag, which is probably the case. With hardware-accelerated cursor rendering, I'm not even sure the mouse coordinates go through the CPU before the framebuffer is updated. Information on this seems to be scarce, although it is very interesting from an OS architecture point of view.
