a program to measure your visual reaction time



Post by ad8e » 17 Jun 2022, 10:58

This should be faster than the other visual benchmarks.
Attachment: vsync copy.7z (2.24 MiB)
I get 240 ms on humanbenchmark.com and 155 ms here; you will likely see a smaller improvement. VRR (FreeSync, G-Sync) won't work, because I don't have a VRR monitor to test with. (If you turn VRR off, it'll work.)

If you put your face up against the monitor, with eyes right below the 20% tearline, you will get faster reactions (145 ms for me).

Esc will exit.

Result history will be in "reaction time results.txt".

Ignore the "threshold 10.0000" line; it's not important.

You can skip reading the rest of this post, it's science for nerds.

1. The input thread does nothing but wait for input. When input arrives, it records the time and passes it to another thread.
2. When the screen changes color, you see a tearline happen 20% down the screen. This is done through buffer swap times rather than by only modifying part of the frame.
3. Strict ordering: the stimulus timestamp is taken _before_ frame presentation, and the input timestamp _after_ input. There is no adjustment for latencies, even known latencies. (Also, the frame is pre-rendered, then swapped immediately after the timestamp is taken.)
4. The start time of the test is stateless (Poisson process).
5. The reaction test is triggered by a large luminance change across a large region.

Any reaction test that satisfies these 5 properties should be as good as this one. A reaction test missing any property will be worse, unless there's a VRR reaction test that carefully controls frame times. Rendering at 1000+ fps is not enough.

It uses a luminance change, gray-to-white, because afaik luminance changes are the fastest thing vision can detect. If you press Ctrl+3, you can try a gray-to-black transition instead. I had to try this because I don't remember seeing research on light-to-dark luminance changes. It gives 165 ms for me, which means gray-to-white produces reactions about 10 ms faster than gray-to-black. I only did 20 trials, so I'll estimate the confidence interval at [0 ms, 20 ms].

I didn't try a black-to-white luminance change because it hurts my eyes, even though it would be faster than gray-to-white.

The tearline mechanism means a 144 Hz monitor will have much less of an advantage over a 60 Hz monitor than it usually does.

I have not tested this on any system other than my own, so there's a good chance it won't work. As long as you see the 20% tearline close to where it should be, everything is good. If it's happening somewhere else or you don't see it, I'll need to figure something out.

There used to be talk about mouse position as a possible faster trigger than clicks. I implemented that, and found it to be extremely slow; my current jerk model takes 20-40 ms longer than a click. The input has high noise from tremor in the large arm muscles, and low acceleration because of the combined arm + hand + mouse weight, which together make sensitivity very low. My filters aren't good, but it would take a 4x improvement to beat a button press, which I doubt is possible with my current setup. So there is no basis for using mouse position as a fast reaction trigger. If you want, you can try it yourself: turn your mouse CPI to the highest sane setting. Drag in straight lines, then jerk in a perpendicular direction when the screen turns white. If it's turning red prematurely, drag upward with the right mouse button to raise the threshold. If it's registering the jerk too late, drag downward with the right mouse button to lower the threshold. If your straight line runs into an edge of the screen before the test triggers, too bad, try again. (WM_INPUT would normally fix this, but turning it on through GLFW gives mysterious dropped inputs which I couldn't figure out.)

Source code is included but won't compile. It has everything the exe uses, but I ripped it out of a larger project without attempting to fix references.

Credit: much of the power comes from the tearline mechanism, for which a lot of credit goes to Chief Blur Buster. He influenced or contributed to the following: Windows D3D APIs, scanline testing of performance and correctness on Nvidia, monitor porch background, large vertical porches, confirmation of skewing across different systems, GPU power management, GPU early wakeup, tearline quantums, tearline instability at the top, GLFW support, and the initial demo.

I don't know of any other reaction test messing with tearlines. Maybe you can use RTSS to force tearlines in another program, though I couldn't get RTSS to work on anything.

I use WaitMessage() in my input thread, and it seems to be fine on my system. The upside is not pinning a CPU core at 100%; the downside is whatever latency WaitMessage() adds, which is less than 1 ms in my testing.

This doesn't wake up the GPU early. It also doesn't change GPU power settings. Both of these have been measured to improve tearline stability in the past, but they didn't change response time when tested on my machine.
