Interesting project about mouse/ gamepad latency

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: Interesting project about mouse/ gamepad latency

Post by Sparky » 07 Jul 2019, 19:57

ad8e wrote:
Sparky wrote:
ad8e wrote:For visual reaction times with current monitor/mouse input tech, 160 ms average measurements are probably possible, and 140 ms averages are not.
I'm not seeing what current tech has to do with it. If you have a latency test program with v-sync off, high framerates, and the whole screen changing color, the full chain from mouse to monitor can be under 5 ms with careful component selection, and 10 ms with less careful component selection. Granted, this does not apply to a browser-based test.
The mouse is the obstacle. If you check one of the two research articles, either https://www.semanticscholar.org/paper/S ... b648247bb7 or https://www.researchgate.net/publicatio ... till_valid, you can cut off about 30 ms with different input systems, and even more if you're willing to use invasive techniques (EMG).
There are off-the-shelf mice with average electrical click latency under 1 ms (including USB polling interval), maybe 3 ms if you also include mechanical actuation time. Yes, there are also mice with 30 ms+ click latency, but that's not a limitation of technology, just poor engineering.

The paper in this thread measures the Logitech G5 at 2.2 ms absolute: https://epub.uni-regensburg.de/40182/1/ ... ersion.pdf
And relative latency measurements put several mice faster than that: https://www.overclock.net/forum/375-mic ... piled.html

The theoretical limit for 1 kHz polling is 0.5 ms average and 1 ms maximum (electrical). That's a practical limitation on mouse latency, though not an absolute limit, as there are commercial mice that overclock to 2 kHz, and I've seen prototypes up to 8 kHz.
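As a quick sanity check on those polling numbers, here is a minimal sketch (an illustration added for clarity, not from the original post), assuming the click lands at a uniformly random point within a poll interval:

Code:

# Added latency from USB polling alone, assuming the click arrives at a
# uniformly random point within a poll interval.
poll_latency_ms <- function(rate_hz) {
  interval_ms <- 1000 / rate_hz
  c(average = interval_ms / 2, maximum = interval_ms)
}
poll_latency_ms(1000)  # 0.5 ms average, 1 ms maximum
poll_latency_ms(8000)  # 0.0625 ms average, 0.125 ms maximum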

ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 07 Jul 2019, 20:04

Chief Blur Buster wrote:Enough tests have been done to show that some game stimuli (especially bright full-screen flashes) are not necessarily identical to Olympic pistol gunshot stimuli. There is likely a fuzzy zone of multi-ten milliseconds that is poorly studied, because of some amplified game-specific stimuli, and the ease of pressing a mouse button less than a millimeter, versus a VHS-quality high-speed camera pointed at an athlete's foot.

We need more researchers on this, who understand to account for the “accidentally beamraced” A+B+C+D test situation.

Do you have ResearchGate credentials? Would like to see your published work.
Reaction to audio stimuli is faster than to the fastest visual stimuli. Olympic race reaction times are measured under circumstances that are mainly relevant only to Olympic races (fast stimuli, slow detection). The Olympic results aren't applicable to much, although the research that comes out of them is more general.

I think most researchers don't care since even with the right knowledge, there's no clear way to use it to improve society. My interest in this is more in building correct systems. What the top competitors are able to achieve is not really relevant to me, since I'm sitting here claiming that I can shave 20 ms from their reaction times and I'm not doing it. Many competitors also often use poor equipment out of ignorance, so the ability to improve latency is open to them and they're not doing it either. Currently, the main barrier to lower latency is dissemination of information rather than lack of research.

I'm just a student and have not published anything. I don't think anyone puts their stuff on ResearchGate; it's just an aggregator that shows up high on google rankings.

Tearing is a little questionable...as you know, if you can tear 7 times, then you might as well render 7 strips rather than the whole screen 7 times. And if you're doing that, you should use single-buffered mode rather than double-buffered mode, otherwise you'll have to render each strip twice. But when I tried, glfw didn't work properly with Vsync off and single-buffered mode, and diagnosing the problem further indicated that it was not glfw's fault but Windows 8.1's. So there are some obstacles that are hard to work around.
Chief Blur Buster wrote:choose a display that has subrefresh latency ability (using VSYNC OFF lag measurement metric, aka first-anywhere metric, rather than Leo Bodnar scanout-latency-affected measurement metric)
When I tried it, the top-to-bottom pixel times seemed to be pretty linear with position, other than the vblank interval which you know more about than me. Other than different monitors having different vblank intervals, shouldn't the pixel-in-center method still give correct information over the whole display, by linear extrapolation?
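To make that extrapolation concrete, here is a minimal sketch (an illustration added for clarity; the 144 Hz refresh rate, 5% blanking fraction, and 5 ms center lag are placeholder assumptions, not measurements):

Code:

# Estimate lag at any vertical screen position from a center-pixel measurement,
# assuming a synchronous top-to-bottom scanout (linear in position) plus a VBI.
lag_at_position <- function(center_lag_ms, y_fraction, refresh_hz = 144, vbi_fraction = 0.05) {
  active_ms <- (1000 / refresh_hz) * (1 - vbi_fraction)  # time spent scanning visible lines
  center_lag_ms + (y_fraction - 0.5) * active_ms         # y_fraction: 0 = top, 1 = bottom
}
lag_at_position(5, 0)  # top of screen: updates ~3.3 ms earlier than the center measurement
lag_at_position(5, 1)  # bottom of screen: ~3.3 ms later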
Sparky wrote:There are off-the-shelf mice with average electrical click latency under 1 ms (including USB polling interval), maybe 3 ms if you also include mechanical actuation time.
The papers address this point. Relative to a mouse with 0 ms latency, the systems in the research paper will have negative latency. They are better than what you consider to be "perfect", since they remove some human-originated latency.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 07 Jul 2019, 20:22

Blur Busters is interested in commissioning more researchers.... Bonus if you're a university researcher who is also immersed in the esports community, and knows how to write both in a journal tone (for the peer-reviewed researchers) and in an easier popular-science tone (for the gamers who share) -- inquire within [email protected]

I certainly like to improve on what's already published. I was recently mentioned in an NVIDIA paper, page 2, and have been working with others behind the scenes. What's already published is nowhere near enough and needs more research.

P.S. Marwan just told me he'll join this thread later this week. Healthy debate is welcome to heavily pick apart the latency chain to understand various error margins better.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 07 Jul 2019, 20:29

ad8e wrote:
Chief Blur Buster wrote:choose a display that has subrefresh latency ability (using VSYNC OFF lag measurement metric, aka first-anywhere metric, rather than Leo Bodnar scanout-latency-affected measurement metric)
When I tried it, the top-to-bottom pixel times seemed to be pretty linear with position, other than the vblank interval which you know more about than me. Other than different monitors having different vblank intervals, shouldn't the pixel-in-center method still give correct information over the whole display, by linear extrapolation?
Some displays have scan-converting electronics, especially when panel scanout velocity or direction diverges from cable scanout. I know that some panels are fixed-horizontal-scanrate (e.g. BenQ XL2540 240 Hz and XL2735 1440p) while some panels are variable-scanrate (e.g. BenQ XL2720Z 144 Hz @ 1080p). That's why the XL2540 has poor 60 Hz lag: it framebuffers the slower-scanning 1/60 sec cable signal before the fast 1/240 sec scanout.

Some displays scan in a different direction (bottom to top), or require full-framebuffer preprocessing, so they have to framebuffer the full refresh cycle before beginning scanout. RTINGS found a few HDTVs that scanned bottom-to-top. Additionally, I know all DLP projectors and plasma displays split a refresh cycle into multiple subfields (although the two technologies do it very differently, each refresh cycle is still effectively split up into lower-bit-depth dithered subfields) -- so DLP/plasma mandatorily framebuffers a full refresh cycle before beginning the first subfield output.
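To put a rough number on that kind of framebuffering, here is a minimal sketch under a deliberately simplified worst-case assumption (the display buffers the entire incoming cable frame before its own fast scanout begins; real electronics may start partway through):

Code:

# Approximate extra lag before the first output pixel when a scan-converting
# display buffers the whole incoming cable frame before starting panel scanout.
buffered_scanout_lag_ms <- function(cable_hz) {
  1000 / cable_hz  # time spent receiving the full cable frame
}
buffered_scanout_lag_ms(60)   # ~16.7 ms for a 60 Hz input into a fast fixed-scanrate panel
buffered_scanout_lag_ms(120)  # ~8.3 ms for a 120 Hz input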

If the panel is TN, the GtG lag is statistically insignificant (it becomes more of a visual complaint than a lag complaint -- like overshoot artifacts, e.g. a 1-pixel bright overshoot trail at 2000 pixels/sec from imperfect 0.5 ms overdrive -- still human-visible ghosting).

Just another error margin item to pay attention to.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 07 Jul 2019, 20:45

ad8e wrote:I think most researchers don't care since even with the right knowledge, there's no clear way to use it to improve society.
There's demand now. The esports industry is a billion-dollar industry primed to grow 7x from 2019 to 2029. Riding the wave of support from many esports players, Blur Busters is now my full-time job, no longer a hobby. So, while the benefit to society may be questionable, Blur Busters is one of the few sites on the Internet willing to fund this kind of research, since at least a portion of the smarter esports players are extremely interested in it.

I'd love Blur Busters research to be even more extensive and cover more bases to avoid reader misunderstandings. I realize misunderstandings can be common -- for example, GSYNC 101 is so widely cited in some esports communities that I see situations where people accidentally think the GSYNC 101 "cap 3 fps below" tip also applies to VSYNC OFF.
ad8e wrote:My interest in this is more in building correct systems. What the top competitors are able to achieve is not really relevant to me, since I'm sitting here claiming that I can shave 20 ms from their reaction times and I'm not doing it. Many competitors also often use poor equipment out of ignorance, so the ability to improve latency is open to them and they're not doing it either.
That's definitely true! Not everyone is trying. But the community is so large that some actually do. They're vested in research, even if it's just milliseconds-nitpicking -- enough that we at Blur Busters feel the demand for more research. While budgets aren't big, Blur Busters does now pay for some commissioned research.

Societal benefit may be marginal, but this has a lot of spinoff effects in improving motion technology, like variable refresh rate -- understanding the mathematical milliseconds game of reducing stutter by keeping gametime in sync with photontime, free of the constraint of the fixed refresh metronome. A different science than latency, but still a science of the human-visible millisecond nonetheless. And 240 Hz has now been found to reduce eyestrain for some people, like this post and a dozen others I've helped. Being named Blur Busters means we're one of the few sites to attract people with unusual interests. Although we've expanded well beyond motion blur into anything that improves motion or latency, the namesake makes us a high-Hz research magnet now. Not all of our research is perfect, but we are looked upon as trailblazers willing to look at these kinds of things. Over the years, I have visited several vendors who wanted my lag/blur/high-Hz skills one way or another (Blur Busters has a little-known consulting arm -- services.blurbusters.com), so some of the funds Blur Busters earns from this are recirculated into extra R&D in the Blur Busters topic sphere.
ad8e wrote:Currently, the main barrier to lower latency is dissemination of information rather than lack of research.
Information dissemination can be an issue -- yup
ad8e wrote:Tearing is a little questionable...as you know, if you can tear 7 times, then you might as well render 7 strips rather than the whole screen 7 times. And if you're doing that, you should use single-buffered mode rather than double-buffered mode
Yes. Much more efficient.

But games don't bother doing that. The only way to accidentally beamrace existing games is to blast framerates far in excess of the refresh rate, to essentially brute-force a game into de facto beamraced latencies.

My point is, you can brute-force most PC games into de facto beamraced frameslices, on the condition that you're able to meet the A+B+C+D requirement. That's it: the flexibility to brute-force an existing game into sub-refresh latency, even though the game was never designed to beamrace.

Some game engines certainly have malfunctions at excess frame rates, though that's a per-game factor.

Single buffer vs. double buffer effectively doesn't matter for existing games if you're simply brute-forcing sheer framerate as a workaround to force an existing game to beamrace, if that is the goal. One buffer is currently being scanned out (as a frameslice) while the other buffer is being rendered into in full. The goal is not to redesign the game to do proper beamracing; the point is that various existing games (Quake engines, Source engines) can already be brute-forced into a de facto stream of beamraced frameslices, simply because of today's overkill GPUs running older engines -- and not all lag research fully understands VSYNC OFF lag mechanics versus VSYNC ON lag mechanics. For synchronous cable=scanout displays, VSYNC OFF equalizes average lag for top/center/bottom, while VSYNC ON has a mandatory lag gradient for top/center/bottom. If you do single-buffer, you have to reliably render below the current frameslice (ahead of the raster), and that's orders of magnitude more complex than a videogame author wants to bother with. So, to avoid needing the game developer to even know what beamracing is, we simply brute-force the framerate, for mathematical simplicity, since at high enough frame rates a random position in the scanout always has sub-refresh-age scanlines available.

Saying "Tearing is a little questionable...as you know, if you can tear 7 times, then you might as well render 7 strips rather" is an analytical response based on shoulda-coulda, but need to consider the existing situation: How do you reliably brute-force a game into achieving subrefresh latency.... We already know certain Quake-engines and Source-engines can do that reliably (Ingredients: A+B+C+D ... and then voila regardless of system! You still have to choose the right input mouse and the right output display, but A+B+C+D allows you to guarantee system-based subrefresh latency for some engines), so that is the existing-engine goal, not the goal of engine-designed-for-it goal.

There can be two buffers in VSYNC OFF simply because one buffer is being scanned out (as a frameslice) while the other is being rendered into. And since the total of those two frameslices added together can be less than one refresh cycle, at frame rates more than twice the refresh rate we're still achieving subrefresh latencies. Just add more overkill to make many existing game engines achieve subframe latencies (assuming the engine scales properly into that territory -- bugs at untested framerates can happen).
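To put rough numbers on the brute-force approach, here is a minimal sketch (added for illustration, assuming VSYNC OFF on a synchronous-scanout display and ignoring render and input overhead):

Code:

# With VSYNC OFF, the newest frameslice hitting the scanout is, on average,
# about half a frametime old, regardless of the panel's refresh rate.
frameslice_age_ms <- function(fps) {
  frametime_ms <- 1000 / fps
  c(average = frametime_ms / 2, worst = frametime_ms)
}
frameslice_age_ms(1000)  # ~0.5 ms average -- sub-refresh even on a 60 Hz panel
frameslice_age_ms(300)   # ~1.7 ms average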

Should game developers become beamrace-aware? Probably not. Even most esports players don't care at these levels. But it is relevant to reaction-time testing, because we're already playing accidentally-beamraced videogames unbeknownst to us, thanks to the sheer overkill. It's remarkable how popular Counter-Strike: GO still is today despite the age of its game engine; continued Moore's Law on GPUs has sped that engine into stratospheric frame rates, creating an unexpected "subrefresh-latency" situation -- and a reason for these replies to this thread to exist.

It's amazing how much nitpicking we can do over these milliseconds. Lag test methodologies are often annoying black boxes -- some testers stopwatch to the beginning of the GtG transition, other testers stopwatch to the end. Photons are hitting the human eye even before the VESA 10% point of the GtG curve, since GtG 5% means black has transitioned to a dim gray around RGB(12,12,12) (non-gamma-corrected) on a transition from RGB(0,0,0) to RGB(255,255,255). Benchmarking how early in the GtG curve the human eye sees enough photons from a pixel to begin reacting is often a contentious debate, since lag-test equipment effectively sets an artificial trigger threshold for the stimulus. Fortunately, thanks to 1 ms GtG, this becomes an insignificant error margin in modern reaction-time testing if the researcher smartly chose a TN panel to benchmark on. There's a haze of error margins, especially since many researchers were benchmarking slower LCDs in those older 2000-2010 scientific papers.
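As a rough illustration of why those early GtG photons matter (a sketch added for clarity; the 2.2 display gamma is an assumption, and real GtG measurements are made on light output, not code values):

Code:

# What 8-bit gray corresponds to a given fraction of a black-to-white transition?
# "linear" treats the fraction as a raw code value (the non-gamma-corrected
# RGB(12,12,12) figure above); "perceptual" assumes a 2.2 display gamma, i.e. the
# code value whose light output equals that fraction of white.
gtg_gray <- function(fraction, gamma = 2.2) {
  c(linear = round(255 * fraction),
    perceptual = round(255 * fraction^(1 / gamma)))
}
gtg_gray(0.05)  # linear ~13, perceptual ~65 -- already a clearly visible gray
gtg_gray(0.10)  # linear ~26, perceptual ~90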
ad8e wrote:otherwise you'll have to render each strip twice. But when I tried, glfw didn't work properly with Vsync off and single-buffered mode, and diagnosing the problem further indicated that it was not glfw's fault but Windows 8.1's. So there are some obstacles that are hard to work around.
The point is getting subrefresh latency in an existing game that was not originally designed for subrefresh latency. The fact that this is doable with Quake and Source engines, and that esports uses games built on such engines (e.g. Counter-Strike: GO), means this becomes a relevant factor in esports lag measurements that many researchers don't quite fully understand.

The point is Blur Busters keeps an open mind. We'd love to see more studies in the sphere of esports. Societal benefits may be marginal, but Blur Busters is one of the few sites willing to fund some related research from time to time. Some YouTubers do it (Linus, Battle(non)sense) but don't delve deeply into the universe of subframe latencies with a scientific method, or are deficient in another sense (e.g. not understanding the interaction factors between scanout and sync tech). They do good stuff with totally different focuses, but there are so many gaps left to fill.

Trying to define the reaction stopwatch start (how a mouse decides that a gamer is pressing a button) and the reaction stopwatch end (enough photons hitting the eyes to begin reacting to, or the frequently leapfrogging simultaneous visual+aural stimuli, direct and indirect) puts a big fuzzy error-margin band on the reaction time. It's getting tighter and tighter, but many old papers have a surprising amount of hidden error margins -- even the articles on Blur Busters do -- and we can all strive to keep improving this science.

Building correct systems is important. But esports works with premade systems (e.g. game engines), so we have to study imperfect systems and understand the error margins better. The key is building the correct test -- and that can be hard, given the incredibly complex lag chain from button to photons and the difficulty of defining lag-stopwatch-start through lag-stopwatch-end, given the often complex nuances of the input and of the output (example: single-point lag tests versus fullscreen peripheral lag tests).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 07 Jul 2019, 21:38

Heck, much modern computer equipment is helping reduce lag in ways that scientific reaction-time tests don't show -- on both the input and output ends.

I'm sure there are now predictive-help factors, like mouse electronics pretriggering a button press based on an ultra-faint signal from a human finger well before the full mechanical click, which means esports mice will invariably react faster than a $5 RadioShack 5 mm-travel pushbutton switch in the jerryrigged scientific setup of an old reaction-time paper from the last century.

Some mice will filter and add lag, but some mice can get your mouse-event code executing in your software within less than 1 ms of a mouse LED illuminating (for an LED attached to the mouse button). There is certainly muscle lag in your finger pressing the button, but if you're already putting pressure on the mouse, it can be less than ~0.1 mm of travel between your trigger-ready pressure and the trigger-activation tipping point. Many esports players already pre-pressure their mouse trigger button to just shy of click pressure in a super-primed situation, and let their startle reaction do the necessary muscle tic to trigger the button (e.g. a sniper aiming at a single location where a player will suddenly appear). This is a different priming than a common reaction-time app, even one designed by a researcher; many people don't put themselves into that pretrained itchy-trigger alertness in arbitrary reaction-time tests (the simple tests don't have the tension of a 100 m sprint or of itchy-finger snipering, both of which piggyback more fully on the startle response). The tension, the spectators, the prizes, the survive-with-your-life feeling, the adrenaline: the esports situation primes the player to be very pumped and alert. That may not have been accounted for in some reaction-time tests, and it creates error-margin fuzzyband influences that may not be currently anticipated.

There are so many ways a player can reduce button-prime latency, and mouse manufacturers have done so many things to help the player get into that hyper-primed state, probably almost (but not quite) to the predictive/guess level. I wonder (speculation!) whether tomorrow a mouse button might even have an EKG sensor, to try to detect a human body signal before the finger muscle actually moves -- who knows?

Even 100 m sprint footpad sensors are not as sensitive as some esports mice! The trigger travel of some esports mice, from a player's intentional near-click pressure to click-actuation pressure, is under 0.1 mm in some situations. This is not the full click travel, but the distance remaining after the player pre-pressures the mouse button to just shy of the click-trigger point. This currently shaves towards the fastest end of the fuzzy error-margin band.

I would not be surprised if at least one esports mouse manufacturer has tried internally to push the envelope further by testing an EKG sensor in a prototype (and possibly failed to make it reliable); they are vested in creating better mice, maybe to win a valuable sponsorship, and in reliably guessing the click intent earlier and earlier. Right now, the science is mainly focused on the fastest click actuation -- tightening that error-margin fuzzyband in ways that make your head spin.

Also consider pressure sensors like Apple's trackpad with its Taptic Engine. It feels like a true mouse click but has 0 mm of travel: a simulated click vibration that is surprisingly the most realistic haptic feedback, triggered by an uncannily accurate pressure sensor that is more reliable than an old mechanical mouse button. The Taptic Engine vibrates in a way that feels nearly identical to an actual mouse button. So it's probably possible to emulate the feel of a mouse button with 0 mm of travel now, with modern Apple-laptop-quality haptic feedback. This kind of tech could eventually hit esports mice if manufacturers find that it improves players' reaction times even further. Who knows!

Even the USB transmission latencies of some 1000 Hz mouse samples can sometimes be pushed to roughly ~0.1 ms, especially with a fast USB chip on a modern computer (e.g. a USB3 port handling a USB1 mouse will have lower lag than a USB2 port handling a USB1 mouse). USB lag is often a crapshoot, but it can be much less than one poll cycle. Some devices are really slow (e.g. 4 ms, especially the original Arduino), while in certain esports mice I've seen sub-1 ms from LED illumination to mouse event-code execution. I'm surprised at how much USB lag can vary from port to port at hundred-microsecond timescales.

Monitor manufacturers have figured out how to synchronize cable scanout to panel scanout and push GtG transitions faster early in the GtG curve; combined with sheer-overkill VSYNC OFF framerates, this enables subrefresh latencies not typically found in researchers' screen-based lag tests (which may carry refresh-cycle-granularity or multi-refresh-cycle lag error margins).

Maybe some people want to research these error margins and provide hard data on the fuzzy bands in a non-speculative manner? Some of our readers love that stuff. We keep an open mind regardless. We don't arbitrarily dismiss "wow" reaction-time numbers around here (though the caveat of error margins gets thrown in too); that's not the Blur Busters approach. See a problem with the error margins? We would love researchers to help us understand the lesser-known error margins in a non-speculative manner. We want to commission improved understanding of reaction time (and its error-margin fuzzybands) in the modern esports situation.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 07 Jul 2019, 22:41

My main focus is on society rather than gaming. Faster input and display helps everyone, not just competitors in video games. But the ultra-high-end equipment doesn't seem to be useful for anyone except those competitors. 144 Hz monitors have historically had poor color behavior or other tradeoffs, and if they were intended to be useful to the general non-gaming public, we'd probably see other refresh rates like 72 Hz and 90 Hz that preserve the abilities of 60 Hz monitors. That these monitors are uncommon means that monitors are stratified into two markets: one for gamers and one for everyone else.

I think the current state of gaming equipment is at a decent level. The smart gamers buy better stuff and gain a small advantage. The dumber ones buy random stuff but don't lose much. But more than that, I like how gaming equipment is also available to the general public at a decent price, such that everyone else benefits from progress in peripherals.

University-affiliated researchers release their work to the public for free, so if they pay attention to a competition, their goal is to improve fairness and accessibility. For example, in running, it's commonly believed that all of the top runners are doping. The priority for anti-doping efforts is to detect doping consistently across competitors, and to reduce doping to safe levels so that it has less effect on the competitors' lives. In this way, doping is kept fair and manageable. University researchers don't receive the benefits of selling comparative advantages, but industry does.

As a thought experiment, if Overwatch set a rule that using more than one frame per monitor refresh was disallowed, that would kill off a lot of the possible comparative advantages, but also have no negative effect on the quality of the game. Or a person could show up to a CS:Go tournament with an EMG wire in his jaw, reducing his reaction time by 80 ms. But that would quickly be disallowed because it's unhealthy for the tournament overall.

Our goals are very similar but not identical: you want to push latency down to its lowest levels, and so do I, but keeping costs low is a bigger priority for me. I have planned a public-domain frame-presenting example that allows developers to get good vsync behavior without needing to learn how vsync works. Scanline racing is interesting, but if it requires a powerful graphics card, then it's more oriented towards competition than the general public. Vsync off is a decent advantage in competition, as its 2 ms gives a ~5% advantage assuming a 15 ms standard deviation of reaction time. However, its tearlines make it unsuitable for the casual user, so it's not something I care too much about.
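One way to see where a figure like that ~5% comes from (a sketch added for illustration, reading the 15 ms as the standard deviation of the timing difference between the two players; this is an interpretation, not the poster's own calculation):

Code:

# Win probability from a 2 ms latency edge, if the timing difference between
# two otherwise-equal players is normal with a 15 ms standard deviation.
pnorm(2, mean = 0, sd = 15)  # ~0.553, i.e. roughly a 5% edge over a coin flip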

My problem with Marwan's article is not that his results are too "wow", but his methods. For example, the numbers he gets out of flood's video look perfectly legitimate, and it's likely he did count frames in flood's local video, but his presentation in the article needs to show that. And his Quick Draw section understates the advantage that 1 ms gives; my predictions are more aggressive than his. But I want claims to be supported and free of mistakes, instead of merely arriving at the correct number. For example, I claimed before that 140 ms / 5 trials is possible on the human benchmark test without guessing, but I still take issue with him pulling 150 ms from the human benchmark database, because it's not good science.

With regards to mouse manufacturers: my opinion of them is lower than yours. I agree with Sparky that 5 ms latency is "inexcusable" (his threshold is 1 ms), yet nearly all mice still have it. That includes my current mouse, the Logitech G Pro, which is commonly regarded as a great mouse. Given that they can't even fix this simple thing, I doubt they are pushing the envelope further.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 08 Jul 2019, 12:18

ad8e wrote:With regards to mouse manufacturers: my opinion of them is lower than yours. I agree with Sparky that 5 ms latency is "inexcusable" (his threshold is 1 ms), yet nearly all mice still have it. That includes my current mouse, the Logitech G Pro, which is commonly regarded as a great mouse. Given that they can't even fix this simple thing, I doubt they are pushing the envelope further.
I can confirm that many manufacturers do it differently. I've seen it.

Some accessory makers (mouse, keyboard, etc.) basically just slap together an accessory from known high-performance components, but don't bother to optimize those milliseconds away, simply trusting known-good sensors and the preinstalled microcontroller firmware supplied by an outsourced parts supplier for their esports accessory. Some are already quite good.

Other accessory makers treat it almost as a religion, thanks to the much bigger money involved today and all the coveted sponsorships the manufacturers are trying to win, and thus get the best scientific measuring equipment to try to shave milliseconds off.

Blur Busters is indeed interested in the science of new, modern tests, and in understanding that science. Newly invented tests can simultaneously help society (non-esports) and help gaming. We may not cover as much depth as we sometimes wish, and have to walk a fine line between easy and advanced writing. We respect both journal writing and popular-science writing, yet have to write in a more popular-science way that may gloss over details, because we need to remain understandable to an educated high-end player.

It's amazing how differently manufacturers go about this: some piggyback on just selling a fancy-shaped accessory built from off-the-shelf high-end components and off-the-shelf microcontroller firmware, while others truly care about every element of the accessory, including improving latency via custom hardware and software. The approaches can be dramatically different.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 08 Jul 2019, 12:43

ad8e wrote:My main focus is on society rather than gaming. Faster input and display helps everyone, not just competitors in video games. But the ultra-high end equipment doesn't seem to be useful for anyone except those competitors.
Spinoff talk time.

Consider this: latency science is also important for VR. Think about reaching even remotely three-sigma (let alone four-sigma or five-sigma) virtual-reality comfort at the visual/latency level, where shaving a few milliseconds off can reduce nausea for a bigger percentage of users. Picture how many percent get comfort from a specific 3D tech -- say 80%, 90%, 91%, 95%, 99%, 99.9% getting eye comfort with VR. It's tough progress, especially with so many failed 3D fads. Different humans will trigger nausea or other discomfort at smaller latencies than others, so there's merit in continuing to shave off those milliseconds where possible.

Not related to milliseconds, but the incredible innovations in VR mean a greater percentage of people find more visual comfort with VR (at least the calm "sitting on a beach" VR experiences) than with RealD/Disney 3D glasses at the movie theatre. We've already passed the uncanny-valley point where high-end VR is finally more visually comfortable than 3D movie glasses -- but it's still nowhere near five-sigma human comfort.

The grandmother who gets headaches watching 3D movies will often not get headaches with modern VR (Rift-class / Vive-class) for calm VR apps -- the quality is rapidly improving, as long as such users stick to "Comfortable"-rated VR titles (not those VR rollercoasters!). Think of the grandmother who always wished to visit the Grand Canyon but is now in hospital and will never be able to. Shaving a few milliseconds off the VR latency makes it that much more likely grandma is happy visiting the Grand Canyon in VR, directly from her hospital bed, without headaches from latency/vertigo/etc. (even better if the experience simply fade-teleports between sightseeing points, with a virtual beach chair that has the same tilt as the hospital bed).

It's easy to dismiss VR because of a bad experience with Google Cardboard or getting dizzy on an Oculus Rift rollercoaster, but I've purchased a $3 indie kite-flying game that every family member has told me is more comfortable than a 3D movie. There are accidental little indie gems that are surprisingly simple yet feel like standing on a Caribbean beach flying a realistically behaving kite -- those mini "VR vacation" moments. Curated software on current high-end VR is an impressive application of low-latency technology to real-world comfort. In a few years, I imagine we could include others -- imagine getting a son or daughter to join, with a realistic automatic 3D representation of themselves: true telepresence, sharing a mini 3D vacation moment, or a business meeting without needing to fly down.

(Note: For non-sitting-down VR -- roaming VR, which is outside the scope of this thread -- there's obviously the fear of crashing into real-world objects. Rift/Vive use transparent "blue Holodeck-style grids" that automatically appear when you walk too close to your furniture or walls, to warn you back to the center of the room, or ghost images of your furniture/walls that appear when you get too close, as happens with VR headsets that have built-in video cameras. This traditionally required a lot of external sensors but is now being integrated into some headsets (as the tethered Oculus Rift S and the standalone Oculus Quest are beginning to do), and it is becoming more automatic -- less setup required, probably eventually none; just "put on the VR glasses". All this roaming VR tech can require even more latency reduction, given the need to properly engineer movement/vertigo concerns -- and that's even before the automatic visual geofencing systems now found in modern high-end VR to keep you from bumping into things.)

Many say 99% of VR software is crud, but as the science of creating VR improves, properly immersive content improves dramatically by working down all those vertigo-related issues, and that is where factors like latency, performance, and motion blur start to become the main bottlenecks of comfort.

Not just lag (traditional lag), but also frame-to-frame lag consistency (stutter). In VR, even a <0.5 ms microstutter (divergence between rendertime and photontime) has much more amplified human visibility -- even a slow head turn at 10000 pixels/sec turns a 0.5 ms stutter into a 5-pixel jump -- so framepacing must be impeccable and rendertime:photontime kept in relative sync as perfectly as possible, with the lag between them as tight as possible too. This is also explained as the Vicious Cycle effect in the 1000Hz-Journey article, where milliseconds become even more important as screens simultaneously approach "retina quality" in the resolution/FOV/persistence departments.
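The microstutter arithmetic above is worth spelling out (a minimal sketch added for illustration):

Code:

# Visible jump, in pixels, for a given eye-tracking speed and timing error.
stutter_pixels <- function(speed_px_per_s, error_ms) {
  speed_px_per_s * error_ms / 1000
}
stutter_pixels(10000, 0.5)  # 5 pixels, matching the head-turn example above
stutter_pixels(10000, 0.1)  # even a 0.1 ms error is a 1-pixel jump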

And consider that high-end VR is not yet as easy as slapping on untethered Oakley sunglasses. Access to comfortable high-end VR is not easy; we're still at the outlier edge of the compute/latency/etc. limits that finally push VR over the uncanny valley (if you spend, set up, and choose the right software). We still need to keep shaving off those milliseconds in VR.

The input-to-photons latency science is hugely more important for passing the Holodeck Turing Test (not being able to tell ski goggles apart from a VR headset in a blind A/B test of real world versus virtual reality). We probably won't pass this Holodeck Turing Test for several decades, possibly not even this century -- not until we reach quadruple-digit refresh rates for blurless sample-and-hold that eliminates the flicker of traditional low persistence: retina refresh rate, retina dynamic range, retina resolution, flicker-free, strobe-free, phantom-array-free, and imperceptible lag, all simultaneously -- while continuing to miniaturize it successfully (preferably into Oakley-style sunglasses rather than heavy ski goggles). This is a thought experiment where the real-world application of this thread is critically important.

For VR, the simultaneous performance/latency/display progress required for all of this far exceeds what's already been written in this thread, and therefore even a small bit of progress here is a great real-world spinoff of all this milliseconds science -- whether for special-application VR (organizational) or mainstream VR.

I think this is probably more "for the public good" (it will likely go mainstream rapidly once you can cram a better-than-best-of-Rift experience into sub-$100, lightweight, sunglasses-sized untethered VR).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: Interesting project about mouse/ gamepad latency

Post by spacediver » 08 Jul 2019, 14:45

ad8e wrote:
the human benchmark reaction time test
The 150 ms results in their database are from cheating/guessing, there's no point in referencing them.
Here's the quote from my article:
If you look at the distribution, you can see that some people are performing at around 150 ms. These are probably younger folks who have excellent reflexes and are on good hardware (it’s also possible that some people are using clever methods to cheat), but 150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus (at least when it comes to pressing buttons with a finger), although there may be a few people who can push this limit lower.
I think it's perfectly valid to reference these results, especially given that I've qualified them by including the possibility of a pool of cheaters. And (as my article also points out) flood has regularly achieved scores below 160 ms (based on multiple trials, so these aren't single fluky events where he was guessing), and as low as 140 ms, so I think it is very reasonable to conclude, as I have, that "150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus..."

Yet you are claiming that a score of 150 ms definitely indicates cheating. This seems epistemically irresponsible given the lack of solid evidence to support the claim. The anomalous bump at ~100 ms in the distributions you reference (in the archived version of the human benchmark website) certainly suggests cheating, but I fail to see how those distributions prove that the scores of ~150 ms are due to cheating. You claim that there is a cluster after 150 ms which represents real results, and sparse scores from 110-140 ms. Can you post an image of this distribution in this thread so we can see what you're referring to?
ad8e wrote:For visual reaction times with current monitor/mouse input tech, 160 ms average measurements are probably possible, and 140 ms averages are not.
Again, you seem so sure of this, yet are not providing evidence, and are dismissing evidence to the contrary (flood achieving ~140 ms on the benchmark). Flood, by the way, is someone I have known and interacted with for years on many projects, and is an academic at Stanford with a solid publication record in the fields of physics and statistics. I trust him completely, and he is most certainly not cheating.

I may ask him to chime in here.

Also, in the Linus Tech Tips video (https://www.youtube.com/watch?v=tV8P6T5 ... DT4AXsUyVI), at 17:30, a trial is shown that has a reaction time of 103.7 ms. Granted, it's a single trial, but still worth noting.
ad8e wrote:
Here is a video of [flood] sniping bots in CSGO.
More than half the frames are dropped in that youtube video, which makes counting frames suspect. Access to the raw video is needed for that method to work.
I explain in the article that the original footage is at 100 fps. I counted the frames myself in the raw video. You seem to have incorrectly assumed that I didn't have access to this 100 fps video.
ad8e wrote: Startle reaction times are limited to startle responses, which appear not to be useful for practical game purposes. (i.e. they are likely skipping large parts of your brain's usual processing pathway)
The first part of my article was to provide an overview of motor reflexes of the human organism. The startle response is an important part of this story, and shows what we are capable of under extreme conditions, where (as you point out), the processing pathway is more efficient. In part 2 of the article, I hypothesize a mode of action which I call the manual tracking reflex, which would also involve a more efficient processing pathway.

So bringing up the startle reflex is important for two reasons:

1) It is an important part of the story of motor reflexes (regardless of whether or not it is harnessed in gaming conditions).
2) It sets the stage for discussing the manual tracking reflex.
ad8e wrote:
there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time
Scientists are aware of this and take it into account already.
If they're aware of the fact that these outliers haven't been well represented in most studies of human reaction time, then by definition they haven't taken it into account. If they had taken this into account, then we'd see more studies that do represent them!
ad8e wrote:
One of the sprinters (female) had an average reaction time in the legs of 79 ms, and her fastest was 42 ms
It's not a good idea to cite those fastest times. 42 ms is below the theoretical limit and is from guessing. The extreme averages should also be viewed with caution.
From my article:
Even if we are conservative and assume that the fastest times for some of these sprinters was based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it’s not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms. Also note the reaction times were faster in the arms than the legs. This is important to remember as it has implications for gaming.
I think the above quotation does show a cautious representation of the data. I've made it clear that the fastest trials may have been due to guessing, but also emphasize that the averages are still well below ~100 ms.

Assuming that there was not a methodological flaw or fabrication of data, these averages either represent genuine reflexes, or a huge statistical coincidence (positive replication would help reduce the likelihood of these alternative explanations).

Also, I'd be interested to learn why you think 42 ms is below the theoretical limit. What is the theoretical limit, and how is it derived, in your estimation?

Another point to bring up here is that it may be the case that athletes can perform better in experimental conditions than in a real race. In a real race, where the consequences of a false start incur a massive penalty (possibility of disqualification), athletes may be reacting in a more conservative manner (trading reaction time for reducing risk of a false start).
ad8e wrote: The <100 ms numbers from the research papers are real, but they are caused by extremely good measurement systems, rather than really fast people.
Even if this were true (that these athletes weren't extreme outliers, but rather had the benefit of better measurement systems), doesn't this make my point even more salient, since the real extreme outliers would have even lower reaction times?
ad8e wrote:
Quake LG section
Optimal LG technique aims at the rear of the target rather than the centerline, so this entire section is bogus and should be retracted.
Two points:

1) This was an illustrative simulation that spelled out assumptions in advance to show the effect of input lag. It's meant to be somewhat realistic, but criticizing it because optimal technique might involve aiming at the rear of the target rather than the middle misses the forest for the trees. The forest, in this case, is that small amounts of lag, so long as they are statistically real, have a real-life impact. The magnitude of this impact will vary depending upon the assumptions, but disagreeing with a particular assumption doesn't render the exercise "bogus".

2) Why would you say that optimal technique aims at the rear of the target? One could easily make the argument that aiming at the centre of the target is a better strategy, since aiming at the centre provides a margin of error on both sides of the midpoint, which is important for accommodating variations in aim (due to neuromuscular noise) or variations in enemy movement. It's the same reason that your best chance of hitting any part of a target is to aim for the centre.

ad8e wrote: For a separate goal as a research article, I think the parts that are useful are the citations to the research papers and the Quick Draw section. The author's interpretation of the research articles is very poor, but the underlying research articles are still good.
Can you point to cases where my interpretation of the studies is "very poor"?
ad8e wrote: The Quick Draw section is just looking at the CDF of a normal distribution. The 4 ms frame time is sufficiently small compared to the 20 ms player latency standard deviation that the overall distribution is normal. It's easy to calculate: given 20 ms standard deviation of population latency, 4 ms frame time, and 15 ms standard deviation of an individual's latency (made up number), the total standard deviation is sqrt(20^2 + 15^2 + 4^2/12) = 25 ms. My CDF table gives 57.93% at 0.2 = (5 ms / 25 ms). That actually makes me question the validity of the article's simulation, since 58% > 57% and I used a higher standard deviation.
Given your assumptions, your calculation of 57.9% is indeed correct:

Code:

pWinning = 1 - pnorm(0, 5, 25)
The above code yields 0.579 in R (where pnorm(0, 5, 25) represents the probability of losing when having a 5 ms latency advantage, i.e. the definite integral between negative infinity and 0 of the difference distribution between the two players: mean difference of 5 ms, sd = 25 ms).

1 - pnorm(0, 5, 25) is identical to pnorm(5, 0, 25), but I chose the first formulation since it may be easier for some people to follow along.

I'm guessing that the reason my simulation showed ~0.57 instead of ~0.58 is that you treated the uniform distribution as a normal distribution.

Here is the R code that I used in the simulation (edited so it only looks at the case when the display latency difference is 5 ms):

Code:

numTrials = 1000000;
#Tally of frags for player A (who has 0 ms display latency)
winsA = 0;
# For each trial:
for (i in seq(1, numTrials))
{
  # Player A
  displayLatencyA = 0;
  # Draw a sample from normal distribution with mean of 170 ms, and sd of 20 ms.
  neuralLatencyA = rnorm(1, 170, 20);
  # Draw a sample from uniform distribution between 0 and 4.17 ms.
  refreshLatencyA = runif(1, 0, 4.17);
  
  # Player B
  displayLatencyB = 5;
  neuralLatencyB = rnorm(1, 170, 20);
  refreshLatencyB = runif(1, 0, 4.17);
  
  #Total lag
  totalLatencyA = displayLatencyA + neuralLatencyA + refreshLatencyA;
  totalLatencyB = displayLatencyB + neuralLatencyB + refreshLatencyB;
  # If total latency for player A is less than or equal to player B, add 1 to winsA, else leave winsA as it is.
  winsA = ifelse (totalLatencyA <= totalLatencyB, winsA + 1, winsA);

}

probWinA = winsA / numTrials;
cat(probWinA)
I just ran this with a million trials, and got a score of 0.569974, which is ~ 57%.
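For what it's worth, a quick normal approximation of the same two-player difference distribution (an added cross-check, not part of the original post) lands near the same figure:

Code:

# Normal approximation of (totalLatencyB - totalLatencyA) from the simulation
# above: two independent N(170, 20) terms and two independent U(0, 4.17) terms.
sd_diff = sqrt(2 * 20^2 + 2 * 4.17^2 / 12)  # ~28.3 ms
pnorm(5, 0, sd_diff)                        # ~0.570, close to the simulated 0.569974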
ad8e wrote: In fact, it looked so suspicious that I plugged in the article's conditions into Mathematica:

Code:

f[x_] = CDF[TransformedDistribution[x + y, {Distributed[x, NormalDistribution[0, 20]], Distributed[y, UniformDistribution[{-4.17/2, 4.17/2}]]}], x]; f[5]
This gives 59.85%. So the author's simulation code is wrong.
I'm not familiar with Mathematica nor do I have the software, so I can't comment much on your code. On the surface, it looks like it might be correct, but I'm not familiar enough with how to properly combine two distributions in Mathematica, so I can't say for sure if it's sound.

On the other hand, I can't see an error in my simulation, and the results are clear and consistent (you can download R for free and run the code yourself).
