Interesting project about mouse/ gamepad latency

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 08 Jul 2019, 16:28

I forgot about VR. I don't own a VR system. While the VR software catalogue is often looked down upon, VR itself still elicits extremely positive reactions from those who experience it. And some tools, like Tilt Brush, are only possible in VR; from a futurist's perspective, 3D drawing is far better than 2D drawing.

A 2 ms latency that is just discernible on a screen becomes a significant problem in VR, so you're right that milliseconds really matter in that setting. However, I think one of the reasons I ignored VR is that the low focus distance causes disorientation, not just latency. Maybe this will be fixed in the future (back to 3D glasses?), but with head-mounted systems, you'd need something like a lens to change the focus of the screen.
spacediver wrote:I think it's perfectly valid to reference these results, especially given that I've qualified them by including the possibility of a pool of cheaters. And (as my article also points out) flod has regularly achieved scores of below 160 ms (based on multiple trials, so these aren't single fluky events where he was guessing), and as low as 140 ms, so I think it is very reasonable to conclude, as I have, that "150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus..."

Yet you are claiming that a score of 150 ms definitely indicates cheating. This seems epistemically irresponsible given the lack of solid evidence to support this claim. The anomalous bump at ~100 ms in distributions you reference (in the archived version of the human benchmark website) certainly suggests cheating, but I fail to see how those distributions prove that the scores of ~150 ms are due to cheating. You claim that there is a cluster after 150 ms which represents real results, and sparse scores from 110 - 140 ms. Can you post an image of this distribution in this thread so we can see what you're referring to?
I'm not saying they are all cheaters. As do0om points out, there are players who have achieved it, including himself. A player whose true average is 160 ms can score 150 ms over five trials without much difficulty. The point is, if you know that the pool is tainted by cheaters up to 150 ms, it cannot be used to support your point. It doesn't matter whether your point is true (in fact, I even support it).

The graph can be seen in the archive link I posted: https://web.archive.org/web/20170814021 ... ctiontime/
spacediver wrote:Again, you seem so sure of this, yet are not providing evidence, and are dismissing evidence to the contrary (flod achieving ~140 ms on the benchmark).
You need evidence to show that it's possible. The claim doesn't stand without support. I'm saying your evidence is insufficient.

Flood achieving 140 ms could be huge luck from standard deviation, or guessing, or whatever. Bringing up a single trial is not good enough, though I don't doubt his honesty. But in this context of errors and standard deviation, using low-sample results seems irresponsible at best. A more pessimistic view would consider them misleading. If you care about reaction time tests, we have an even better candidate in this thread, do0om, whose reaction times/equipment seem even better than flood's. I can supply him with a top-tier reaction test, if he's willing to do the trials.
spacediver wrote:Also, in the linus tech tips video (https://www.youtube.com/watch?v=tV8P6T5 ... DT4AXsUyVI), at 17:30, a trial is shown that has a reaction time of 103.7 ms. Granted, it's a single trial, but still worth noting.
It's not 103.7 ms; the numbers on the top left are in 60ths. I previously linked to an 8:53 count, which is faster. The trial you mentioned is actually 10/60 + 37/60/60 = 177 ms, which is pretty good under CS:GO conditions but not amazing for a single trial.

They should not have numbered it that way; it's confusing.
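To make the conversion concrete, here's a quick sketch (the function and parameter names are mine; the overlay itself is unlabeled):

```python
# Convert the overlay's base-60 timecode (seconds : sixtieths :
# sixtieths-of-sixtieths) into milliseconds.
def timecode_to_ms(seconds: int, frames: int, subframes: int) -> float:
    return (seconds + frames / 60 + subframes / 3600) * 1000.0

# The trial that was read as "103.7 ms" is really 0:10:37 in this format:
print(timecode_to_ms(0, 10, 37))  # ~176.9 ms, i.e. the 177 ms above
```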
spacediver wrote:I explain in the article that the original footage is at 100 fps. I counted the frames myself in the raw video. You seem to have incorrectly assumed that I didn't have access to this 100 fps video.
The article needs to explicitly mention that the youtube video drops frames and that you didn't use it; otherwise, your method appears different from what you present. As I said,
ad8e wrote:For example, the numbers he gets out of flood's video look perfectly legitimate, and it's likely he did count frames in flood's local video, but his presentation in the article needs to show that.
spacediver wrote:The first part of my article was to provide an overview of motor reflexes of the human organism. The startle response is an important part of this story, and shows what we are capable of under extreme conditions, where (as you point out), the processing pathway is more efficient. In part 2 of the article, I hypothesize a mode of action which I call the manual tracking reflex, which would also involve a more efficient processing pathway.

So bringing up the startle reflex is important for two reasons:

1) It is an important part of the story of motor reflexes (regardless of whether or not it is harnessed in gaming conditions).
2) It sets the stage for discussing the manual tracking reflex.
Ok, I'll accept that. That is a valid justification for discussing the startle reflex. There's no problem with clearly marked speculation.
spacediver wrote:
ad8e wrote:
there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time
Scientists are aware of this and take it into account already.
If they're aware of the fact that these outliers haven't been well represented in most studies of human reaction time, then by definition they haven't taken it into account. If they had taken this into account, then we'd see more studies that do represent them!
Here's the full quote from the article for context:
The reaction times in these events are measured with force transducers in the starting blocks, which measure the force produced by the legs. The loud noise signal in a sprint race can produce a startle response, and the IAAF limit of 100 ms seems to be based on assumptions that humans cannot react (with their legs) faster than 100 ms to an auditory signal.

However, it’s not clear whether these assumptions take into account two important facts: First, a startle response improves reaction times, and second, there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time. It turns out that this 100 ms limit is based on wrong assumptions.
Wrong, because there isn't the slightest indication that the startle response has any ability to improve racers' reaction times. And the 100 ms number shows that the limit already takes into account any possible super-human freaks: otherwise, we'd see it bumped up to about 120 ms, since that is below any racer's fastest start.

Some people are under the impression that 100 ms is a reasonable threshold for reaction time (this even came up once in this thread), when in fact it has a hugely generous tolerance for the conditions sprinters are under.
spacediver wrote:
ad8e wrote:
One of the sprinters (female) had an average reaction time in the legs of 79 ms, and her fastest was 42 ms
It's not a good idea to cite those fastest times. 42 ms is below the theoretical limit and is from guessing. The extreme averages should also be viewed with caution.
From my article:
Even if we are conservative and assume that the fastest times for some of these sprinters was based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it’s not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms. Also note the reaction times were faster in the arms than the legs. This is important to remember as it has implications for gaming.
I think the above quotation does show a cautious representation of the data. I've made it clear that the fastest trials may have been due to guessing, but also emphasize that the averages are still well below ~100 ms.

Assuming that there was not a methodological flaw or fabrication of data, these averages either represent genuine reflexes, or a huge statistical coincidence (positive replication would help reduce the likelihood of these alternative explanations).

Also, I'd be interested to learn why you think 42 ms is below the theoretical limit. What is the theoretical limit, and how is it derived, in your estimation?
Those research articles are real, and there is no indication that their results are either outliers or false, with respect to the methodology they state. I think they should be taken seriously as a true product of the described methodology.

However, I think your interpretation is poor because you are pulling out the fastest times and fastest averages from the Finnish article, when we already know that such times are often tainted by guessing. Furthermore, the methodology in that article is under suspicion for its possibly low trial count (they obscure their procedure: how many trials they ran, and how they threw away trials). That means the fastest times and averages are the lowest-quality data in the article, and an interpretation should not rely on them, or even mention them. The data as a whole are more reliable and worth bringing up in a summary. The fastest numbers, both the ones in this paper and the others you bring up, are not subject to just the normal cautious interpretation. They must be subject to even greater scrutiny than usual, because we can see the instances of guessing (and fraud in the human benchmark) very clearly.

The theoretical limit comes from research articles; Pain and Hibbs derive one. They sum up the minimum possible latency they believe each part of the chain has (ear to brain, brain processing, peripheral nervous system, muscle activation, and others) and arrive at a very low number that bounds minimum latency. The additional latency seen in reaction tests comes from unexplained factors, which then become the object of study.
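As a sketch of that bounding argument (the stage names and millisecond values below are illustrative placeholders, not Pain and Hibbs's actual figures):

```python
# Sum the assumed minimum latency of each stage in the chain to bound
# the fastest possible reaction. Placeholder values, for illustration only.
minimum_latency_ms = {
    "ear to brain": 8,
    "central processing": 25,
    "signal down peripheral nerves": 15,
    "muscle activation to measurable force": 20,
}
floor_ms = sum(minimum_latency_ms.values())
print(f"theoretical floor: {floor_ms} ms")  # anything faster is guessing
```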
spacediver wrote:Another point to bring up here is that it may be the case that athletes can perform better in experimental conditions than in a real race. In a real race, where the consequences of a false start incur a massive penalty (possibility of disqualification), athletes may be reacting in a more conservative manner (trading reaction time for reducing risk of a false start).
Back when false-start forgiveness was per athlete, they all guessed like crazy. That forgiveness let them guess more, but it didn't improve their true reactions much. There is still some forgiveness, but the golden era of guessing is over.
spacediver wrote:
ad8e wrote: The <100 ms numbers from the research papers are real, but they are caused by extremely good measurement systems, rather than really fast people.
Even if this were true (that these athletes weren't extreme outliers, but rather had the benefit of better measurement systems), doesn't this make my point even more salient, since the real extreme outliers would have even lower reaction times?
I'm not clear on what your point is; I'm only pointing out all the flaws in your methods. My own speculation already aligns with some of your claims, but I disagree with the way you arrive at them. It is true that I expect some spectacular performers to do even better than the athletes in the papers, when measured under the researchers' conditions.
spacediver wrote:
ad8e wrote:
Quake LG section
Optimal LG technique aims at the rear of the target rather than the centerline, so this entire section is bogus and should be retracted.
Two points:

1) This was an illustrative simulation that spelled out assumptions in advance to show the effect of input lag. It's meant to be somewhat realistic, but criticizing it because optimal technique might involve aiming at the rear of the target rather than the middle misses the forest for the trees. The forest, in this case, is that small amounts of lag, so long as they are statistically real, have a real-life impact. The magnitude of this impact will vary depending upon the assumptions, but disagreeing with a particular assumption doesn't render the exercise "bogus".

2) Why would you say that optimal technique aims at the rear of the target? One could easily argue that aiming at the centre of the target is a better strategy, since the centre provides a margin of error on both sides of the midpoint, which is important for accommodating any variations in aim (due to neuromuscular noise) or variations in enemy movement. It's the same reason that your best chance of hitting any part of a target is to aim for the centre.
Science always has errors; sometimes the errors are marginal, sometimes they mean the conclusions need qualification. However, I think the LG error is fatal to the entire method. That reaction time affects LG accuracy is already well-known, and the rest of the section tries to determine a number. But with the wrong technique, the number has no meaning.

Why the rear is optimal: consider as a thought experiment, someone with perfect mouse tracking but with 200 ms fixed reaction time. He aims at the trailing end of the target. As long as the target keeps moving in the same direction, he won't miss the target from the trailing end. But when the target switches directions, now the entire target passes through his LG beam before he starts missing from the new trailing end, rather than just half the target if he aimed at the centerline. So he has lessened his misses from the leading end without compromising his misses from the trailing end. When mouse tracking error is added back to the picture, the optimal aim point is forward of the trailing end by the amount that minimizes the sum of tracking misses and reaction time misses.

If this is not easy to visualize, then consider as another thought experiment if the target is standing against a wall to the left. The LG user will fire at the opposite end of the target, since he knows the target can't press himself any deeper into the wall, but might move away from it. Then, in the open space condition, you can pretend that an imaginary wall is traveling along with the target, in front of him. Since the target can't move any faster than his max speed, this imaginary wall doesn't affect his movement. But in this visualization, the target is still stuck against the imaginary wall, and the LG user will fire at the opposite end.
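If you'd rather check this numerically, here's a toy 1-D simulation of the thought experiment (my own made-up parameters, not Quake's actual movement numbers): the aimer tracks perfectly but responds to direction switches only after a fixed reaction delay.

```python
import random

# Toy 1-D model of the thought experiment above. The aimer tracks
# perfectly, but reacts to a direction switch only after a fixed delay.
def hit_fraction(aim_offset, reaction=0.200, width=1.0, speed=3.0,
                 switch_rate=1.0, dt=0.001, duration=2000.0):
    """aim_offset is measured from the target centre along the direction
    of travel: -width/2 = trailing edge, 0.0 = centreline."""
    rng = random.Random(1)
    since_switch, hits, steps = 1e9, 0, 0
    t = 0.0
    while t < duration:
        if rng.random() < switch_rate * dt:   # random direction reversal
            since_switch = 0.0
        # Until the aimer reacts, his crosshair keeps extrapolating the
        # old direction, drifting 2*speed*t past the actual target centre.
        drift = 2 * speed * since_switch if since_switch < reaction else 0.0
        hits += abs(drift + aim_offset) <= width / 2
        steps += 1
        since_switch += dt
        t += dt
    return hits / steps

for off, label in [(0.0, "centreline"), (-0.5, "trailing edge")]:
    print(f"{label}: {hit_fraction(off):.3f}")
# With these numbers the trailing edge wins; adding tracking noise moves
# the optimum slightly forward of the trailing edge, as argued above.
```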
spacediver wrote:Can you point to cases where my interpretation of the studies is "very poor"?
My issue with your bringing up startle reflexes, and then your interpretation of the research papers, is that you seem to be implying that such reaction times may be possible under normal computer conditions. It misses the point not to mention and emphasize the changed conditions, since that's the entire thesis of Pain and Hibbs: measuring where each component of latency comes from. They don't claim that their amazing reaction time numbers are achievable under Olympic conditions; they mention very carefully how they vary the input detection methods. They describe their wavelet detection in detail, how they validated it as non-spurious, and how it compares to the Olympic input system. Then they speculate on the differences. The Finnish paper neglects to stress this aspect out of mediocre scholarship, which is how it arrives at its recommendation to revise the limit to 80 ms. The 80 ms suggestion is unsupported, since the same runners who perform so well in their tests are no better than the rest under field conditions, and the paper doesn't address this. The most it does is suggest that race detection switch to high-speed cameras.

Some results are able to push our understanding of the limits of human reaction time at a computer, by showing better and better results. However, neither research paper contributes to this limit, as their conditions are too different.
spacediver wrote:I'm not familiar with Mathematica, nor do I have the software, so I can't comment much on your code. On the surface, it looks like it might be correct, but I'm not familiar enough with how to properly combine two distributions in Mathematica, so I can't say for sure whether it's sound.

On the other hand, I can't see an error in my simulation, and the results are clear and consistent (you can download R for free and run the code yourself).
You are correct, my calculation neglects that the distribution is sampled twice, once for each player, instead of once. My bad.
My assumption of normality came from the Central Limit Theorem: a small, well-behaved uniform distribution added to a much larger normal one is approximately normal.

I read your code; it looks good. Other than neglecting individual variation in reaction times, the results in this section are fine. I think you should switch over to a probability-distribution method, but the simulation is a decent substitute.
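For reference, here's the kind of probability-distribution treatment I mean, using the population your article states (mean 170 ms, SD 20 ms) and sampling it twice, once per player. The lag values are arbitrary examples, and "winning" here simply means reacting first:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def win_probability(extra_lag_ms, sigma=20.0):
    # Both reaction times ~ N(170, sigma); the mean cancels in the
    # difference: A + lag < B  <=>  A - B < -lag, with
    # A - B ~ N(0, sigma * sqrt(2)) for independent draws.
    return normal_cdf(-extra_lag_ms, 0.0, sigma * sqrt(2.0))

for lag in (0, 5, 10, 20):
    print(f"{lag:2d} ms extra lag -> P(laggier player reacts first) = "
          f"{win_probability(lag):.3f}")
```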
Last edited by ad8e on 08 Jul 2019, 16:54, edited 1 time in total.

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: Interesting project about mouse/ gamepad latency

Post by spacediver » 08 Jul 2019, 16:50

Thanks for the thoughtful reply. I'll reply in detail soon, but am out of time for the next 16 hours or so.

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Interesting project about mouse/ gamepad latency

Post by Chief Blur Buster » 08 Jul 2019, 21:43

spacediver wrote:Thanks for the thoughtful reply. I'll reply in detail soon, but am out of time for the next 16 hours or so.
I appreciate the time you’re putting into this.

It’s a milestone when increasingly-cited research (even limited-sample research) generates thoughtful, vigorous debate in a respectful manner, and generates new ideas for follow-on research!

I’m out of time today too, will catch up on this thread tomorrow.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

open
Posts: 223
Joined: 02 Jul 2017, 20:46

Re: Interesting project about mouse/ gamepad latency

Post by open » 09 Jul 2019, 08:46

ad8e wrote: However, I think one of the reasons I ignored VR is that the low focus distance causes disorientation, not just latency.
One of the purposes of the lenses is to increase the focal distance of the screen. It is still relatively close, but a nearsighted friend still prefers glasses when using my Rift, so it's probably farther than most computer monitors.

From a few searches, it appears neither company goes out of its way to advertise focal distance. But the most common answer online is that the Vive is 1.3 m and the Rift is about 2 m.

There may be other headsets with longer distances. Based on some scattered answers, it seems some misinformation about the Vive circulated at one point claiming its focal distance was infinite.

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: Interesting project about mouse/ gamepad latency

Post by spacediver » 09 Jul 2019, 14:57

ad8e wrote: I'm not saying they are all cheaters. As do0om points out, there are players who have achieved it, including himself. A player whose true average is 160 ms can score 150 ms over five trials without much difficulty. The point is, if you know that the pool is tainted by cheaters up to 150 ms, it cannot be used to support your point. It doesn't matter whether your point is true (in fact, I even support it).
If my point was that the statistics borne out by the human benchmark data are to be trusted, then yes, I completely agree with you.

However, my point was this (see bolded part):
If you look at the distribution, you can see that some people are performing at around 150 ms. These are probably younger folks who have excellent reflexes and are on good hardware (it’s also possible that some people are using clever methods to cheat), but 150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus (at least when it comes to pressing buttons with a finger), although there may be a few people who can push this limit lower.
I don't need to have assurance that every single data point on that distribution is legit, in order to make this claim. All I need is some reasonable assurance that a subset are legit. Given that I know that flod has achieved scores in that range, it's highly unlikely that he is the only human who has done that test who has scored that low, and thus quite likely that a subset of those scores are legit.

Also note the use of the terms "seem to be" and "ballpark" - I have epistemically qualified my claims quite carefully.
ad8e wrote: The graph can be seen in the archive link I posted: https://web.archive.org/web/20170814021 ... ctiontime/
Yes, I've seen that graph (including yearly, monthly, and all-time), and I still don't see evidence that the ~150 ms scores are from cheaters. I only see the anomalous bump at ~100 ms.
ad8e wrote: You need evidence to show that it's possible. The claim doesn't stand without support. I'm saying your evidence is insufficient.
Evidence should be commensurate with the certitude of one's claims. I've provided evidence in support (flod's data, kukki's data from the manual tracking reflex experiment), and have appropriately qualified my claims with a conservative degree of certitude.

You, on the other hand, have not provided any evidence, and are stating your claims as fact, rather than qualifying them.

e.g.
ad8e wrote:The 150 ms results in their database are from cheating/guessing, there's no point in referencing them.
ad8e wrote:For visual reaction times with current monitor/mouse input tech, 160 ms average measurements are probably possible, and 140 ms averages are not.
Do you see the difference in the epistemic qualification between my claims and yours?
ad8e wrote:Flood achieving 140 ms could be huge luck from standard deviation, or guessing, or whatever. Bringing up a single trial is not good enough, though I don't doubt his honesty. But in this context of errors and standard deviation, using low-sample results seems irresponsible at best. A more pessimistic view would consider them misleading.
Again, it depends on how you interpret the data. 140 ms could have been luck, although this was the average of 5 trials, so that's unlikely, especially considering that he regularly gets below 160 ms. And I don't see a problem with reporting the data, so long as it is done with integrity. And even if four out of five of those trials were luck, and only one was legit, that doesn't matter. The article is talking about the extreme limits of human performance.

Also remember, the heart of this article is the manual tracking reflex, where I measured visual reaction times of below 100 ms (albeit with the potential methodological limitations that I spelled out), and if we assign a degree of confidence to that evidence (the sub 100 ms reaction times), then it can inform our assessment on the probability of the 140 ms scores being fluke or not. Section 1 of the article shouldn't be viewed in a vacuum.

ad8e wrote:If you care about reaction time tests, we have an even better candidate in this thread, do0om, whose reaction times/equipment seem even better than flood's. I can supply him with a top-tier reaction test, if he's willing to do the trials.
I encourage you to perform these tests!
ad8e wrote: It's not 103.7 ms; the numbers on the top left are in 60ths. I previously linked to an 8:53 count, which is faster. The trial you mentioned is actually 10/60 + 37/60/60 = 177 ms, which is pretty good under CS:GO conditions but not amazing for a single trial.
In that trial, I see a reaction time of 89.3 ms (180 ms - 90.7 ms). I'm not following your logic - can you spell it out in more detail?
spacediver wrote:I explain in the article that the original footage is at 100 fps. I counted the frames myself in the raw video. You seem to have incorrectly assumed that I didn't have access to this 100 fps video.
ad8e wrote:The article needs to explicitly mention that the youtube video drops frames and that you didn't use it; otherwise, your method appears different from what you present.
I thought it was fairly clear that I had counted the frames from the original footage, else it would have been redundant to mention that the original footage was captured at 100 frames per second. Wouldn't have hurt to mention that, and that youtube doesn't support 100 fps, but I don't see this omission as a major issue. I saved the detailed methodological exposition for part 2 in the manual tracking reflex.
ad8e wrote: Here's the full quote from the article for context:
The reaction times in these events are measured with force transducers in the starting blocks, which measure the force produced by the legs. The loud noise signal in a sprint race can produce a startle response, and the IAAF limit of 100 ms seems to be based on assumptions that humans cannot react (with their legs) faster than 100 ms to an auditory signal.

However, it’s not clear whether these assumptions take into account two important facts: First, a startle response improves reaction times, and second, there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time. It turns out that this 100 ms limit is based on wrong assumptions.
Wrong, because there isn't the slightest indication that the startle response has any ability to improve racers' reaction times. And the 100 ms number shows that the limit already takes into account any possible super-human freaks: otherwise, we'd see it bumped up to about 120 ms, since that is below any racer's fastest start.
1) Why do you say there isn't the slightest indication that the startle response has any ability to improve racers' reaction times? Given what we know about the startle reflex, isn't it reasonable to suppose that it can be harnessed by athletes in race conditions? There is certainly strong evidence: https://www.ncbi.nlm.nih.gov/pubmed/18460990 (full text here: https://sites.ualberta.ca/~dcollins/Art ... wn2008.pdf )

2) Even if the startle response isn't involved at all with sprint racing, and even if the startle response wasn't involved in the sub 100 ms reaction times in Pain & Hibbs, or the Finnish study, the important finding is that there is compelling evidence that people are capable of sub 100 ms reaction times.
ad8e wrote: Some people are under the impression that 100 ms is a reasonable threshold for reaction time (this even came up once in this thread), when in fact it has a hugely generous tolerance for the conditions sprinters are under.
One needs to take into account the consequences of an unfair disqualification. Generally, as a society, we assign more moral weight to avoiding a false disqualification (or a false conviction) than to letting a fluke get by. Hence, our thresholds should reflect this and err on the side of lenience. The data from this study are an important piece of evidence when deciding where to draw this threshold (and, of course, this threshold should depend on the force thresholds being used to detect the start).
ad8e wrote: Those research articles are real, and there is no indication that their results are either outliers or false, with respect to the methodology they state. I think they should be taken seriously as a true product of the described methodology.

However, I think your interpretation is poor because you are pulling out the fastest times and fastest averages from the Finnish article, when we already know that such times are often tainted by guessing. Furthermore, the methodology in that article is under suspicion for its possibly low trial count (they obscure their procedure: how many trials they ran, and how they threw away trials). That means the fastest times and averages are the lowest-quality data in the article, and an interpretation should not rely on them, or even mention them. The data as a whole are more reliable and worth bringing up in a summary. The fastest numbers, both the ones in this paper and the others you bring up, are not subject to just the normal cautious interpretation. They must be subject to even greater scrutiny than usual, because we can see the instances of guessing (and fraud in the human benchmark) very clearly.
I do see your point, but again, I do not see a problem with reporting data with integrity. I made it clear in my exposition of their results that they did not provide detailed methodologies. Heck, I even mentioned the possibility that the outliers were a fluke:
Even if we are conservative and assume that the fastest times for some of these sprinters was based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it’s not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms.
(this of course depends on the force thresholds being used at any particular race. If, for example, the force thresholds used in any given event are high enough, 100 ms may be too low!)
ad8e wrote: The theoretical limit comes from research articles; Pain and Hibbs derive one. They sum up the minimum possible latency they believe each part of the chain has (ear to brain, brain processing, peripheral nervous system, muscle activation, and others) and arrive at a very low number that bounds minimum latency. The additional latency seen in reaction tests comes from unexplained factors, which then become the object of study.
Fair, and this certainly does lend evidence that those sub-50 ms scores were flukes. I would, however, be curious to learn more about the 50 ms bottleneck from brainstem to muscles - i.e. how consistent it is, how much individual variation there is, and how much faster arms respond than legs.
ad8e wrote:Why the rear is optimal: consider as a thought experiment, someone with perfect mouse tracking but with 200 ms fixed reaction time. He aims at the trailing end of the target. As long as the target keeps moving in the same direction, he won't miss the target from the trailing end. But when the target switches directions, now the entire target passes through his LG beam before he starts missing from the new trailing end, rather than just half the target if he aimed at the centerline. So he has lessened his misses from the leading end without compromising his misses from the trailing end. When mouse tracking error is added back to the picture, the optimal aim point is forward of the trailing end by the amount that minimizes the sum of tracking misses and reaction time misses.

If this is not easy to visualize, then consider as another thought experiment if the target is standing against a wall to the left. The LG user will fire at the opposite end of the target, since he knows the target can't press himself any deeper into the wall, but might move away from it. Then, in the open space condition, you can pretend that an imaginary wall is traveling along with the target, in front of him. Since the target can't move any faster than his max speed, this imaginary wall doesn't affect his movement. But in this visualization, the target is still stuck against the imaginary wall, and the LG user will fire at the opposite end.
In real life situations, targets that are dodging don't just move left and right, but forwards and backwards. So the horizontal speed of the target, as it appears to the aimer, varies depending upon how far away it is (i.e. a closer target will traverse more of the aimer's visual field per second of lateral movement, and thus the aimer will have to track faster).

Also, in real life situations, targets use all manner of dekes, jukes, jitter steps, etc. to throw off the aimer, and in those situations, aiming at the centre can confer an advantage, since there's no guarantee that the target will continue to pass through the beam.

And even if it turns out that people tend to aim at the trailing edge, I disagree that the simulation is bogus and should be retracted. The conditions are clearly spelled out, and one can extrapolate and iterate from there. It has value in illustrating how display latency can affect performance, under the specified conditions. Clearly, the specified conditions are not a perfect match to reality, and there is always a gap.
ad8e wrote: My issue with your bringing up startle reflexes, and then your interpretation of the research papers, is that you seem to be implying that such reaction times may be possible under normal computer conditions.
Well, I addressed the rationale for bringing up startle reflexes, and you seemed to find it justified (see quote below).
ad8e wrote:Ok, I'll accept that. That is a valid justification for discussing the startle reflex. There's no problem with clearly marked speculation.
I think you are unfairly characterizing my interpretation of data, and making some assumptions about my intentions. Throughout the piece, I've been careful about describing the data and the involved methodology, and have also been very careful to epistemically qualify my claims.

And I'm not sure if you read part 2 of the article, where I measured Kukki's reaction times, but they form an important part of this piece (indeed, its heart), and place the research I discuss in part 1 in context.
ad8e wrote: You are correct, my calculation neglects that the distribution is sampled twice, once for each player, instead of once. My bad.
Yea, I was wondering if you had been using a mixture model instead of adding the random variables.
ad8e wrote: I read your code; it looks good. Other than neglecting individual variation in reaction times, the results in this section are fine. I think you should switch over to a probability-distribution method, but the simulation is a decent substitute.
Agreed that probability distributions are better (when possible, and when one has the statistical know-how!). Curious why you think I neglected individual variation in reaction times - those were drawn from a normal distribution (with a non-zero standard deviation) in my simulation.

ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 09 Jul 2019, 18:02

spacediver wrote:I don't need to have assurance that every single data point on that distribution is legit, in order to make this claim. All I need is some reasonable assurance that a subset are legit. Given that I know that flod has achieved scores in that range, it's highly unlikely that he is the only human who has done that test who has scored that low, and thus quite likely that a subset of those scores are legit.
Then you would be citing flood's results, not the humanbenchmark results. My interpretation of the humanbenchmark data on its own, without referencing either flood's or do0om's results, is that there is a good chance (>50%) that every single datapoint in the 140-150 ms ballpark is fraudulent or tainted by guessing, and a 25% chance that the frauds extend all the way up to 155 ms. Of course, we know from outside the test that multiple people have achieved under 150 ms on it, but that knowledge comes from outside, not from the database itself.
spacediver wrote:Yes, I've seen that graph (including yearly, monthly, and all-time), and I still don't see evidence that the ~150 ms scores are from cheaters. I only see the anomalous bump at ~100 ms.
Looking at the table of records from 110-140 ms, we can estimate the background noise of fraud. This background noise is strong enough to significantly interfere with the signal at 150 ms, and knowing the potential of guesses, it is not just possible but certain that the lower end of the distribution is tainted. When you try to support your claim with humanbenchmark results, you are implying that they are valid support, even if only weak support. Despite placing weak qualifications on this claim, you are giving the humanbenchmark results some validity, where the validity is zero. Flood's single datapoint of 140 ms has a small measure of validity, but the humanbenchmark results don't. I think your qualifications were far too weak: you should have pointed out the high level of fraud directly, not just said that "it’s also possible that some people are using clever methods to cheat". And that these were "probably younger folks who have excellent reflexes and are on good hardware": no, the evidence doesn't indicate that it is probable.
spacediver wrote:Evidence should be commensurate with the certitude of one's claims. I've provided evidence in support (flod's data, kukki's data from the manual tracking reflex experiment), and have appropriately qualified my claims with a conservative degree of certitude.
I felt your qualifications of your claims were wholly insufficient. The evidence you provided didn't support more than the barest speculation at 150 ms, the kind that might appear in a chat room, but not in a forum post and definitely not in an article.
spacediver wrote:You, on the other hand, have not provided any evidence, and are stating your claims as fact, rather than qualifying them.

e.g.
ad8e wrote:The 150 ms results in their database are from cheating/guessing, there's no point in referencing them.
That's because I am only attacking your use of that evidence. It's enough for me to cast sufficient doubt that your interpretation can't be accepted. I should have worded that better; I did not believe for sure that all their 150 ms results were bad, but I knew that many of them were bad, enough to make that part of the data useless.
spacediver wrote:
ad8e wrote:For visual reaction times with current monitor/mouse input tech, 160 ms average measurements are probably possible, and 140 ms averages are not.
The 160 ms was pure speculation, clearly unbacked by evidence; it is not an actual claim that I expect to hold up under scrutiny, the way your article's claims are meant to. I mentioned it to illustrate that even though the numbers you pull from the evidence are not right, a more justified reading may still support a more conservative number that is not too far away.

The 140 ms number is actually supported to a limited extent by the humanbenchmark numbers, since there is no signal there, not even from guessing, that rises above the background noise of fraud. Since we know that the bottom end is tainted by guessing, we would expect the signal to reach below 140 ms if people had achieved it.
spacediver wrote:
ad8e wrote:Flood achieving 140 ms could be huge luck from standard deviation, or guessing, or whatever. Bringing up a single trial is not good enough, though I don't doubt his honesty. But in this context of errors and standard deviation, using low-sample results seems irresponsible at best. A more pessimistic view would consider them misleading.
Again, it depends on how you interpret the data. 140 ms could have been luck, although this was the average of 5 trials, so that's unlikely, especially considering that he regularly gets below 160 ms. And I don't see a problem with reporting the data, so long as it is done with integrity. And even if four out of five of those trials were luck, and only one was legit, that doesn't matter. The article is talking about the extreme limits of human performance.
Extreme limits must be a bound on the person, not a bound on the measurement error. Individual humans have trial-to-trial variation in their reaction times, and here are some results from a single person (me) on a different reaction test:

195 220 190 198 216 198 377 185 217 222 187 202 227 212 231 231 209 213 221 196 193 214 202 231 205 200 214 199 215 246 231 363 211 229 221 211 303 215 200 206

My average is 211 ms over hundreds of results, after throwing away the tail of >300 ms. With this level of variance and sufficiently many trials, it seems very likely that I could find a string of 5 numbers averaging 195. (Anecdotally, a majority of my >300 ms results come from blinking at the moment the test changes colors.)

For flood's 140 ms result, "unlikely" is not the right word to describe its chance of being a fluke, because it takes only one 100 ms result to change a 160 ms average to 148 ms, or lower if it replaces the right trial.
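Both claims are easy to check directly from the numbers above:

```python
# The forty trials listed above, plus the 160 -> 148 arithmetic.
trials = [195, 220, 190, 198, 216, 198, 377, 185, 217, 222, 187, 202,
          227, 212, 231, 231, 209, 213, 221, 196, 193, 214, 202, 231,
          205, 200, 214, 199, 215, 246, 231, 363, 211, 229, 221, 211,
          303, 215, 200, 206]

# Luckiest consecutive 5-trial window in this particular run:
windows = [trials[i:i + 5] for i in range(len(trials) - 4)]
best = min(windows, key=lambda w: sum(w) / 5)
print(best, sum(best) / 5)

# One 100 ms trial among four 160 ms trials drags the average to 148:
print((4 * 160 + 100) / 5)  # 148.0
```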

As I mentioned in response to the humanbenchmark section, reporting data is not always acceptable, even when done with integrity. By putting it in your article, you automatically imply that it is to be taken seriously. Unless you place the most extreme qualifications on it, repeatedly stressing how unreliable it is, a reader will take it as valid support. You only mention the possible raw issues, without the proper severe warnings and an explanation of how to interpret them.

Most readers are unable to judge methodology or potential issues, so anything that goes into the article without that interpretation carries your stamp of approval. Without third-party interpretation, the reader must rely on his own ability to judge its worthiness, and this ability is rare, requiring at least a year of formal training. To indicate the depth of this issue: you are surely better at reading data than most of your readers, but even your own ability to judge methodology is quite poor. So most of your readers will walk away with the idea that 150 ms is possible (and it is!), but they will also walk away with the idea that the evidence you provided is a good indication that 150 ms is possible (which it is not).

As for a reader who does have the ability to analyze methodology, such as me, an overly optimistic interpretation leaves the impression that you are trying to get away with something.

Yours is not a research article, and should not be held up to that standard. But when an academic reads research articles, the discussion sections are supposed to give full interpretation and qualification of any possible issues, such that the reader's interpretation of the results and errors should be equal to the authors'. If the reader can spot unmentioned problems, that is not good. (Almost no research article can reach this standard; any decent academic can read an article in his field and find some problems with the interpretation. It's only an ideal to aspire to.) While your article shouldn't need to withstand the same scrutiny, it should cover interpretation to a reasonable degree, especially since your readers are not able to do the interpretation for themselves; they are not academics.
spacediver wrote:Also remember, the heart of this article is the manual tracking reflex, where I measured visual reaction times of below 100 ms (albeit with the potential methodological limitations that I spelled out), and if we assign a degree of confidence to that evidence (the sub 100 ms reaction times), then it can inform our assessment on the probability of the 140 ms scores being fluke or not. Section 1 of the article shouldn't be viewed in a vacuum.
Sure.
spacediver wrote:In that trial, I see a reaction time of 89.3 ms (180 ms - 90.7 ms). I'm not following your logic - can you spell it out in more detail?
When it shows 10:20:30, that means the time in seconds is 10 + 20/60 + 30/60/60. It's base-60, not base-100. It's very misleading, and Linus and his team should not have presented it this way.
spacediver wrote:1) Why do you say there isn't the slightest indication that the startle response has any ability to improve racers' reaction times? Given what we know about the startle reflex, isn't it reasonable to suppose that it can be harnessed by athletes in race conditions? There is certainly strong evidence: https://www.ncbi.nlm.nih.gov/pubmed/18460990 (full text here: https://sites.ualberta.ca/~dcollins/Art ... wn2008.pdf )
Nice article; I wasn't aware of it. I'll revise my opinion to: there is a chance that the startle response may improve racers' reaction times, but the evidence does not yet support this connection, and it should be studied more. Here's why I don't think that article is good enough to establish the connection, only to suggest it for further study:
1. We know that reaction times are right-skewed. They don't mention this. (Or at least, I know that my personal visual reaction times are right-skewed, while they measure audio reaction times on people who react 30 ms faster than I do.)
2. Checking their statistical methods, each of their tools seems to assume normality, although I'm not confident enough in my statistics to be sure of this.
3. They don't show the distributions either. They should have, given that they have enough samples, and because these three things combine into one important issue:
The issue is that when testing reaction times, the right tail has an outsized effect on the average. The article did not cut off the tail, and the improvement in startle times is consistent with merely cutting off the right tail.

In this alternative interpretation, a fast reaction can come from the same source as a startle response, while a slow reaction cannot. Hence a startle response wouldn't improve the best or normal times, because its presence is only correlated with not failing. This is only an interpretation, and there's no reason to believe it's the right one, but it needs to be ruled out before the other interpretation can be considered supported by evidence. Graphing the distributions would have gone a long way toward eliminating this interpretation, even with their low sample size (23 for startle reactions).
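A quick illustration of the confound (the distribution shape and parameters are my own assumptions, not the paper's data):

```python
import random

# Right-skewed reaction times: a normal core plus an exponential tail.
rng = random.Random(0)
times = [rng.gauss(190, 15) + rng.expovariate(1 / 30) for _ in range(10_000)]

full_mean = sum(times) / len(times)
trimmed = [t for t in times if t <= 260]      # drop the slow right tail
print(f"mean with tail:     {full_mean:.1f} ms")
print(f"mean, tail removed: {sum(trimmed) / len(trimmed):.1f} ms")
# If startle trials merely fail less often (fewer tail events), their
# average drops even when the fast responses are unchanged.
```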

(Also, holy crap, I hope nobody uses 120 dB starter noises next to athletes' ears, like the authors did to elicit those startle responses. That will cause hearing damage.)
spacediver wrote: 2) Even if the startle response isn't involved at all with sprint racing, and even if the startle response wasn't involved in the sub 100 ms reaction times in Pain & Hibbs, or the Finnish study, the important finding is that there is compelling evidence that people are capable of sub 100 ms reaction times.
That's right, and the articles you cite support that. Just don't mention the other numbers you did; those are bad.

You can even just say that they support reaction times under 100 ms, without needing to say why you arrived at that number. It's your privilege to give your interpretation without giving the rationale behind it, as long as the interpretation is justified by the evidence you cite, and mentions the relevant factors like input methods.

EDIT: I changed my mind. Although this works in research articles, it requires some authority to do. Otherwise, if you don't show your work and the audience doesn't believe you, then instead of trusting your interpretation, the audience is more likely to just write you off. So it's good in this setting to lay things out.
spacediver wrote:One needs to take into account the consequences of an unfair disqualification. Generally, as a society, we assign more moral weight to avoiding a false disqualification (or a false conviction) than to letting a fluke get by. Hence, our thresholds should reflect this and err on the side of lenience. The data from this study are an important piece of evidence when deciding where to draw this threshold (and, of course, this threshold should depend on the force thresholds being used to detect the start).
Yes, I agree with your statement on false disqualifications. The 100 ms current limit reflects that too. (I think maybe the IAAF were worried about different technologies improving reaction times over time, technologies either in measurement or in actual reaction performance. Those performance improvements didn't materialize, and the IAAF hasn't bothered to implement anything like high speed cameras, a decision I agree with.)
spacediver wrote:I do see your point, but again, I do not see a problem with reporting data with integrity. I made it clear in my exposition of their results that they did not provide detailed methodologies. Heck, I even mentioned the possibility that the outliers were a fluke:
Even if we are conservative and assume that the fastest times for some of these sprinters was based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it’s not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms.
(this of course depends on the force thresholds being used at any particular race. If, for example, the force thresholds used in any given event are high enough, 100 ms may be too low!)
Sure, just mention the input changes and how they are the main cause of the 80 ms figure. I also felt that your citing their strong 80 ms recommendation was improper, because it concealed these input issues with the paper and presented that 80 ms number as the conclusion to take away. But for the Finnish paper, it was unrealistic to expect you to analyze it carefully enough to spot this issue. Even the authors themselves were not aware of it.
spacediver wrote:Fair, and this certainly does lend evidence that those sub-50 ms scores were flukes. I would, however, be curious to learn more about the 50 ms bottleneck from brainstem to muscles - i.e. how consistent it is, how much individual variation there is, and how much faster arms respond than legs.
Pain and Hibbs give some ranges, and I think they mention arms vs legs. To find the origin of their numbers, you'd have to trawl through their references, which would require some background in experiments and statistics. Pain and Hibbs's scholarship seems decent, so I'll accept their interpretation of the numbers until there's a reason to inspect it. I vaguely remember the Finnish paper also has some numbers, but its scholarship is inferior overall and hard to trust. In my experience, a layer of third-party interpretation helps laymen a great deal in getting a proper understanding.
spacediver wrote:In real life situations, targets that are dodging don't just move left and right, but forwards and backwards. So the horizontal speed of the target, as it appears to the aimer, varies depending upon how far away it is (i.e. a closer target will traverse more of the aimer's visual field per second of lateral movement, and thus the aimer will have to track faster).

Also, in real life situations, targets use all manner of dekes, jukes, jitter steps, etc. to throw off the aimer, and in those situations, aiming at the centre can confer an advantage, since there's no guarantee that the target will continue to pass through the beam.

And even if it turns out that people tend to aim at the trailing edge, I disagree that the simulation is bogus and should be retracted. The conditions are clearly spelled out, and one can extrapolate and iterate from there. It has value in illustrating how display latency can affect performance, under the specified conditions. Clearly, the specified conditions are not a perfect match to reality, and there is always a gap.
Those extra juke styles are not important; they only improve LG accuracy against this optimal aim style. The distance to the target doesn't affect our theoretical model either. This aim style is optimal against every dodge pattern in Quake 3, absent patterns that can be abused after being read. (Note that there is also an optimal dodge style, although that's not too important here. It can be used to derive a maximum LG% against an optimal dodger, given a fixed reaction time.)

How many people in the world can spot that your assumptions are incorrect? Your partner didn't notice, and you didn't either. Other excellent aimers I've talked to are often unaware; they aim by "feel". So laying out the conditions doesn't help anyone if nobody can sufficiently judge them. The number of people who have both the experience to spot the centerline error and the background to analyze your analysis is probably just me. If everyone except me gets the wrong idea, that's bad.

For the other info in the section: I said that reaction time affecting LG accuracy is well-known, but I should retract that statement. I remembered that most teenagers I've talked to are not aware of this fact; they think that railgun accuracy below 100% is caused by human error and can be trained away with practice. So the section does have something going for it, which is telling them why this is not possible given an assumed minimum reaction time.

However, the LG numbers should still be removed, since they don't make sense. You can leave up the methodology and note the corrections (and also the AABB collision box), so that anyone trying this in the future will have a better starting point. The reason the numbers should be retracted is this: given the problems with the methods, can someone keep the caveats in mind and nevertheless learn something from the results? Here, the answer is no; nothing can be learned.
spacediver wrote:Well, I addressed the rationale for bringing up startle reflexes, and you seemed to find it justified (see quote below).
ad8e wrote:Ok, I'll accept that. That is a valid justification for discussing the startle reflex. There's no problem with clearly marked speculation.
I think you are unfairly characterizing my interpretation of data, and making some assumptions about my intentions. Throughout the piece, I've been careful about describing the data and the involved methodology, and have also been very careful to epistemically qualify my claims.

And I'm not sure if you read part 2 of the article, where I measured Kukki's reaction times, but they form an important part of this piece (indeed, its heart), and place the research I discuss in part 1 in context.
Maybe it's true that I'm making unwarranted assumptions about your intentions. When I read about startle reflexes, it felt like you were trying to imply that people could leverage startles usefully in games, without illustrating the drawbacks. If your goal was only to lend your kukkii section some plausibility, that's more reasonable, although the numbers in the kukkii section are wrong.

However, what I found was that your interpretations were all extremely optimistic and did not reflect a proper reading of the data. With so many mistakes unaddressed, and data with significant error selectively pulled out and emphasized, all in the same direction of advancing a lower reaction-time number, it does not look good.

I did read the kukkii section.
spacediver wrote:Agreed that probability distributions are better (when possible, and when one has the statistical know-how!). Curious why you think I neglected individual variation in reaction times - those were drawn from a normal distribution (with a non-zero standard deviation) in my simulation.
I read it again, and this is probably why:
The two players are randomly sampled from a population of elite players, and this elite population has a mean reaction time of 170 ms, and a standard deviation of 20 ms
That reads to me like the population has a 170 ms mean and a 20 ms stddev, with each player fixed at one reaction time. Actually, reading it right now, that still appears to be the correct reading, since it describes a population of players rather than a population of reaction times. You need to change the wording there.
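To make the ambiguity concrete, here are the two readings side by side (a toy sketch in my own framing; "winning" just means reacting first):

```python
import random

rng = random.Random(2)

# Reading A: each player's reaction time is FIXED, drawn once from the
# elite population N(170, 20); the trials themselves then never vary.
def win_rate_fixed_players(n=10_000, extra_lag=10.0):
    a, b = rng.gauss(170, 20), rng.gauss(170, 20)
    return sum(a + extra_lag < b for _ in range(n)) / n   # always 0 or 1

# Reading B: every trial redraws from N(170, 20), conflating population
# spread with trial-to-trial variability.
def win_rate_redrawn(n=10_000, extra_lag=10.0):
    return sum(rng.gauss(170, 20) + extra_lag < rng.gauss(170, 20)
               for _ in range(n)) / n

print(win_rate_fixed_players(), win_rate_redrawn())
```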

[This reply was written against an early version of your post and took a while to write; if you made any edits in the time between, let me know and I'll adjust it]

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: Interesting project about mouse/ gamepad latency

Post by spacediver » 09 Jul 2019, 19:42

ad8e wrote: Looking at the table of records from 110-140 ms, we can estimate the background noise of fraud. This background noise is strong enough to significantly interfere with the signal at 150 ms, and knowing the potential of guesses, it is not just possible but certain that the lower end of the distribution is tainted. When you try to support your claim with humanbenchmark results, you are implying that they are valid support, even if only weak support. Despite placing weak qualifications on this claim, you are giving the humanbenchmark results some validity, where the validity is zero. Flood's single datapoint of 140 ms has a small measure of validity, but the humanbenchmark results don't. I think your qualifications were far too weak: you should have pointed out the high level of fraud directly, not just said that "it’s also possible that some people are using clever methods to cheat". And that these were "probably younger folks who have excellent reflexes and are on good hardware": no, the evidence doesn't indicate that it is probable.
Ok, I see the table now - I hadn't seen it earlier; I was just looking at the histograms. Admittedly, the jump from 151 ms to 160 ms supports your point, but I'd be very curious to see a more complete data set rather than just the leaderboard table in the link you provided (unfortunately, the website no longer seems to provide any data).

ad8e wrote: The 160 ms was pure speculation, and clearly unbacked by evidence; it is not a claim that I expect to hold up under scrutiny, the way your article's claims are meant to. I mentioned it to illustrate that even though the numbers you pull from the evidence are not right, a more justified reading may still support a more conservative number that is not too far away.
Fair enough. After looking at that particular leaderboard, I'd be much more comfortable saying that the humanbenchmark data cautiously support a lower limit of 160 ms (and then going on to discuss other evidence that may push this down to 150).
ad8e wrote: The 140 ms number is actually supported, to a limited extent, by the humanbenchmark numbers, since no signal there, not even from guessing, rises above the background noise from fraud. Since we know that the bottom end is tainted by guessing, we would expect signal to reach below 140 ms if people had achieved it.
Good point, although this level of performance (combined with hardware that minimizes extra-neuromuscular latency) might be rare enough that it doesn't show up on humanbenchmark. But also, as I alluded to earlier, we'd need to see the entire data set - were you able to find an archived version of the all-time leaderboard?


ad8e wrote: For flood's 140 ms result, "unlikely" is not the right word to describe its chance of being a fluke, because it only takes one 100 ms result to change a 160 ms average to 148 ms ((4 × 160 + 100) / 5 = 148 over five trials), or lower if it replaces the right trial.
You're correct. I erred in my previous response. I'm gonna pull him in here actually, as I'm curious about that particular set of trials.
ad8e wrote: When it shows 10:20:30, that means the time is 10 + 20/60 + 30/60/60. It's base-60, not base-100. It's very misleading, and Linus and his team should not have done this.
Is this the format of the phantom camera output?
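If so, the two readings diverge noticeably. A quick sketch of the difference (hypothetical parse helpers; the unit of the first field is whatever the camera uses):

Code:
def parse_base60(ts):
    """A:B:C -> A + B/60 + C/3600, in units of the first field."""
    a, b, c = (int(x) for x in ts.split(":"))
    return a + b / 60 + c / 3600

def parse_base100(ts):
    """The naive decimal misreading: A.BC..."""
    a, b, c = (int(x) for x in ts.split(":"))
    return a + b / 100 + c / 10000

print(parse_base60("10:20:30"))   # 10.3417
print(parse_base100("10:20:30"))  # 10.203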
ad8e wrote: Sure, just mention the input changes and how they are the main cause of the 80 ms. I also felt that your citing of their strong 80 ms recommendation was improper, because it concealed these input issues with the paper and then seemed to present 80 ms as the conclusion to take away. But for the Finnish paper, it was unrealistic to expect you to analyze it carefully enough to spot this issue. Even the authors themselves were not aware of it.
That's fair, and I accept this.
spacediver wrote:In real life situations, targets that are dodging don't just move left and right, but forwards and backwards. So the horizontal speed of the target, as it appears to the aimer, varies depending upon how far away it is (i.e. a closer target will traverse more of the aimer's visual field per second of lateral movement, and thus the aimer will have to track faster).

Also, in real life situations, targets use all manner of dekes, jukes, jitter steps, etc. to throw off the aimer, and in those situations, aiming at the centre can confer an advantage, since there's no guarantee that the target will continue to pass through the beam.

And even if it turns out that people tend to aim at the trailing edge, I disagree that the simulation is bogus and should be retracted. The conditions are clearly spelled out, and one can extrapolate and iterate from there. It has value in illustrating how display latency can affect performance, under the specified conditions. Clearly, the specified conditions are not a perfect match to reality, and there is always a gap.
ad8e wrote:Those extra juke styles are not important; they only improve LG accuracy against this optimal aim style. The distance to the target doesn't affect our theoretical model either. This aim style is optimal against every dodge pattern in Quake 3, absent patterns which can be abused after being read. (Note that there is also an optimal dodge style, although that's not too important here. It can be used to give a max LG% against an optimal dodger, given a fixed reaction time.)
I'll try to reflect on this a bit more to convince myself that aiming at the trailing edge is correct. As for the optimal dodge style, I'm guessing you're referring to a memoryless probability distribution (e.g. exponential).

ad8e wrote:How many people in the world can spot that your assumptions are incorrect? Your partner didn't notice it, and you didn't either. Other excellent aimers I've talked to are often unaware; they aim by "feel". So laying out the conditions doesn't help anyone if nobody can judge them adequately. The number of people who have both the experience to spot the centerline error and the background to analyze your analysis is probably just me. If everyone except me gets the wrong idea, that's bad.
ad8e wrote:However, the LG numbers should still be removed, since they don't make sense. You can leave up the methodology and note the corrections (and also the AABB collision box), so that anyone trying this in the future will have a better starting point. The test for whether the numbers should be retracted is: given the problems with the methods, can someone keep the caveats in mind and nevertheless learn something from the results? Here, the answer is no; nothing can be learned.
I think the point of disagreement between us here is that the assumption being "incorrect" is not that important in this situation. I was modeling a well-specified situation under very simple conditions, and made no pretense that it fully captured what actually happens (i.e. nobody dodges in the same way that the dodger in my simulation does). The point I was trying to drive home was that milliseconds do matter - hell, even nanoseconds matter, though you'd probably have to accumulate billions of years of gaming with a nanosecond advantage to have anything of substance to show for it. Yet many folks think that if x ms is below our detection threshold, it has no impact.
ad8e wrote: Maybe it's true that I'm making unwarranted assumptions about your intentions. When I read the startle-reflex material, it felt like you were trying to imply that people could leverage startle responses in games, without illustrating the drawbacks. If your goal was only to lend your kukkii section some plausibility, that's more reasonable, although the numbers in the kukkii section are wrong.
No, I was not making that implication. In fact, that's why I discussed the ideomotor phenomenon in that section.

What was wrong about the numbers in that section?
ad8e wrote: I read it again, and this is probably why:
The two players are randomly sampled from a population of elite players, and this elite population has a mean reaction time of 170 ms, and a standard deviation of 20 ms
That reads to me as saying the population has a 170 ms mean and a 20 ms stddev, with each player fixed at a single reaction time. Actually, reading it again right now, that still appears to be the correct interpretation, since it is a population of players rather than a population of reaction times. You need to change the wording there.
You're correct; it is improperly worded.

ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 09 Jul 2019, 20:30

In the archive link I gave for humanbenchmark, you can click the back/forward arrows to see snapshots of the table at different times. This gives more info on the top times, but it's still incomplete.

I dunno about Linus's phantom camera, but you weren't the only one fooled by it. Chief Blur Buster was too. So was I, but I got suspicious very quickly after seeing non-skilled players pull off world-beating records, and caught the discrepancy. 160 ms may be a really good reaction time on tests, but 180 ms is a really good reaction time in CS:GO, from the limited data I've seen.

For the optimal dodge style, I am not sure if it's exponential, only that mathematical theorems prove that it exists (with no information on what it might be). It should be reasonably easy for a mathematician to figure out, since the aimer is memoryless too, but I haven't tried.
EDIT: the aimer is not memoryless

If your goal is only to demonstrate that reaction time drives LG accuracy, then it would be better to build a theoretical model. Pick a dodger model; any model is fine, it doesn't need to be optimal. Then apply the optimal aimer against it, and you get a graph of how increased reaction time progressively degrades the best possible LG%.
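Something like this minimal sketch is all I have in mind (every constant is made up, and a naive delayed tracker stands in for the optimal aimer, so the outputs are illustrative only):

Code:
import numpy as np

rng = np.random.default_rng(1)

DT = 0.001         # 1 ms simulation tick
N = 60_000         # 60 s of simulated fight
SPEED = 320.0      # dodger strafe speed, units/s (made up)
MEAN_FLIP = 0.35   # mean time between direction flips, s (made up)
HALF_WIDTH = 15.0  # hitbox half-width, units (made up)

# Memoryless dodger: flip strafe direction at exponential intervals.
direction = np.empty(N)
d, t, t_next = 1.0, 0.0, rng.exponential(MEAN_FLIP)
for i in range(N):
    if t >= t_next:
        d, t_next = -d, t_next + rng.exponential(MEAN_FLIP)
    direction[i] = d
    t += DT
pos = np.cumsum(direction) * SPEED * DT

# The tracker aims where the target was react_ms ago (one tick = 1 ms).
for react_ms in (0, 50, 100, 150, 200):
    aim = np.roll(pos, react_ms)
    aim[:react_ms] = pos[0]
    hit = np.abs(aim - pos) <= HALF_WIDTH
    print(f"{react_ms:3d} ms reaction -> {100 * hit.mean():.1f}% LG")

Swap in a better aimer and the same harness gives you the degradation curve directly.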

The numbers I'm claiming are wrong in kukkii's section are the ones you pull out by measuring the offset between two curves. So the four numbers (112.5 ms - 87.5 ms) are not valid. This was what I considered to be the principal conclusion of the section, the part that should be retracted. And its implication, the existence of manual tracking, is equally unsupported.

I actually misread the method in that section, assuming that you were cross-correlating the positions, since that is the method that makes sense if we assume players aim at centerlines. The method you use, cross-correlating acceleration, is worse, since it doesn't make sense even under that assumption. So you can consider the method wrong whether or not the optimal aimer is understood. But in any case, I don't see any value in pursuing this method, since the caveats are too great, and a theoretical model provides clearer results.
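To be clear, the cross-correlation machinery itself is fine; the peak of a cross-correlation recovers a lag cleanly on synthetic data. The dispute is about which signals you feed it. A minimal sketch (my own, on a synthetic signal, not your traces):

Code:
import numpy as np

def estimate_lag(target, mouse, dt):
    # Lag (s) at which `mouse` best matches `target`, from the peak of the
    # full cross-correlation. Positive means the mouse trails the target.
    t = target - target.mean()
    m = mouse - mouse.mean()
    shift = np.argmax(np.correlate(m, t, mode="full")) - (len(t) - 1)
    return shift * dt

dt = 0.001  # 1 kHz sampling
ts = np.arange(0, 5, dt)
target = np.sin(2 * np.pi * 0.8 * ts)   # a 0.8 Hz strafe pattern
noise = 0.05 * np.random.default_rng(2).normal(size=ts.size)
mouse = np.roll(target, 100) + noise    # tracked 100 ms late
print(estimate_lag(target, mouse, dt))  # ~0.1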

In summary, that section is ok except for the part between these two sentences, which should be removed:
"Now we just need to figure out a way to measure the offset between these two curves"
...
"This marked difference in performance between the two tasks is suggestive of two distinct underlying neural mechanisms."

Audio vs. visual reaction time is mentioned there and can be extracted; it's an important thing to know.
Last edited by ad8e on 09 Jul 2019, 21:21, edited 1 time in total.

ad8e
Posts: 68
Joined: 18 Sep 2018, 00:29

Re: Interesting project about mouse/ gamepad latency

Post by ad8e » 09 Jul 2019, 21:03

I forgot about something: when I describe the optimal aim as following the trailing edge, the aimer might still want to double back in the opposite direction on occasion, even without seeing the dodger move that way. The trailing edge is only a reference, and also a bound that shouldn't be crossed.

spacediver
Posts: 505
Joined: 18 Dec 2013, 23:51

Re: Interesting project about mouse/ gamepad latency

Post by spacediver » 09 Jul 2019, 21:24

ad8e wrote: I dunno about Linus's phantom camera, but you weren't the only one fooled by it. Chief Blur Buster was too. So was I, but I got suspicious very quickly after seeing non-skilled players pull off world-beating records, and caught the discrepancy. 160 ms may be a really good reaction time on tests, but 180 ms is a really good reaction time in CS:GO, from the limited data I've seen.
But I mean, how do you know that the format is base-60?
ad8e wrote: For the optimal dodge style, I am not sure if it's exponential, only that mathematical theorems prove that it exists (with no information on what it might be). It should be reasonably easy for a mathematician to figure out, since the aimer is memoryless too, but I haven't tried.
Flod pointed this out to me during the project, when I was trying to brainstorm follow-up studies and figure out how to build an optimal dodger whose patterns were unpredictable. Assuming we're only concerned with left-right dodging, and assuming that the dodging is not reactive, the argument is that the random variable determining when, in time, the next dodge occurs should be an exponential one. I haven't studied the proof that deeply, but this means that the elapsed time since the last dodge contains no information about when the next dodge will occur (a uniform distribution, for example, would contain information: since the interval over which it is defined is finite, the more time that elapses between dodges, the more likely it is that a dodge is about to occur).
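A quick numerical illustration of that memoryless property (my own sketch; the 300 ms mean dodge interval is made up):

Code:
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
expo = rng.exponential(0.3, n)    # mean 300 ms between dodges (made up)
unif = rng.uniform(0.0, 0.6, n)   # same mean, but finite support

def p_dodge_still_far(samples, s, t=0.1):
    # P(T > s + t | T > s): chance the next dodge is still more than
    # t seconds away, given s seconds have already passed without one.
    alive = samples[samples > s]
    return (alive > s + t).mean()

for s in (0.0, 0.2, 0.4):
    print(f"s={s}: exp={p_dodge_still_far(expo, s):.3f}  "
          f"unif={p_dodge_still_far(unif, s):.3f}")

The exponential column stays at ~0.72 no matter how long you've waited, while the uniform column falls as s grows - i.e. with a uniform distribution, the elapsed time leaks information about when the next dodge is coming.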
ad8e wrote:If your goal is only to demonstrate that reaction time drives LG accuracy, then it would be better to build a theoretical model. Pick a dodger model; any model is fine, it doesn't need to be optimal. Then apply the optimal aimer against it, and you get a graph of how increased reaction time progressively degrades the best possible LG%.
Yep, this would be a cool exercise to run (although it may not necessarily map to the real world).
ad8e wrote: The numbers I'm claiming are wrong in kukkii's section are the ones you pull out by measuring the offset between two curves. So the four numbers (112.5 ms - 87.5 ms) are not valid. This was what I considered to be the principal conclusion of the section, the part that should be retracted. And its implication, the existence of manual tracking, is equally unsupported.

I actually misread the method in that section, assuming that you were cross-correlating the positions, since that is the method that makes sense if we assume players aim at centerlines. The method you use, cross-correlating acceleration, is worse, since it doesn't make sense even under that assumption. So you can consider the method wrong whether or not the optimal aimer is understood. But in any case, I don't see any value in pursuing this method, since the caveats are too great, and a theoretical model provides clearer results.
I'm not following your reasoning. Not saying you don't have a point; I simply am not able to understand it. Did you understand the reasoning behind my not using position waveforms?
