The 2 ms just-discernible latency on screen becomes a 2 ms significant problem in VR, so you're right that milliseconds really matter in that setting. However, I think one of the reasons I ignored VR was that the fixed, short focus distance causes disorientation, not latency alone. Maybe this will be fixed in the future (back to 3D glasses?), but with head-mounted systems, you'd need something like an adjustable lens to change the focus of the screen.
I'm not saying they are all cheaters. As do0om points out, there are players who have achieved it, including himself. A player whose true average is 160 ms can post a 150 ms five-trial average without much difficulty. The point is, if you know that the pool is tainted by cheaters up to 150 ms, it cannot be used to support your point. It doesn't matter if your point is true (in fact, I even support it).

spacediver wrote:
I think it's perfectly valid to reference these results, especially given that I've qualified them by including the possibility of a pool of cheaters. And (as my article also points out) flod has regularly achieved scores of below 160 ms (based on multiple trials, so these aren't single fluky events where he was guessing), and as low as 140 ms, so I think it is very reasonable to conclude, as I have, that "150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus..."
Yet you are claiming that a score of 150 ms definitely indicates cheating. This seems epistemically irresponsible given the lack of solid evidence to support the claim. The anomalous bump at ~100 ms in the distributions you reference (in the archived version of the human benchmark website) certainly suggests cheating, but I fail to see how those distributions prove that the scores of ~150 ms are due to cheating. You claim that there is a cluster after 150 ms which represents real results, and sparse scores from 110 to 140 ms. Can you post an image of this distribution in this thread so we can see what you're referring to?
The graph can be seen in the archive link I posted: https://web.archive.org/web/20170814021 ... ctiontime/
You need evidence to show that it's possible. The claim doesn't stand without support. I'm saying your evidence is insufficient.

spacediver wrote:
Again, you seem so sure of this, yet are not providing evidence, and are dismissing evidence to the contrary (flod achieving ~140 ms on the benchmark).
As for flod's 140 ms: it could be a lucky draw from the tail of his distribution, or guessing, or something else. Bringing up a single trial is not good enough, though I don't doubt his honesty. In this context of measurement error and standard deviation, using low-sample results seems irresponsible at best; a more pessimistic view would call them misleading. If you care about reaction time tests, we have an even better candidate in this thread, do0om, whose reaction times/equipment seem even better than flod's. I can supply him with a top-tier reaction test, if he's willing to do the trials.
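To put rough numbers on the two points above (a five-trial average dipping under 150 ms, and a single trial dipping under 140 ms), here is a small Monte Carlo sketch. The parameters are assumptions invented for illustration — a true mean of 160 ms with a 20 ms trial-to-trial standard deviation — not measurements of any player in this thread:

```python
import random

# Assumed parameters for illustration only: true mean 160 ms, SD 20 ms.
# Neither value is measured from any player mentioned in this thread.
rng = random.Random(42)
MEAN, SD, TRIALS, RUNS = 160.0, 20.0, 5, 100_000

avg_under_150 = single_under_140 = 0
for _ in range(RUNS):
    sample = [rng.gauss(MEAN, SD) for _ in range(TRIALS)]
    avg_under_150 += sum(sample) / TRIALS < 150.0
    single_under_140 += min(sample) < 140.0

print(f"5-trial average under 150 ms: {avg_under_150 / RUNS:.1%}")
print(f"best of 5 trials under 140 ms: {single_under_140 / RUNS:.1%}")
```

With these made-up numbers, roughly one session in eight produces a sub-150 ms five-trial average, and a single sub-140 ms trial shows up more often than not — which is why neither result says much on its own.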
It's not 103.7 ms; the numbers in the top left are in 60ths (the first is frames at 60 fps, the second is 60ths of a frame). I previously linked to an 8:53 count, which is faster. The trial you mentioned is actually 10/60 + 37/(60*60) seconds ≈ 177 ms, pretty good under CS:GO conditions but not amazing for a single trial.

spacediver wrote:
Also, in the Linus Tech Tips video (https://www.youtube.com/watch?v=tV8P6T5 ... DT4AXsUyVI), at 17:30, a trial is shown that has a reaction time of 103.7 ms. Granted, it's a single trial, but still worth noting.
They should not have numbered it that way, it's confusing.
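To save anyone else the confusion, here is the conversion spelled out, assuming (per the post above) a 60 fps counter whose second field counts 60ths of a frame:

```python
# Helper for reading that frames:subframes counter: the left number is in
# frames (1/60 s) and the right in 60ths of a frame, as explained above.
def counter_to_ms(frames: int, subframes: int, fps: int = 60) -> float:
    return (frames / fps + subframes / (fps * 60)) * 1000.0

print(counter_to_ms(10, 37))  # the disputed trial: ~176.9 ms
```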
The article needs to explicitly mention that the youtube video drops frames and that you didn't use it; otherwise your method appears different from what you present. As I said:

spacediver wrote:
I explain in the article that the original footage is at 100 fps. I counted the frames myself in the raw video. You seem to have incorrectly assumed that I didn't have access to this 100 fps video.
ad8e wrote:
For example, the numbers he gets out of flod's video look perfectly legitimate, and it's likely he did count frames in flod's local video, but his presentation in the article needs to show that.
Ok, I'll accept that. That is a valid justification for discussing the startle reflex. There's no problem with clearly marked speculation.

spacediver wrote:
The first part of my article was to provide an overview of motor reflexes of the human organism. The startle response is an important part of this story, and shows what we are capable of under extreme conditions, where (as you point out), the processing pathway is more efficient. In part 2 of the article, I hypothesize a mode of action which I call the manual tracking reflex, which would also involve a more efficient processing pathway.
So bringing up the startle reflex is important for two reasons:
1) It is an important part of the story of motor reflexes (regardless of whether or not it is harnessed in gaming conditions).
2) It sets the stage for discussing the manual tracking reflex.
Here's the full quote from the article for context:

spacediver wrote:
If they're aware of the fact that these outliers haven't been well represented in most studies of human reaction time, then by definition they haven't taken it into account. If they had taken this into account, then we'd see more studies that do represent them!

ad8e wrote:
Scientists are aware of this and take it into account already.

From the article:
there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time
Wrong. Because there isn't the slightest indication that the startle response has any ability to improve racers' reaction times. And the 100 ms number shows that the limit already takes into account any possible super-human freaks: otherwise, we'd see it bumped up to about 120 ms, since that is below any racer's fastest start.

From the article:
The reaction times in these events are measured with force transducers in the starting blocks, which measure the force produced by the legs. The loud noise signal in a sprint race can produce a startle response, and the IAAF limit of 100 ms seems to be based on assumptions that humans cannot react (with their legs) faster than 100 ms to an auditory signal. However, it's not clear whether these assumptions take into account two important facts: First, a startle response improves reaction times, and second, there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time. It turns out that this 100 ms limit is based on wrong assumptions.
Some people are under the impression that 100 ms is a reasonable threshold for reaction time (it even came up once in this thread), when in fact it has a hugely generous tolerance for the conditions sprinters are under.
Those research articles are real, and there is no indication that their results are either outliers or false, with respect to the methodology they state. I think they should be taken seriously as a true product of the described methodology.

spacediver wrote:
From my article:

ad8e wrote:
It's not a good idea to cite those fastest times. 42 ms is below the theoretical limit and is from guessing. The extreme averages should also be viewed with caution.

From the article:
One of the sprinters (female) had an average reaction time in the legs of 79 ms, and her fastest was 42 ms
I think the above quotation does show a cautious representation of the data. I've made it clear that the fastest trials may have been due to guessing, but also emphasize that the averages are still well below ~100 ms.

From the article:
Even if we are conservative and assume that the fastest times for some of these sprinters were based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it's not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms. Also note the reaction times were faster in the arms than the legs. This is important to remember as it has implications for gaming.
Assuming that there was not a methodological flaw or fabrication of data, these averages either represent genuine reflexes, or a huge statistical coincidence (positive replication would help reduce the likelihood of these alternative explanations).
Also, I'd be interested to learn why you think 42 ms is below the theoretical limit. What is the theoretical limit, and how is it derived, in your estimation?
However, I think your interpretation is poor because you are pulling out the fastest times and fastest averages from the Finnish article, when we already know that such times are often tainted by guessing. Furthermore, the methodology in that article is suspect for its possibly low trial count (the procedure is unclear about how many trials were run and which were thrown away). That means the fastest times and averages are the lowest-quality data in the article, and an interpretation should not rely on them, or even mention them. The data as a whole are more reliable and worth bringing up in a summary. The fastest numbers, both the ones in this paper and the others you bring up, are not subject to just the normal cautious interpretation. They must be subject to even greater scrutiny than usual, because we can see the instances of guessing (and fraud in the human benchmark) very clearly.
The theoretical limit comes from research articles; Pain and Hibbs derive one. They sum up the minimum possible latency they believe each part of the chain has (ear to brain, brain processing, peripheral nervous system, muscle activation, and others), and then come up with a very low number that bounds minimum latency from below. The additional latency seen in reaction tests comes from unexplained factors, and is then the object of study.
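The shape of that bounding argument can be written down in a few lines. Every stage value below is a placeholder I invented for illustration, not a figure from the paper; only the method (lower-bound each stage of the signal-to-force chain, then sum) is what's being described:

```python
# Sketch of the bounding argument. All values are assumed placeholders,
# NOT figures from the paper cited above; only the summing method matters.
components_ms = {
    "ear to brain (transduction + conduction)": 10.0,
    "central processing": 35.0,
    "brain to leg muscle (peripheral conduction)": 20.0,
    "muscle activation to measurable force": 25.0,
}
floor_ms = sum(components_ms.values())
for stage, ms in components_ms.items():
    print(f"  {stage}: {ms:.0f} ms")
print(f"theoretical floor with these placeholders: {floor_ms:.0f} ms")
```

Any measured time below such a floor (like the 42 ms trial) is then attributed to guessing rather than reaction.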
Before, when disqualification was per-athlete, they all guessed like crazy. The guess forgiveness helped them guess more, but didn't improve their true reactions much. There is still some forgiveness, but the golden era of guessing is over.

spacediver wrote:
Another point to bring up here is that it may be the case that athletes can perform better in experimental conditions than in a real race. In a real race, where the consequences of a false start incur a massive penalty (possibility of disqualification), athletes may be reacting in a more conservative manner (trading reaction time for reducing risk of a false start).
I'm not clear on what your point is; I'm only pointing out all the flaws in your methods. My own speculation already aligns with some of your claims, but I disagree with the way you arrive at them. It is true that I expect some spectacular performers to do even better than the athletes in the papers, when measured under the researchers' conditions.

spacediver wrote:
Even if this were true (that these athletes weren't extreme outliers, but rather had the benefit of better measurement systems), doesn't this make my point even more salient, since the real extreme outliers would have even lower reaction times?

ad8e wrote:
The <100 ms numbers from the research papers are real, but they are caused by extremely good measurement systems, rather than really fast people.
Science always has errors; sometimes the errors are marginal, sometimes they mean the conclusions need qualification. However, I think the LG error is fatal to the entire method. That reaction time affects LG accuracy is already well known, and the rest of the section tries to determine a number. But with the wrong technique, the number has no meaning.

spacediver wrote:
Two points:

ad8e wrote:
Optimal LG technique aims at the rear of the target rather than the centerline, so this entire section is bogus and should be retracted.

Quake LG section
1) This was an illustrative simulation that spelled out its assumptions in advance to show the effect of input lag. It's meant to be somewhat realistic, but criticizing it because optimal technique might involve aiming at the rear of the target rather than the middle misses the forest for the trees. The forest, in this case, is that small amounts of lag, so long as they are statistically real, have a real-life impact. The magnitude of this impact will vary depending upon the assumptions, but disagreeing with a particular assumption doesn't render the exercise "bogus".
2) Why would you say that optimal technique aims at the rear of the target? One could easily argue that aiming at the centre of the target is a better strategy, since aiming at the centre provides a margin of error on both sides of the midpoint, which is important for accommodating any variations in aim (due to neuromuscular noise) or variations in enemy movement. It's the same reason that your best chance of hitting any part of a target is to aim for its centre.
Why the rear is optimal: consider as a thought experiment, someone with perfect mouse tracking but with 200 ms fixed reaction time. He aims at the trailing end of the target. As long as the target keeps moving in the same direction, he won't miss the target from the trailing end. But when the target switches directions, now the entire target passes through his LG beam before he starts missing from the new trailing end, rather than just half the target if he aimed at the centerline. So he has lessened his misses from the leading end without compromising his misses from the trailing end. When mouse tracking error is added back to the picture, the optimal aim point is forward of the trailing end by the amount that minimizes the sum of tracking misses and reaction time misses.
If this is not easy to visualize, then consider as another thought experiment if the target is standing against a wall to the left. The LG user will fire at the opposite end of the target, since he knows the target can't press himself any deeper into the wall, but might move away from it. Then, in the open space condition, you can pretend that an imaginary wall is traveling along with the target, in front of him. Since the target can't move any faster than his max speed, this imaginary wall doesn't affect his movement. But in this visualization, the target is still stuck against the imaginary wall, and the LG user will fire at the opposite end.
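The two thought experiments can also be checked with a toy 1-D simulation. Everything here is an assumption of my own (target width 40 units, speed 5 units/tick, random direction reversals, and a 12-tick delay standing in for a fixed ~200 ms reaction time): the shooter tracks perfectly but acts on stale information, so every direction switch opens a window of misses, and aiming toward the trailing edge shortens that window:

```python
import random

# Toy 1-D model of the thought experiment above. Every number is made up.
def hit_rate(aim_offset, delay=12, ticks=100_000, width=40, speed=5,
             p_turn=0.02, seed=1):
    rng = random.Random(seed)
    pos, d = 0.0, 1
    hist = []                      # (position, direction) at each past tick
    hits = total = 0
    for _ in range(ticks):
        if rng.random() < p_turn:
            d = -d                 # target reverses direction
        pos += d * speed
        hist.append((pos, d))
        if len(hist) > delay:
            old_pos, old_d = hist[-1 - delay]
            # Extrapolate from delay-old info, then offset toward the
            # believed rear. aim_offset = 0 is the centerline;
            # aim_offset = width/2 is exactly the trailing edge.
            crosshair = old_pos + old_d * speed * delay - old_d * aim_offset
            hits += abs(crosshair - pos) <= width / 2
            total += 1
    return hits / total

print("centerline aim hit rate   :", hit_rate(0))
print("trailing-edge aim hit rate:", hit_rate(20))
```

With these assumed numbers the trailing-edge aim hits a few percent more often; adding aim noise to the model would pull the optimum forward of the trailing edge, as described above.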
My issue with your bringing up startle reflexes, and then your interpretation of the research papers, is that you seem to be implying that such reaction times may be possible under normal computer conditions. It misses the point not to mention and emphasize the changed conditions, since measuring where each component of latency comes from is the entire thesis of Pain and Hibbs. They don't claim that their amazing reaction time numbers are achievable under Olympic conditions; they mention very carefully how they are varying the input detection methods. They describe their wavelet detection in detail, how they validated it as non-spurious, and how it compares to the Olympic input system. Then they speculate on the differences. The Finnish paper neglects to stress this aspect, out of mediocre scholarship, which is how it arrives at its 80 ms limit revision recommendation. The 80 ms suggestion is unsupported, since the same runners who perform so well in their tests are no better than the rest under field conditions, and the authors don't address this. The most they do is suggest that race detection switch to high-speed cameras.

spacediver wrote:
Can you point to cases where my interpretation of the studies is "very poor"?
Some results are able to push our understanding of the limits of human reaction time at a computer, by showing better and better results. However, neither research paper contributes to this limit, as their conditions are too different.
You are correct; my calculation neglects that the distribution is sampled twice, once for each player, instead of once. My bad.

ad8e wrote:
I'm not familiar with mathematica nor do I have the software, so I can't comment much on your code. On the surface, it looks like it might be correct, but I'm not familiar enough with how to properly combine two distributions in mathematica, so I can't say for sure if it's sound.
On the other hand, I can't see an error in my simulation, and the results are clear and consistent (you can download R for free and run the code yourself).
My treating the added uniform distribution as normal came from Central Limit Theorem reasoning: a small, well-behaved distribution added to a much larger normal one yields something very close to normal.
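That approximation is easy to sanity-check numerically. Assuming, purely for illustration, a uniform 0-17 ms delay (roughly one 60 Hz frame) added to a normal reaction time with mean 200 ms and SD 30 ms (placeholder numbers, not taken from either simulation in this thread), the sum is nearly indistinguishable from a normal distribution with the matching mean and variance:

```python
import random
import statistics

# Placeholder parameters: uniform 0-17 ms (about one 60 Hz frame) added to
# a normal(200 ms, 30 ms) reaction time. None measured from real data.
rng = random.Random(0)
N = 200_000
sums = [rng.gauss(200, 30) + rng.uniform(0, 17) for _ in range(N)]

mean = statistics.fmean(sums)
sd = statistics.stdev(sums)
skew = statistics.fmean(((x - mean) / sd) ** 3 for x in sums)
print(f"mean {mean:.1f} ms, sd {sd:.2f} ms, skewness {skew:.3f}")
```

The sum's SD lands close to sqrt(30^2 + 17^2/12) ≈ 30.4 ms and the skewness stays near zero, i.e. the small uniform component barely disturbs the normal shape.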
I read your code; it looks good. Other than neglecting individual variation in reaction times, the results in this section are fine. I think you should switch over to a probability distribution method, but the simulation is a decent substitute.