ad8e wrote:
the human benchmark reaction time test
The 150 ms results in their database are from cheating/guessing, there's no point in referencing them.
Here's the quote from my article:
If you look at the distribution, you can see that some people are performing at around 150 ms. These are probably younger folks who have excellent reflexes and are on good hardware (it’s also possible that some people are using clever methods to cheat), but 150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus (at least when it comes to pressing buttons with a finger), although there may be a few people who can push this limit lower.
I think it's perfectly valid to reference these results, especially given that I've qualified them by noting the possibility of a pool of cheaters. And (as my article also points out) flod has regularly achieved scores below 160 ms (based on multiple trials, so these aren't single fluky events where he was guessing), and as low as 140 ms, so I think it is very reasonable to conclude, as I have, that "150 ms does seem to be in the ballpark of the limits of human reaction time to a visual stimulus..."
Yet you are claiming that a score of 150 ms definitely indicates cheating. This seems epistemically irresponsible given the lack of solid evidence to support the claim. The anomalous bump at ~100 ms in the distributions you reference (in the archived version of the Human Benchmark website) certainly suggests cheating, but I fail to see how those distributions prove that scores of ~150 ms are due to cheating. You claim that there is a cluster after 150 ms which represents real results, and sparse scores from 110-140 ms. Can you post an image of this distribution in this thread so we can see what you're referring to?
ad8e wrote:
For visual reaction times with current monitor/mouse input tech, 160 ms average measurements are probably possible, and 140 ms averages are not.
Again, you seem so sure of this, yet you are not providing evidence, and you are dismissing evidence to the contrary (flod achieving ~140 ms on the benchmark). Flod, by the way, is someone I have known and interacted with for years on many projects, and is an academic at Stanford with a solid publication record in physics and statistics. I trust him completely, and he is most certainly not cheating.
I may ask him to chime in here.
Also, in the Linus Tech Tips video (https://www.youtube.com/watch?v=tV8P6T5 ... DT4AXsUyVI), at 17:30, a trial is shown with a reaction time of 103.7 ms. Granted, it's a single trial, but it's still worth noting.
ad8e wrote:
Here is a video of [flood] sniping bots in CSGO.
More than half the frames are dropped in that youtube video, which makes counting frames suspect. Access to the raw video is needed for that method to work.
I explain in the article that the original footage is at 100 fps. I counted the frames myself in the raw video. You seem to have incorrectly assumed that I didn't have access to this 100 fps video.
ad8e wrote:
Startle reaction times are limited to startle responses, which appear not to be useful for practical game purposes. (i.e. they are likely skipping large parts of your brain's usual processing pathway)
The first part of my article was to provide an overview of motor reflexes of the human organism. The startle response is an important part of this story, and shows what we are capable of under extreme conditions, where (as you point out), the processing pathway is more efficient. In part 2 of the article, I hypothesize a mode of action which I call the manual tracking reflex, which would also involve a more efficient processing pathway.
So bringing up the startle reflex is important for two reasons:
1) It is an important part of the story of motor reflexes (regardless of whether or not it is harnessed in gaming conditions).
2) It sets the stage for discussing the manual tracking reflex.
ad8e wrote:
there are humans out there who are on the extreme end of the spectrum and who may not have been represented in most studies of human reaction time
Scientists are aware of this and take it into account already.
If they're aware that these outliers haven't been well represented in most studies of human reaction time, then by definition they haven't taken it into account. If they had taken this into account, then we'd see more studies that do represent them!
ad8e wrote:
One of the sprinters (female) had an average reaction time in the legs of 79 ms, and her fastest was 42 ms
It's not a good idea to cite those fastest times. 42 ms is below the theoretical limit and is from guessing. The extreme averages should also be viewed with caution.
From my article:
Even if we are conservative and assume that the fastest times for some of these sprinters was based on guessing (the study says that each sprinter performed between 5 and 8 trials, so it’s not clear whether some trials were excluded), the averages do not lie. The authors of this study strongly recommended that the IAAF revise the limit to as low as 80 ms. Also note the reaction times were faster in the arms than the legs. This is important to remember as it has implications for gaming.
I think the above quotation does show a cautious presentation of the data. I've made it clear that the fastest trials may have been due to guessing, while also emphasizing that the averages are still well below ~100 ms.
Assuming there was no methodological flaw or fabrication of data, these averages represent either genuine reflexes or a huge statistical coincidence (positive replication would help reduce the likelihood of the alternative explanations).
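To make the point about averages concrete with deliberately invented numbers: suppose a sprinter's true mean reaction time were 120 ms with a trial-to-trial standard deviation of 25 ms. The mean of 6 honest trials then has a standard deviation of 25/sqrt(6) ≈ 10.2 ms, so an observed average of 79 ms would be a roughly 4-sigma fluke:

```r
# All numbers here are hypothetical, chosen only to illustrate why a low
# *average* is far harder to explain away than a single low trial.
# The sd of the mean of n trials is sigma / sqrt(n).
pnorm(79, mean = 120, sd = 25 / sqrt(6))  # ~3e-5
```

This is why a single 42 ms trial can plausibly be a guess, while a 79 ms average over several trials cannot easily be.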
Also, I'd be interested to learn why you think 42 ms is below the theoretical limit. What is the theoretical limit, and how is it derived, in your estimation?
Another point worth raising is that athletes may actually perform better in experimental conditions than in a real race. In a real race, where a false start incurs a massive penalty (possible disqualification), athletes may react more conservatively, trading reaction time for a reduced risk of a false start.
ad8e wrote:
The <100 ms numbers from the research papers are real, but they are caused by extremely good measurement systems, rather than really fast people.
Even if this were true (that these athletes weren't extreme outliers, but simply had the benefit of better measurement systems), doesn't this make my point even more salient, since the real extreme outliers would have even lower reaction times?
ad8e wrote:
Quake LG section
Optimal LG technique aims at the rear of the target rather than the centerline, so this entire section is bogus and should be retracted.
Two points:
1) This was an illustrative simulation that spelled out its assumptions in advance to show the effect of input lag. It's meant to be somewhat realistic, but criticizing it because optimal technique might involve aiming at the rear of the target rather than the middle misses the forest for the trees. The forest, in this case, is that small amounts of lag, so long as they are statistically real, have a real-world impact. The magnitude of this impact will vary with the assumptions, but disagreeing with a particular assumption doesn't render the exercise "bogus".
2) Why would you say that optimal technique aims at the rear of the target? One could easily argue that aiming at the centre of the target is the better strategy, since it provides a margin of error on both sides of the midpoint, which is important for accommodating variations in aim (due to neuromuscular noise) or in enemy movement. It's the same reason that your best chance of hitting any part of a target is to aim for its centre.
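To illustrate point 2 with a toy model (the target width, aim-noise sd, and offset below are invented purely for illustration): if aim error is Gaussian around the aim point, the probability of landing anywhere on the target is maximised by aiming at its midpoint.

```r
# Toy model: target of apparent width 40 units, Gaussian aim error with
# sd of 15 units (both numbers invented for illustration).
# hitProb(offset) = chance the shot lands on the target when aiming
# `offset` units away from the target's centre.
hitProb <- function(offset, width = 40, sdAim = 15) {
  pnorm(width / 2, mean = offset, sd = sdAim) -
    pnorm(-width / 2, mean = offset, sd = sdAim)
}

hitProb(0)   # aim at the centre: ~0.82
hitProb(15)  # aim toward the rear edge: ~0.62
```

The centre aim wins because it leaves the largest margin on both sides of the aim point; any offset trades away more probability on one side than it gains on the other.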
ad8e wrote:
For a separate goal as a research article, I think the parts that are useful are the citations to the research papers and the Quick Draw section. The author's interpretation of the research articles is very poor, but the underlying research articles are still good.
Can you point to cases where my interpretation of the studies is "very poor"?
ad8e wrote:
The Quick Draw section is just looking at the CDF of a normal distribution. The 4 ms frame time is sufficiently small compared to the 20 ms player latency standard deviation that the overall distribution is normal. It's easy to calculate: given 20 ms standard deviation of population latency, 4 ms frame time, and 15 ms standard deviation of an individual's latency (made up number), the total standard deviation is sqrt(20^2 + 15^2 + 4^2/12) = 25 ms. My CDF table gives 57.93% at 0.2 = (5 ms / 25 ms). That actually makes me question the validity of the article's simulation, since 58% > 57% and I used a higher standard deviation.
Given your assumptions, your calculation of 57.9% is indeed correct:
Code: Select all
1 - pnorm(0, 5, 25)
The above code yields 0.579 in R (where pnorm(0, 5, 25) represents the probability of losing when holding a 5 ms latency advantage, i.e. the definite integral from negative infinity to 0 of the difference distribution between the two players: mean difference of 5 ms, sd of 25 ms).
1 - pnorm(0, 5, 25) is identical to pnorm(5, 0, 25), but I chose the first formulation since it may be easier for some people to follow along.
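For anyone following along, the combined standard deviation in that calculation can be reproduced in one line of R (the variance of a uniform distribution of width w is w^2/12):

```r
# Combine the three variance components listed above:
# 20 ms population sd, 15 ms individual sd, and a 4 ms uniform frame time.
sdTotal <- sqrt(20^2 + 15^2 + 4^2 / 12)
sdTotal                   # ~25.03 ms
1 - pnorm(0, 5, sdTotal)  # ~0.579, matching the 57.93% CDF table value
```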
I'm guessing that the reason my simulation showed ~0.57 instead of ~0.58 is because you treated the uniform distribution as a normal distribution.
Here is the R code that I used in the simulation (edited so it only looks at the case when the display latency difference is 5 ms):
Code: Select all
numTrials = 1000000;
# Tally of frags for player A (who has 0 ms display latency)
winsA = 0;
# For each trial:
for (i in seq(1, numTrials))
{
# Player A
displayLatencyA = 0;
# Draw a sample from normal distribution with mean of 170 ms, and sd of 20 ms.
neuralLatencyA = rnorm(1, 170, 20);
# Draw a sample from uniform distribution between 0 and 4.17 ms.
refreshLatencyA = runif(1, 0, 4.17);
# Player B
displayLatencyB = 5;
neuralLatencyB = rnorm(1, 170, 20);
refreshLatencyB = runif(1, 0, 4.17);
# Total lag
totalLatencyA = displayLatencyA + neuralLatencyA + refreshLatencyA;
totalLatencyB = displayLatencyB + neuralLatencyB + refreshLatencyB;
# If total latency for player A is less than or equal to player B, add 1 to winsA, else leave winsA as it is.
winsA = ifelse (totalLatencyA <= totalLatencyB, winsA + 1, winsA);
}
probWinA = winsA / numTrials;
cat(probWinA)
I just ran this with a million trials, and got a result of 0.569974, i.e. ~57%.
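Incidentally, for anyone who wants to run this quickly, the loop above can be written in vectorized R; this is just a restatement of the same simulation, not a change of assumptions:

```r
numTrials <- 1000000
# Player A has 0 ms display latency; player B has 5 ms display latency.
# Both add N(170, 20) neural latency and U(0, 4.17) refresh latency.
totalLatencyA <- 0 + rnorm(numTrials, 170, 20) + runif(numTrials, 0, 4.17)
totalLatencyB <- 5 + rnorm(numTrials, 170, 20) + runif(numTrials, 0, 4.17)
mean(totalLatencyA <= totalLatencyB)  # ~0.57
```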
ad8e wrote:
In fact, it looked so suspicious that I plugged in the article's conditions into Mathematica:
Code: Select all
f[x_] = CDF[TransformedDistribution[x + y, {Distributed[x, NormalDistribution[0, 20]], Distributed[y, UniformDistribution[{-4.17/2, 4.17/2}]]}], x]; f[5]
This gives 59.85%. So the author's simulation code is wrong.
I'm not familiar with Mathematica, nor do I have the software, so I can't comment much on your code. On the surface it looks like it might be correct, but I'm not familiar enough with how to properly combine two distributions in Mathematica to say for sure whether it's sound.
On the other hand, I can't see an error in my simulation, and the results are clear and consistent (you can download R for free and run the code yourself).
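For what it's worth, the simulation can also be cross-checked analytically. The quantity that matters is the distribution of the latency difference between the two players, which combines both players' noise terms: the difference of two independent N(170, 20) draws is N(0, 20*sqrt(2) ≈ 28.3 ms), and the two uniform refresh terms together add only about 2.9 ms^2 of variance. Treating that small triangular refresh term as normal (harmless at this scale), the win probability for the player with the 5 ms advantage is:

```r
# sd of the latency difference: two N(170, 20) terms plus two U(0, 4.17)
# terms (variance of a uniform of width w is w^2 / 12).
sdDiff <- sqrt(2 * 20^2 + 2 * 4.17^2 / 12)
sdDiff               # ~28.3 ms
pnorm(5, 0, sdDiff)  # ~0.570, matching the simulation's 0.569974
```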