Arbaal wrote:
Interesting, but you are comparing two different senses now. Our auditory sense has a shorter reaction time / input lag than our visual one (as stated in the paper I linked in my earlier post). It would be nice to have such values for visual stimuli and reaction times.
I think you're missing my point. Why are reaction times in any way relevant to being able to perceive temporal differences?
Suppose that it takes a whole second to become visually aware of a stimulus (ignoring motor responses). Why would that preclude someone from being sensitive to lag on the order of milliseconds?
I'm not saying that we are sensitive to such small changes; I just don't see how citing reaction time studies is relevant.
You could speculate that noise (related to the perception of time) is somehow a function of reaction time, but you would need to flesh this argument out some more.
Arbaal wrote:
I'm not sure how this is clear yet. I'm sure that there is a threshold that an ordinary human can compensate for / that won't affect him in any way. And this threshold might not even be remotely in the sub-ms area.
Think about it this way:
You will no doubt grant that a person with a ping of 250 ms will have a disadvantage vs someone with a 50 ms ping.
Think about why this is. It is because, all else being equal, the person with a 250 ms ping will react to situations 200 ms slower than the other player. In that 200 ms, things can change, and the high ping player will have already reacted to the old information.
The degree of this disadvantage is precisely a function of the length of this delay.
One can conceive of a situation where a 1 ms difference can result in a quantifiable change in performance:
Suppose a one pixel wide target is moving in a straight line towards the crosshair at a rate of 1000 pixels per second (1 pixel every ms). The task is to press fire as soon as the target intersects the crosshair. Assume zero input lag in the hardware, and zero ping. Also assume 50 ms of motor lag. An expert player will have a well calibrated response system, and will examine the trajectory of the target and make assumptions about its extrapolated position between the time he commits to firing and the time the mouse button is pressed. So, when the target is in such a position that if it continues along its trajectory it will intersect the crosshair in 50 ms (i.e. 50 pixels away), the player will initiate the motor response, and by the time the button is pressed, the target is in the crosshair.
Suppose the player develops such proficiency at this task that he is able to hit the target 10% of the time.
Now let's add 1 ms of network lag.
After training, the player will have to recalibrate his response, so that he now initiates the motor command when the target is 51 pixels away.
There is no reason to suppose that he can't reach the same level of proficiency as with zero lag.
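To put numbers on the recalibration, here's a tiny sketch (the speed and motor-lag figures are just the ones assumed above):

```python
TARGET_SPEED = 1.0  # pixels per ms (i.e. 1000 px/s, as above)
MOTOR_LAG = 50.0    # ms of motor delay (assumed above)

def commit_distance(ping_ms):
    """Distance (in px) at which the player must commit to firing so that
    the delayed shot lands exactly when the target reaches the crosshair."""
    return TARGET_SPEED * (MOTOR_LAG + ping_ms)

commit_distance(0)  # 50.0 px
commit_distance(1)  # 51.0 px
```

With 0 ping he commits at 50 px; with 1 ms of network lag he simply shifts his commit point out by one pixel.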
But now let's change the rules of the game slightly.
Say 50% of the time, the target suddenly switches direction, rather than continuing towards the crosshair. In such situations, the player should not fire, but rather wait until the next "trial".
Now if this change in direction occurs during the window of lag, then the player will have fired already and it will be too late to do anything about it. The key thing to notice here is that the window of lag is 1 ms wider when you add the 1 ms network delay.
This means that the player has to make up his mind 1 ms earlier (one pixel earlier). If the target changes direction 50 pixels away, then the player with a 1 ms ping will have already fired, but with 0 ping, he'd have noticed it just in time.
Now one might argue that there is noise in the system, both in visually assessing the correct distance and in the amount of "motor" lag. But this noise has a distribution, which is usually Gaussian, and a distribution has a well-defined mean. (The noise actually explains why the player doesn't hit the target 100% of the time.) This means that, no matter how noisy the system is, over a large number of trials the player with the 0 ms ping will fare better.
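To see why the noise doesn't rescue the high-ping player, here's a Monte Carlo sketch of the whole scenario. All the numbers are assumptions for illustration (the Gaussian noise widths, and the switch distance being uniform over 0–100 px), not measurements:

```python
import random

def simulate(ping_ms, trials=100_000, seed=42):
    """Monte Carlo sketch of the thought experiment (all numbers assumed).

    The target approaches the crosshair at 1 px/ms. The player commits to
    firing when the perceived distance equals motor lag + ping (in px), so
    a well-calibrated shot lands on the 1 px target. Gaussian noise
    corrupts both the perceived distance and the motor lag. On half the
    trials the target reverses direction at a random distance; if that
    happens after the player has committed, the shot is wasted.
    """
    rng = random.Random(seed)
    motor_lag = 50.0                    # ms of motor delay (assumed)
    commit_dist = motor_lag + ping_ms   # recalibrated commit point, px
    hits = wasted = 0
    for _ in range(trials):
        perceive_noise = rng.gauss(0.0, 2.0)  # px, assumed width
        motor_noise = rng.gauss(0.0, 2.0)     # ms, assumed width
        actual_commit = commit_dist + perceive_noise
        if rng.random() < 0.5:
            # Target switches direction at distance s from the crosshair.
            s = rng.uniform(0.0, 100.0)
            if s < actual_commit:
                wasted += 1  # player had already committed: shot wasted
        else:
            # Where the shot lands relative to the crosshair, in px.
            error = actual_commit - (motor_lag + motor_noise + ping_ms)
            if abs(error) <= 0.5:  # 1 px wide target
                hits += 1
    return hits / trials, wasted / trials
```

Seeding both runs identically makes the comparison paired: the hit rate on clean trials comes out the same (the ping term cancels once the player recalibrates), but the wasted-shot rate is higher with the extra millisecond, because the 1 ms wider commit window strictly contains the 0 ms one.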
Arbaal wrote:
Lower is always better, but what is the target we need to optimize for?
I completely agree that optimization is the key here. The difference between 5 ms and 6 ms of input lag might equate to an average of 1 extra frag per million hours of gaming, but if it requires a huge sacrifice in image quality to achieve that 1 ms advantage, it is clearly not worth it.