spacediver wrote:I'm not wedded to the idea of manual tracking. I came into this seeking to explain an observation. I'm open to any number of possibilities (including my data interpretation being wholly incorrect). Even if kukkii's reaction times were shown to be faster during lg than during simple detection of luminance onsets, it could be a context dependent effect unique to him (e.g. he's able to get in the zone more easily during the former), rather than reflecting a distinct mechanism of action.
If someone were really able to train himself to that extent, that would be pretty interesting. I can see how such training might be plausible, but seeing it is another thing.
spacediver wrote:But I'd still like to hear more about why you think the idea of manual tracking having a latency advantage over simple detection is implausible. You find the idea of priming, while employing the magnocellular system, plausible. Couldn't a similar story be told about successive responses to changes in visually perceived motion? And what of the idea that some situations may promote a state where recurrent processing is diminished?
I chose luminance as my go-to default, but I don't have any reaction-specific knowledge here. My experience is only that the human mind's luminance processing seems to outperform its color processing in tasks such as reading. A combined luminance/hue change would be better, as it avoids this confounding variable. Blue to bright green combines a huge luminance change with a large color change, since blue is naturally "dark". (This is in reference to his mention of the magnocellular system.)
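For concreteness, the size of that luminance jump can be checked with the standard Rec. 709 luma coefficients (a sketch; the exact weights depend on the color space your monitor actually uses):

```python
# Relative luminance from linear RGB using Rec. 709 coefficients.
# The weights reflect how strongly each primary drives perceived brightness:
# green dominates, blue contributes very little.
def rel_luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

blue = rel_luminance(0.0, 0.0, 1.0)   # pure blue
green = rel_luminance(0.0, 1.0, 0.0)  # bright green

print(blue, green)  # blue is roughly 10x dimmer than green
```

So a blue-to-green switch really does carry both a hue change and close to a tenfold luminance change at once.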
Priming could exist in reactions during motion detection, but primed motion detection should lose to equivalent primed luminance-change detection; if priming is there at all, it should be a general mechanism. Motion detection is hard, and the human mind is very good at it, but it can only be so good, since the underlying problem is genuinely difficult. Detection of giant changes, by contrast, is very easy, and the human mind is automatically good at it. Since I don't believe priming is specific to motion, I naturally want to use a switch-background test, since those are simpler to set up and measure than motion tests.
Reduced recurrent processing: yes, that's one of the possible reasons why faster reactions might exist. For short-term differences (i.e. tests under various conditions), rather than long-term changes (i.e. tests 5 years apart), it's a strong explanation that is considered when investigating every type of faster processing. In other words, reduced recurrent processing is an excellent story, and one that people look to frequently.
Motion vs. luminance: imagine a black square moving on a white background and then switching movement direction; that's motion. Now change the test so that instead of switching direction, the black square instantly fills half the screen behind its trailing edge. Comparing the two directly, every change present in the motion test is also present in the fill test, so under a theory of cognition more general than the human mind's, the fill test should have only advantages over the motion test. Applying that general theory directly to the human mind is wishful thinking: quite likely the mind's motion-detection system will fail to produce a useful reaction and shut down when it sees the screen being filled, and some other part of the brain will have to supply the reaction instead. But the moral still stands.
Just found this (skimmed study, not sure about quality):
Two-dimensional tracking reveals numerous similarities which exist between eye and manual tracking. Neither system can be adequately described by two independent cases of tracking in one dimension. In addition, both systems appear to use error signals which at some level incorporate both positional and directional error as well as speed mismatch. Furthermore both systems appear to be much less responsive to errors in acceleration (Lisberger et al. 1987). Therefore it is possible that both systems use the same error signals derived from the original retinal error, and may in fact share some of the same trajectory planning apparatus, varying at some point due to the obvious differences in the end effectors.
(https://www.physiology.org/doi/full/10. ... .83.6.3483)
The scholarship and experiment design of that paper are superb. Their latency measurements are suspect; I don't think they realized that a 100 Hz touchscreen and a 60 Hz monitor can add much more latency than the 1/Hz frame period suggests, which would explain why they didn't measure it. But they mention the exact models they used, so the experiment is replicable, and the omission doesn't affect the thrust of their results. Here's my understanding of the article's relevant takeaways:
1. Motion detection is hard; position detection is much easier. "after the target changed direction, the finger maintained the original target direction for a reaction time period, changed direction, initially headed in a nearly straight line to intercept the target, and then finally curved to merge with the new target direction" That's consistent with position detection being faster than motion detection: by the time the person realizes the position has changed, he still can't estimate the new velocity. This doesn't apply to kukkii's situation exactly, since in Quake velocity changes in consistent ways and can be estimated through training; in the paper, the subjects have no training and the velocity change is random.
In more casual terms, this means Quake's timenudge is an objective advantage, and the gap can't be closed by training. This is exactly what the cognitive account predicts, and it's nice to have a specific paper to point to for this specific effect, instead of me saying "trust me, I'm an expert", which nobody should take seriously.
2. Acceleration detection is really hard. "partial information about acceleration is present only in the population response of these neurons". In layman's terms: the brain sucks at it, even after accounting for the task being hard. Contrast this with motion tracking, which is also a hard task, but one the brain tries hard to be good at.
3. Training will significantly improve performance on the task in the study, since the constant-time reacquisition behavior they modeled is suboptimal. This provides a clear demonstration of a specific performance gap that training can reduce, and similar considerations should also apply to Quake.
4. Reaction times to changing speed are about the same as reaction times to changing direction.
5. Their model of constant time to reacquisition is quite unusual, and I wonder why it happens. I didn't expect that.
6. When the finger tries to track the box, the finger moves slower than the box. Not just a fixed offset, but gradually falling further behind. I'm not sure why.
7. Looks like the authors are as surprised as me about constant time reacquisition: "We did not anticipate this result. It is entirely possible that the time to intercept could have been minimized"
8. Their results show similarities between eye tracking systems and positional tracking systems, and imply that there may be shared machinery between them. This is despite the split of eye tracking into smooth and saccadic movements.
spacediver wrote:Is it the idea that manual tracking may use a unique control regime compared to simple responses that you find implausible?
That's not just plausible, it's guaranteed to be true, and I recall some details of the mechanism. I think even the specific responsible parts of the brain have been isolated, but I'm not clear about these things.
spacediver wrote:Or the idea that even if such a system existed, that it would confer a significant latency advantage?
Yes, that's it. I don't think manual tracking can give a latency advantage. Specifically: motion detection is hard, and won't go faster than a simple system. Also, trying to track an object with your hands is not a mechanism that will directly cause improved reaction times; if there is an improvement, it's caused by some other part of the action, such as eye tracking, hand motion irrespective of tracking, or tensed muscles, something like that.
spacediver wrote:Moreover, if random wobbling was responsible for this, you'd expect a low lg accuracy, when in fact he was performing quite well.
The LG accuracy won't decrease; the optimal aiming model contains something very similar to the wobbling. (My loose recollection of it: while tracking the target, pretend the target switched direction at a specific point in the past, and then track that phantom's trailing edge in the opposite direction once it intersects the existing aim path.)
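A minimal 1-D sketch of that phantom-reversal idea, where the trajectory, the reversal time, and the "aim midway" rule are all my own illustrative assumptions rather than anything from an actual aiming model:

```python
# 1-D sketch of the "phantom reversal" heuristic: the real target moves
# at velocity v, and the aimer hedges against a reversal by also tracking
# a phantom assumed to have switched direction at a past time t0.
def target_pos(t, x0=0.0, v=2.0):
    return x0 + v * t

def phantom_pos(t, t0, x0=0.0, v=2.0):
    # Same trajectory up to the hypothesized reversal time t0,
    # then moving at -v afterwards.
    if t <= t0:
        return target_pos(t, x0, v)
    return target_pos(t0, x0, v) - v * (t - t0)

# Aiming between target and phantom produces a small wobble around the
# true position instead of locking exactly onto it (a crude stand-in for
# the intersect-then-track rule described above).
def aim(t, t0):
    return 0.5 * (target_pos(t) + phantom_pos(t, t0))

print(target_pos(1.0), phantom_pos(1.0, 0.5), aim(1.0, 0.5))
```

The point of the sketch is only that hedging against a possible reversal makes the crosshair oscillate around the target even while accuracy stays high.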
spacediver wrote: Put another way, we already know there was a degree of intelligence and responsiveness at play here, and can therefore have more confidence that the features of the signal that account for the most variance are, in fact, meaningful (i.e. they are in response to the enemy's movement).
Eh, I think it will be hard to convince me here. I don't see a way to give a good demonstration of the integrity of the analysis, short of doing some statistical modeling and providing some hard numbers, and I'm not able to do that myself.
For example, one of the properties I want from the wavelet-detection code is that it operates in real time. Note that Pain and Gibbs don't use a real-time detection algorithm; they detect retroactively. But then they have to do some statistics to prove that their detection isn't cheating by capturing random noise as the beginning of the wavelet. Even then, it's a little sketchy. With real-time wavelet detection, the need for those statistics disappears and the integrity is built in, since any mistakes show up as false starts.
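A toy streaming onset detector in the spirit of that built-in-integrity argument. The threshold rule, hold count, and re-arming behavior here are all illustrative assumptions of mine, not anything from Pain and Gibbs:

```python
# Toy real-time onset detector: declare an onset as soon as the signal
# has stayed above baseline + k*sigma for `hold` consecutive samples.
# Because each decision is made sample-by-sample, a premature trigger
# shows up as an extra detection (a false start) rather than being
# silently absorbed the way a retroactive fit can absorb it.
class OnsetDetector:
    def __init__(self, baseline, sigma, k=3.0, hold=3):
        self.thresh = baseline + k * sigma
        self.hold = hold
        self.run = 0        # consecutive over-threshold samples
        self.armed = True   # one detection per excursion

    def step(self, x):
        """Feed one sample; return True exactly when an onset is declared."""
        if x > self.thresh:
            self.run += 1
            if self.armed and self.run >= self.hold:
                self.armed = False
                return True
        else:
            self.run = 0
            self.armed = True  # signal fell back: re-arm (false start if it fired)
        return False

det = OnsetDetector(baseline=0.0, sigma=1.0)
signal = [0.1, -0.2, 0.3, 4.0, 4.2, 4.1, 4.3, 0.0, 0.2]
onsets = [i for i, x in enumerate(signal) if det.step(x)]
print(onsets)  # fires at the 3rd consecutive over-threshold sample
```

If noise ever crossed the threshold long enough to fire, the subsequent fall-back would expose it as a false start in the output, which is the whole point.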
(Note on the forum software: when writing responses that take a long time, copy them to the clipboard before pressing preview or submit, because the forum often loses them. I've avoided this problem through an abundance of caution, but if I weren't so cautious, I think three of my responses would have been lost.)