nimbulan wrote:The high framerate input lag issue is still rather fuzzy to me. The only way I see it reducing perceived input lag is for the second frame drawn during a refresh to take up the majority of the screen (so that frame contains the crosshair) which you can't guarantee, and that still leaves the issue of screen tearing causing variable input lag across the screen.
Let's say tearline positions occur randomly. Tearlines that occur just above the crosshairs mean the crosshairs have less lag than tearlines that occur further above the crosshairs. Higher framerates mean more tearlines, which means more tearlines immediately above the crosshairs, which means freshly rendered frames about to be "scanned down" towards the crosshairs location. Voilà. Reduced input lag.
We're assuming a game engine that does input reads at very high frequencies, or input reads at render time. It all depends on how the game decouples input reads (e.g. separate thread) from rendering.
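To make that concrete, here's a small Monte Carlo sketch (my own simplified model, not anything from a real game engine): it assumes input is read at render time, negligible render time, a 60Hz scanout that sweeps top-to-bottom over the full refresh period, and crosshairs at mid-screen, then estimates how old the frame at the crosshairs is on average.

```python
# Monte Carlo sketch (illustrative assumptions only): estimate the average age of
# the frame shown at the crosshairs under VSYNC OFF at various framerates on a
# 60 Hz display. Assumes input read at render time, negligible render time,
# top-to-bottom scanout over the whole refresh period, crosshairs at mid-screen.
import random

REFRESH_HZ = 60.0
CROSSHAIR_POS = 0.5            # fraction of screen height from the top

def average_crosshair_lag(fps, refreshes=100_000):
    frame_interval = 1.0 / fps
    total_lag = 0.0
    for _ in range(refreshes):
        # Time at which scanout reaches the crosshairs this refresh, with a
        # random phase relative to the frame-presentation (buffer flip) grid.
        phase = random.uniform(0.0, frame_interval)
        t_crosshair = CROSSHAIR_POS / REFRESH_HZ + phase
        # Newest frame flipped at or before the moment the crosshairs are drawn.
        newest_flip = (t_crosshair // frame_interval) * frame_interval
        total_lag += t_crosshair - newest_flip
    return total_lag / refreshes

for fps in (60, 120, 300, 600):
    print(f"{fps:4d} fps -> ~{average_crosshair_lag(fps)*1000:.2f} ms average lag at crosshairs")
```

Under those assumptions the average lag at the crosshairs is roughly half a frame interval, so it keeps shrinking as framerate climbs, even though the refresh rate never changes.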
nimbulan wrote:So what it sounds like to me is rather than simply reducing input lag, you are introducing input lag variability equal to half the refresh rate depending on the location of the screen tear which should have a negative effect on accuracy, not a positive one.
No. It averages out to a positive effect.
That said, an extra consideration arises: stutters and stationary/rolling tearline effects caused by harmonics between framerate and refresh rate. 119fps @ 120Hz will have a slowly-rolling tearline effect that creates one wiggly stutter per second. But once you're away from harmonics and tearline positions begin to look random again, the average net benefit becomes more positive the higher the framerate you go. The higher the framerate, the less stuttery tearing looks, e.g. tearing looks definitely noticeably less microstuttery at 400fps@144Hz than at 200fps@144Hz (these numbers are intentionally away from noticeable harmonic effects). This is because moving object positions deviate from the correct position by 1/400sec worth of motion rather than 1/200sec, in the aliasing between frame render time, frame presentation time, and current eye-tracking position. Input lag can scale too, provided you're doing at least as many input reads as the framerate (e.g. 400 input reads during 400fps). Other game engines will level the playing field by reading input at a fixed tick rate, etc., but that doesn't fix the VSYNC OFF microstutters, which clearly and noticeably go down the higher the framerate goes far above the refresh rate.
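As a rough rule of thumb (my own heuristic, not an exact model), how "stationary" or "rolling" the tearing looks depends on how far the framerate sits from the nearest multiple of the refresh rate:

```python
# Rough heuristic (illustrative, not from the post): distance of the framerate
# from the nearest harmonic of the refresh rate. Near a harmonic, the small
# leftover difference shows up as a slowly-rolling tearline / periodic wiggly
# stutter; far from any harmonic, tearline positions look essentially random.
def distance_from_nearest_harmonic(framerate, refresh_rate):
    nearest_harmonic = round(framerate / refresh_rate) * refresh_rate
    return abs(framerate - nearest_harmonic)

print(distance_from_nearest_harmonic(119, 120))  # 1  -> roughly one wiggly stutter per second
print(distance_from_nearest_harmonic(240, 120))  # 0  -> near-stationary tearline(s)
print(distance_from_nearest_harmonic(400, 144))  # 32 -> far from harmonics, looks random
```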
nimbulan wrote:Am I completely missing the point here?
For some game engines, yes. (especially low tickrates, input reads done independently of refresh rate)
For other game engines, no. (especially if input reads at least match framerate)
With a 1000Hz mouse, you have 1ms granularity between input reads, so benefits are there.
Separately (not an input lag factor, but a microstutter factor): during turning left/right, I can easily tell the difference between VSYNC OFF 200fps@144Hz and VSYNC OFF >400fps@144Hz. I can personally feel it myself, being a non-competitive casual gamer (but very stutter/motion blur sensitive). The reduction in stutters is definitely, unquestionably NOT a placebo effect. Technically, for something like this, it's a sucker bet to me -- I would bet a large amount of money that this is true, and easily win the bet, assuming certain variables and parameters are met (e.g. mouse input reads within 1ms of rendertime). Also, strobed displays (e.g. LightBoost) make stutters/tearing easier to see, so the artifacts are more readily detectable. LightBoost displays have only about 1 pixel of motion blurring during 1000 pixels/second motion (see Photos: 60Hz vs 120Hz vs LightBoost). This isn't the input lag part, though, and strobing does slightly increase input lag (very slightly) in exchange for improved motion clarity.
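That "1 pixel per 1000 pixels/second" figure is just persistence arithmetic; here's the back-of-envelope version (hedged, since exact strobe lengths vary by LightBoost setting):

```python
# Simple persistence arithmetic (illustrative numbers): perceived motion blur in
# pixels is roughly eye-tracking speed multiplied by how long each frame stays lit.
def motion_blur_px(speed_px_per_sec, persistence_ms):
    return speed_px_per_sec * (persistence_ms / 1000.0)

print(f"strobed, ~1 ms persistence: {motion_blur_px(1000, 1.0):.1f} px of blur")
print(f"sample-and-hold @ 144 Hz:   {motion_blur_px(1000, 1000/144):.1f} px of blur")
```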
Meanwhile, I'd love for any visiting scientists to do a double-blind study. We need to see more science being done here. Alas, public (taxpayer-funded) science on gaming technology is not normally done, especially with complications such as VSYNC OFF (an invention mainly taken advantage of for 3D video gaming, and not normally accounted for in most science papers).
nimbulan wrote:It honestly sounds like a placebo effect
No it isn't.
Though, skill will definitely compensate for latency differences -- competitive gamers often use 60Hz displays at events, and game engines have leveled the input lag playing field somewhat.
Pro gamers routinely gain a bigger advantage playing at higher framerates than from upgrading to a higher-refresh-rate monitor. Better response is found by upgrading from 60fps@60Hz to 300fps@60Hz (upgraded framerate, constant refresh rate) than by upgrading from 120fps@60Hz to 120fps@120Hz (constant framerate, upgraded refresh rate). Ideally, you want to do both at the same time (i.e. upgrade your framerate and refresh rate simultaneously), but if you were competing in pro leagues and forced to choose one upgrade over the other, more framerate can be preferable even when framerates are already higher than the refresh rate.
Among competitive game players, better scores do often occur at 300fps@60Hz than at 60fps@60Hz, because of better, snappier mouse response. Input reads are fresher, and you have GPU rendertimes of 1/300sec rather than 1/60sec, even if your monitor is only 60Hz. And for zero-buffered displays (current ASUS/BENQ 120Hz/144Hz monitors are zero-buffer displays), what's actually displayed at the crosshairs is a frame from the GPU only 1/300sec ago, because the tearline (buffer flip) occurred within 1/300sec of scanout above the crosshairs, and the display took less than 1/300sec to scan out downwards to the crosshairs location. Yes, the tearline occurs randomly, but because there are 300 frames per second, there are tearlines 1/300sec apart, and the display takes only 1/300sec to scan from the previous tearline to the next tearline (assuming each frame took exactly 1/300sec to render).
Now if you're doing 600fps @ 60Hz, you've got only 1/600sec between tearlines, and the tearline just above the crosshairs is also only 1/600sec old (at 600fps, there are about 10 tearlines per 60Hz refresh cycle, so there is always a tearline within 1/10th of the display height above the crosshairs, and scanout takes only 1/600th of a second to travel from that tearline down to the crosshairs location).
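Putting quick numbers on those two examples (a simplified sketch I'm adding, assuming perfectly even frame pacing and ignoring the blanking interval):

```python
# Worked numbers for VSYNC OFF on a zero-buffered 60 Hz display (simplified model:
# evenly paced frames, blanking interval ignored, scanout spans the refresh period).
REFRESH_HZ = 60

for fps in (300, 600):
    tearlines_per_refresh = fps / REFRESH_HZ
    max_distance = 1 / tearlines_per_refresh   # fraction of screen height above crosshairs
    max_age_ms = 1000 / fps                    # scanout time across that distance
    print(f"{fps} fps @ {REFRESH_HZ} Hz: ~{tearlines_per_refresh:.0f} tearlines/refresh, "
          f"nearest tearline within {max_distance:.2f} of screen height above the "
          f"crosshairs, so the pixels there are at most ~{max_age_ms:.2f} ms old")
```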
There are up to as many tearlines per second as the framerate (although some of them are sometimes hidden because they happen to occur during the blanking interval). Note that tearline offsets become half the size at double the framerate, so tearlines become progressively fainter at higher framerates: less skew (smaller disjoints), thus harder to see.
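The skew math itself is simple (my own illustration, with made-up panning numbers): the visible disjoint at a tearline is roughly how far the image moves in one frame interval.

```python
# Why tearlines fade at higher framerates: the disjoint at a tearline is the
# distance the image moved between the two frames meeting at it, i.e. panning
# speed divided by framerate.
def tear_skew_px(pan_speed_px_per_sec, fps):
    return pan_speed_px_per_sec / fps

for fps in (200, 400, 800):
    print(f"{fps} fps: ~{tear_skew_px(2000, fps):.1f} px disjoint at 2000 px/s panning")
# The disjoint halves with every doubling of framerate.
```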
Obviously, this excludes a lot of the input lag chain (e.g. mouse USB cable latency, other game processing, multiple buffer layers, DisplayPort cable latency, pixel response, etc.), but hopefully this explains the concept better.