jorimt wrote: ↑05 Oct 2021, 09:07
The only lag G-SYNC "adds" comes directly from hiding the tearline, nothing more, and it's technically not an "addition," since it's adhering to the native scanout time of the display, which is what limits single, tear-free frame delivery speed.
Tearing is a form of input lag reduction as well, one that many prefer to opt out of precisely because of the visible artifact that enables the reduction, and it's something the OP was specifically asking about: avoiding screen tearing.
If you've heard of Scanline Sync, which is essentially a no-sync form of "V-SYNC," so to speak, G-SYNC is the superior (dynamic) version of that. It steers the tearline off-screen without forcing the GPU to deliver frames at fixed intervals of the display like traditional V-SYNC does, and that forced syncing of GPU output to the display is the only thing that causes V-SYNC input lag and stutter in the first place.
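To illustrate the "steer the tearline" idea, here's a conceptual sketch (the line counts and flip times are assumptions for illustration; actual tools read the live raster position from the driver rather than computing it like this):

```python
# Conceptual sketch only: where a buffer flip lands on the scanout,
# given a fixed refresh rate. Timings/line counts below are assumed.

REFRESH_HZ   = 60
LINES_TOTAL  = 1125   # visible lines + vertical blanking (assumed timing)
LINES_ACTIVE = 1080   # visible portion of the scanout

def tearline_for_flip(flip_time_s):
    """Scanline the raster is on at the moment the flip happens."""
    period = 1.0 / REFRESH_HZ
    phase = (flip_time_s % period) / period
    return int(phase * LINES_TOTAL)

# An unsynchronized flip lands wherever the raster happens to be:
print(tearline_for_flip(12.3 / 1000))   # line ~830 -> visible tear

# Scanline Sync (and, dynamically, G-SYNC) aims to have the flip land
# in the blanking interval (lines >= LINES_ACTIVE), so no visible tear:
print(tearline_for_flip(16.2 / 1000))   # line ~1093 -> inside blanking
```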
At 360Hz, crosshair-level "lag" is indeed virtually identical with G-SYNC on vs. off (assuming all other settings are the same in both scenarios), so long as the framerate is the same and stays within the refresh rate.
I find it odd you're suggesting we ignore the most obvious and easy-to-fix forms of latency, focus solely on the most elusive and difficult to objectively measure and achieve, and call only the latter worth pursuing. I believe we should instead address the former first and work our way down. I.e., start with macro, end with micro.
For instance, going by some of your recent posts on this forum, are you suggesting it's perfectly fine to ignore substantial latency increases that come directly from things such as average FPS, max refresh rate, the render queue, and traditional syncing methods (such as double and triple buffer V-SYNC, which can add whole frames of delay if not properly configured), and to limit ourselves to something like 60Hz with uncapped FPS (V-SYNC on or off), so long as we're running certain legacy CPU models with an OS, BIOS, and (DDR3) RAM that are tuned to the nth degree?
Even no sync with an uncapped FPS can have 2 full frames more delay (that's 33.3ms at 60 FPS, for instance) if the system is GPU-limited at any point. And just going from 60Hz with a 60 FPS average to 120Hz with a 120 FPS average (not considering any form of sync or GPU-limitation-related delay) reduces average latency by 8.3ms, and that's just from the reduction in render time and scanout cycle time.
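To put rough numbers on that, a back-of-the-envelope sketch (it only counts whole frame/scanout intervals, ignoring input sampling, game logic, and display processing):

```python
# Quick frame-time arithmetic behind the figures above.

def frame_time_ms(fps):
    """Duration of one frame (or one scanout cycle) in milliseconds."""
    return 1000.0 / fps

# Two extra queued frames when GPU-limited at 60 FPS:
print(2 * frame_time_ms(60))                    # ~33.3 ms of added delay

# Per-stage saving going from 60 FPS/60 Hz to 120 FPS/120 Hz:
print(frame_time_ms(60) - frame_time_ms(120))   # ~8.3 ms per stage
```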
Achievable average FPS and max refresh rate are kind of a big deal where latency is concerned. In fact, they're the primary path to guaranteed native latency reduction.
The higher the refresh rate, the less visible tearing artifacts are as well, regardless of framerate, which means less "need" for syncing methods, even G-SYNC. In fact, at 1000Hz, syncing methods will effectively no longer be needed for tearing prevention due to that refresh rate's sheer scanout speed.
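As a rough illustration of why sheer scanout speed matters here (simplified; it assumes a tear artifact is overwritten no later than the next full scanout and ignores pixel response/persistence):

```python
# Upper bound on how long a tear artifact stays on screen, assuming it
# is overwritten by the next full scanout (pixel response ignored).

for hz in (60, 144, 360, 1000):
    print(f"{hz:>4} Hz: full scanout ~{1000.0 / hz:.2f} ms")
# 60 Hz -> ~16.67 ms, 360 Hz -> ~2.78 ms, 1000 Hz -> ~1.00 ms
```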
Most of your suggested (micro) tweaks (which, don't get me wrong, I'm not knocking) may reduce latency by a millisecond or less on average (and thus tend to provide an increased feeling of consistency, which is another important metric, more than a notable latency reduction), while those like mine (macro) reduce it by dozens of milliseconds or more, and with much less effort required from the end-user.
I try to avoid unnecessarily confrontational interactions like this, but if you're going to address me directly with such dismissive comments, a little perspective, please: there's a place for both micro and macro. It's all part of one bigger latency picture.
Reducing latency jitter (max & min latency) is far more important than just the average. Using G-SYNC will inherently increase latency (obviously), but I don't consider tearing a form of latency. I'm not saying don't use G-SYNC, by the way; I can see its usefulness. I'm more interested in the nitty-gritty of systems. I want to know why this jitter is present, and what is changing within the system's operation that causes a lack of consistency.

You know, I won't state this as fact but simply as an observation: my USB 2.0 port is much more responsive than my 3.0 ports (probably due to it being physically closer, and/or the lack of power-saving features on 2.0). Keep in mind, I own two 8000Hz polling rate peripherals connected to the same "USB hub". These are the types of latencies that are really hard to measure with clicks, because when you're playing you're moving your mouse larger distances, and your eyes can recognize changes that aren't exactly testable with click-to-photon or other methodologies.

DRAM accounts for at LEAST 10% of system latency at any given time, but people see/say the word "nanoseconds" and automatically think that tuning RAM will only result in nanosecond-level latency improvements, which just shows a lack of understanding.
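For context on those two points, some simple arithmetic (the DRAM figures are purely illustrative assumptions, not measurements):

```python
# Time between polls at common mouse polling rates:
for hz in (125, 500, 1000, 8000):
    print(f"{hz:>5} Hz -> {1000.0 / hz:.3f} ms between polls")
# 125 Hz -> 8.000 ms ... 8000 Hz -> 0.125 ms

# Why "nanoseconds" can still add up: per-access DRAM latency is tens
# of nanoseconds, but a frame can trigger a huge number of memory
# stalls, so the aggregate lands in the millisecond range.
access_ns, stalls_per_frame = 80, 50_000         # assumed, illustrative
print(access_ns * stalls_per_frame / 1e6, "ms")  # 4.0 ms per frame
```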
I forget the exact name of the person who said this, but the quote goes something like: "I'd rather have 144 FPS on an i7-3770K & DDR3 (assuming it's tuned to the gills) than 144 FPS on a modern system." I don't think you're suggesting humans CAN'T perceive sub-millisecond changes in latency (assuming it's something directly affecting the mouse's input?)
The comment there is saying you're either forgetting or not understanding that a frame is simply a frame, one that encapsulates your system's output at any given moment. FPS is not the only latency metric, and because it's such a popular benchmark, people start misunderstanding: an increase in FPS (from a tweak, hardware change, upgrade, etc.) may COST you more latency and/or jitter than that specific tweak or change in hardware gains you in FPS. It's like people forget how computers work. They weren't meant to be a toy, or to be operated by people who don't understand them. Which is fine, I suppose, but now this is the product: we get a massively, massively uneducated "audience", or market, that is easily manipulated by FPS benchmarks and the like. I'm not saying it's the end of the world at all; after all, we're talking about milliseconds here.
The thing about G-SYNC is the latency penalty. Sure, you can consider tearing a form of latency, but you're essentially making a trade-off that will directly affect your perception (and so on, down the chain of how humans perceive latency), ultimately affecting the player. I personally dislike anything that comes at a relatively significant cost in latency / latency jitter. I also don't believe G-SYNC is necessary, although it's useful for many scenarios that don't require peak system performance.

You're making the assumption that a system can always and accurately deliver a frame at the theoretical minimum. Simple things like tile-based rasterization inherently add delay to when you finally get the frame; the trade-off is less power and more FPS at the expense of latency. There's a reason companies have spent millions upon millions of dollars on this topic. Real-time systems still have a place in our world, and if you look at how they're built and tuned you'll understand my perspective much more. If you've ever played on a relatively low-latency setup, nothing compares.

I'm not saying forgo refresh rate either. Take a scenario: a 280Hz LCD with 1.7ms of "lag" vs. a 100Hz CRT. You get more visual updates of what's going on, OR you go for a clearer and lower-latency display. In this scenario it's really preference, and some competitive high-level players have completely different opinions/choices when it comes to these things.
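Laying out the raw numbers in that scenario side by side (the CRT's ~0ms processing delay is an assumption, and neither number captures CRT motion clarity):

```python
# Side-by-side of the figures in that scenario (assumed values).
displays = (("280 Hz LCD", 280, 1.7), ("100 Hz CRT", 100, 0.0))

for name, hz, processing_ms in displays:
    print(f"{name}: refresh interval {1000.0 / hz:.2f} ms, "
          f"processing ~{processing_ms} ms")
# 280 Hz LCD: ~3.57 ms between updates, plus ~1.7 ms processing
# 100 Hz CRT: ~10.00 ms between updates, plus ~0 ms processing
```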
TLDR: Latency can be a matter of preference and traded off for certain improvements. Windows sucks; systems are complex, not simple by any means, and easily affected by thousands of variables.