Optimization Hub for beginners

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Optimization Hub for beginners

Post by jorimt » 28 Dec 2020, 19:55

Brainlet wrote:
28 Dec 2020, 19:29
Exactly why I'm concerned in general. People WILL produce all kinds of data to mislead the majority of consumers when it comes to input lag impacting user experience.

Here is the source article:
https://www.nvidia.com/en-us/geforce/gu ... ion-guide/
Oh, that article.

Accompanying that image you previously posted was:
The polling rate is how often the USB host (your PC) asks for information from the device. For low or full speed devices, that is 1000Hz. Higher polling rate means that your mouse can deliver more frequent clicks and movements to the PC.
First time I've heard a source outright claim polling rate is tied directly to click latency. Not entirely sure that's fully applicable in all cases for all mice (or at least not the whole story), but okay then.

Regardless, higher mouse polling rate is most impactful to tracking, which, again, is not necessarily directly tied to average render or display latency, but more to how frequently updates are captured during continuous mouse movement, something your typical click-to-photon test can't practically capture.

You could have the same average input lag in multiple scenarios, but distribution (and/or lack/introduction of tear slices) can change the overall feel. Again, that's not something easy to measure, and I wouldn't attempt it with a highspeed or photodiode setup myself, as you'd need something like thousands of samples well under 1s intervals to accurately reflect continuous and spontaneous user input.

It's a reason we keep the test scene static (even though the scanout and frame render process is still happening) up until the point of the sample capture with highspeed/photodiode testing, else if there was a chain reaction to multiple inputs occurring ms between each other, we may not fully be able to match up and differentiate each input from their ultimate reflection on-screen.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

MaxTendency
Posts: 59
Joined: 22 Jun 2020, 01:47

Re: Optimization Hub for beginners

Post by MaxTendency » 29 Dec 2020, 14:00

Brainlet wrote:
28 Dec 2020, 19:29
Exactly why I'm concerned in general. People WILL produce all kinds of data to mislead the majority of consumers when it comes to input lag impacting user experience.
Some people already are; it happens to be the same image you posted :lol:

[image]

jorimt wrote:
28 Dec 2020, 13:27
When performed and interpreted properly, click-to-photon test results can and should be treated as hard fact where the absolute spread of min/avg/max input lag values are concerned in the given test scenario, but they are indeed not the complete picture where distribution of said values over a period of uninterrupted frames are concerned.
The underlined quote is basically what I have been saying. Which brings me to the next point: how many people are fully aware of the hundreds of different variables that may skew results? How many people are going to test with a high enough sample count (it should be AT LEAST 2000 according to @josefspjut, a researcher at NVIDIA), when NVIDIA's Reflex Analyzer captures 20 samples by default?
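(As a rough back-of-envelope illustration of why sample count matters, here's a minimal sketch in Python; the 2ms spread is an invented number purely for illustration, not a measurement of any particular setup:)

Code:

import math

# Hypothetical per-sample spread (standard deviation) of click-to-photon
# latency, in milliseconds. Invented purely for illustration.
stddev_ms = 2.0

for n in (20, 200, 2000):
    sem = stddev_ms / math.sqrt(n)  # standard error of the mean
    print(f"n={n:4d}  ~95% confidence on the average: +/- {1.96 * sem:.2f} ms")

Under that assumption, 20 samples leave the measured average uncertain by roughly 0.9ms either way, let alone the tails of the distribution, while 2000 samples narrow it to under 0.1ms.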

I can easily see the next couple years being a shitshow of people doing flawed tests and coming to incorrect conclusions.
jorimt wrote:
28 Dec 2020, 13:27
Anyone criticizing the shortcomings of highspeed/photodiode methods are barking up the wrong tree, as no method currently exists to test for the differences you are suggesting.
This is true. However, it doesn't mean we should not or cannot advocate for better, more complete latency tests. As the Chief said, latency tests are still in their infancy. They have a long way to go.

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Optimization Hub for beginners

Post by Chief Blur Buster » 29 Dec 2020, 14:56

Brainlet wrote:
28 Dec 2020, 17:47
I assume we can all agree that the jump from 500 Hz to 1000 Hz mouse polling rate is significant, yet NVIDIA research measured a measly 0.4ms latency decrease
Most of the 500-vs-1000 benefit is not the input lag. It's the fluidity.

At 5000 pixels/sec mouse movement, 500Hz is one coordinate every 10 pixels.
At 5000 pixels/sec mouse movement, 1000Hz is one coordinate every 5 pixels.
These clearly become stroboscopically visible, and this continues to be so for 8000 Hz.
Also, even when we go ultra high Hz (8000 Hz mice on 360Hz monitors) -- mouse microstutter also translates to extra motion blur at high jittering/microstuttering frequencies. Like how a slow piano/harp/guitar string vibrates, and a fast piano/harp/guitar string blurs.

Even 1-pixel jitter blends into 1 pixel of extra display motion blur if the jitter is vibrating fast enough to blend into blur! In this context, stutter/judder are the same thing as blur, and blur is the same thing as stutter/judder. Those watching framerate ramps at www.testufo.com/vrr figure out the stutter-to-blur continuum rather quickly. With a 1000Hz mouse, a 360Hz monitor only improves motion clarity by about 1.1x over 240Hz. But with an 8000Hz mouse, a 360Hz monitor improves motion clarity by about 1.3x versus 240Hz (the ideal perfect improvement is 1.5x, but GtG is still too slow to allow that).

At 360Hz we are at ~2.8ms per refresh cycle, and theoretically 2.8 pixels of motion blur per 1000 pixels/sec of motion with perfect framerate=Hz and GtG=0, but due to all the weak links (nonzero GtG, nonzero jitter), we see little difference between 240Hz and 360Hz. Upgrading the poll rate actually amplifies the visible difference between 240Hz and 360Hz.
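(A quick back-of-envelope sketch of the numbers above, using the same 5000 px/sec example speed and the same idealized zero-GtG assumption as this post:)

Code:

# Gap between successive mouse coordinates during fast motion.
speed_px_per_sec = 5000
for poll_hz in (500, 1000, 8000):
    print(f"{poll_hz:4d} Hz poll: one coordinate every "
          f"{speed_px_per_sec / poll_hz:.3g} px at {speed_px_per_sec} px/sec")

# Idealized sample-and-hold motion blur (framerate = Hz, GtG = 0):
# the persistence of each refresh cycle maps to blur width per 1000 px/sec.
for refresh_hz in (240, 360):
    persistence_ms = 1000 / refresh_hz
    print(f"{refresh_hz} Hz: {persistence_ms:.2f} ms/refresh -> "
          f"~{persistence_ms:.2f} px of blur per 1000 px/sec of motion")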

There are a lot of weak links in the refresh rate race to retina refresh rates, and the mouse is now a significant (double-digit percentage) contributor to display motion blur as we hit 360Hz+

It's the "Humans Can't Tell 30fps vs 60fps" stuff all over again, not recognizing the other benefits.

Even when things stop stuttering/juddering/tearing/jittering, there's still motion blur caused by persistence of all the various hold effects (whether sample-and-hold granularities, the frametime granularities, or the pollrate granularities) -- it's always the weakest links.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Optimization Hub for beginners

Post by jorimt » 29 Dec 2020, 16:41

MaxTendency wrote:
29 Dec 2020, 14:00
I can easily see the next couple years being a shitshow of people doing flawed tests and coming to incorrect conclusions.
Indeed, but this will be the case for as long as the world turns. No way to fully avoid this in any field, unfortunately. I foresaw it as soon as they announced the Reflex in-monitor mouse tester:
https://forums.blurbusters.com/viewtopi ... lex#p56959
MaxTendency wrote:
29 Dec 2020, 14:00
jorimt wrote:
28 Dec 2020, 13:27
Anyone criticizing the shortcomings of highspeed/photodiode methods are barking up the wrong tree, as no method currently exists to test for the differences you are suggesting.
This is true. However it doesn't mean we should not or can not advocate for better, more complete latency tests. As chief said, latency tests are still at their infancy. They have a long way to go.
I agree.

My point was that suggesting traditional click-to-photon methods are incomplete or flawed for real-time testing of latency distribution from continuous, spontaneous user input is like calling Star Wars a bad French film; it's evident they were never intended for such testing, and in my estimation, without an entirely different or more radical approach, such methods aren't suited to it in their currently practiced form.

That said, there's no reason to suggest in any way that these methods have failed, or all their results should be taken as invalid.
They merely need to be contextualized on a case-by-case basis, and they still have their place for testing averaged input lag.

The type of tweaks we're talking about would change distribution and consistency of latency more than they would significantly (or even visibly) alter the min/avg/max spread already generated by click-to-photon results.

Admittedly, it is unfortunate that some sources (Nvidia included) have the effect (whether intended or not) of decontextualizing such testing methods and making them appear more capable or final than they actually are, but all I can do as a user of said methods myself is clarify my position when asked, or chime in when I see a discussion such as this (as I'm doing now).

MaxTendency
Posts: 59
Joined: 22 Jun 2020, 01:47

Re: Optimization Hub for beginners

Post by MaxTendency » 29 Dec 2020, 18:51

jorimt wrote:
29 Dec 2020, 16:41
Indeed, but this will be the case for as long as the world turns. No way to fully avoid this in any field, unfortunately. I foresaw it as soon as they announced the Reflex in-monitor mouse tester:
Yes, unfortunate. What makes the situation worse is the Reflex Analyzer defaulting to 20 samples. Sure, it can detect huge changes, like 60Hz vs 360Hz for example. But 20 samples is basically nothing if you're trying to get a better picture of latency distribution and consistency.
jorimt wrote:
29 Dec 2020, 16:41
That said, there's no reason to suggest in any way that these methods have failed, or all their results should be taken as invalid.
They merely need to be contextualized on a case-by-case basis, and they still have their place for testing averaged input lag.

The type of tweaks we're talking about would change distribution and consistency of latency more than they would significantly (or even visibly) alter the min/avg/max spread already generated by click-to-photon results.
I pretty much agree with everything; however, I never suggested that click-to-photon tests should be taken as invalid, but rather that they should not be thought of as the end-all be-all.

There are optimizations that may improve latency consistency, which may be felt by the end user but may or may not show up in a click-to-photon test (especially if the sample size is too low). And then there are optimizations with other benefits such as increased smoothness (500Hz vs 1kHz, for example) which may not show up in a click-to-photon test but, once again, may be felt by the end user.
jorimt wrote:
28 Dec 2020, 19:55
You could have the same average input lag in multiple scenarios, but distribution (and/or lack/introduction of tear slices) can change the overall feel. Again, that's not something easy to measure, and I wouldn't attempt it with a highspeed or photodiode setup myself, as you'd need something like thousands of samples well under 1s intervals to accurately reflect continuous and spontaneous user input.
Basically what I've been saying. I guess there's a misunderstanding, or a failure on my end to clarify what I meant, but I never said click-to-photon tests are invalid, rather that they are incomplete. Ideally we would want to test things objectively, since humans are susceptible to placebo, but the way things currently are, there may be situations where a user is able to feel a difference that doesn't show up in a standard click-to-photon test.

jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Optimization Hub for beginners

Post by jorimt » 29 Dec 2020, 20:07

MaxTendency wrote:
29 Dec 2020, 18:51
What makes the situation worse is the Reflex Analyzer defaulting to 20 samples. Sure, it can detect huge changes, like 60Hz vs 360Hz for example. But 20 samples is basically nothing if you're trying to get a better picture of latency distribution and consistency.
It's plenty if all you're trying to do is determine whether a setting is adding 1 or more frames of input lag on average in an otherwise like-for-like scenario, but no, it's not enough to determine anything else.
MaxTendency wrote:
29 Dec 2020, 18:51
I never suggested that click-to-photon tests should be taken as invalid, but rather that they should not be thought of as the end-all be-all.
I wasn't targeting your comments specifically there, just stating clearly that such results shouldn't be taken as invalid just because they don't apply or are misapplied to a given topic.
MaxTendency wrote:
29 Dec 2020, 18:51
Basically what I've been saying. I guess there's a misunderstanding, or a failure on my end to clarify what I meant, but I never said click-to-photon tests are invalid, rather that they are incomplete.
There's a reason why click-to-photon tests have min and max values; response from input to input, frame to frame is not constant, even with a theoretically perfect frametime sustained at a constant framerate, else there would be no need to average results after such tests.

Further, even if you had a perfectly tuned system and a bloat-free, to-the-metal OS, variable input latency would still occur due to 1) mismatching parallel cyclical processes of modern computing components (the reason ever increasing refresh rates, framerates, polling rates, and even game tick rates are so vital to an improvement in input consistency), and 2) modern game engines, most of which are not consistent, reliable, or predictable renderers in the least, and there's little to nothing the end-user can do in this respect.

Brainlet
Posts: 100
Joined: 30 May 2020, 12:39
Contact:

Re: Optimization Hub for beginners

Post by Brainlet » 29 Dec 2020, 20:25

Chief Blur Buster wrote:
29 Dec 2020, 14:56
Brainlet wrote:
28 Dec 2020, 17:47
I assume we can all agree that the jump from 500 Hz to 1000 Hz mouse polling rate is significant, yet NVIDIA research measured a measly 0.4ms latency decrease
Most of the 500-vs-1000 benefit is not the input lag. It's the fluidity.
Exactly, and there are a lot of software optimizations that produce similar results (fluidity, though a bit different from polling rate): a minor average latency decrease but a very significant decrease in min/max latency deltas, with a great impact on user experience.
It was more of an example of how click-to-photon results can easily be used to discredit the impact of certain things ("oh, it's just 0.4ms, it doesn't matter", as seen in MaxTendency's image).

Imagine the following hypothetical scenario:
- modification X reduces avg. click to photon latency by 0.3 ms and min/max delta by 10% across 20 samples but also decreases the min/max delta by 30% across 1000000 samples
- user A claims modification X helps a lot with mouse input
- user B says "proof or nonsense"
- user A does 20 tests and concludes it's "only" 0.3 ms
- user B says "it doesn't matter" while the min/max delta will get discarded because it's "only 10%"

IMO, a large part of the problem stems from the fact that, for various reasons, most people have never experienced significant changes in mouse input. It requires some form of trigger. At first I didn't believe in all this either, until the switch from 1803 to 1809 happened (nowadays I refuse to even play games on any W10 version, using W7 exclusively for that purpose in <= DX11 games) and I noticed a subtle impact on cursor response (even on my then-terrible hardware and bloated, unoptimized OS with insane latency bottlenecks). So I started going down the rabbit hole, even more so after upgrading hardware and peripherals, since everything became much more apparent due to a generally much lower latency floor. It's hard for people who never experienced these things personally to grasp how much humans can really perceive these low numbers (assuming modern hardware and peripherals). Another factor is convenience (it's tiresome to change lots of settings; most people prefer not to bother).
Starting point for beginners: PC Optimization Hub

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Optimization Hub for beginners

Post by Chief Blur Buster » 29 Dec 2020, 22:56

jorimt wrote:
29 Dec 2020, 20:07
There's a reason why click-to-photon tests have min and max values; response from input to input, frame to frame is not constant, even with a theoretically perfect frametime sustained at a constant framerate, else there would be no need to average results after such tests.
<Rabbit Hole>

Fortunately, the good news is that there are two evidence-based approaches that lend credence to whether or not we should trust the analysis:

(1) A simple math formula (like a mouse equivalent of the Blur Busters Law) also dictates that doubling the sampling rate approximately halves the latency contributed by that tiny subsegment of the latency chain, if there's a waiting/rounding effect to the next fixed sampling interval. In other words, the increase from a 500Hz poll rate to a 1000Hz poll rate generates a pure mathematical latency improvement of (1/500 sec) - (1/1000 sec) = 1ms at the maximum; averages (halftimes) would show a 0.5ms latency improvement. Within error margins, this is consistent with the very subtle latency improvement observed by increasing poll rate from 500Hz to 1000Hz -- the 0.4ms measured by NVIDIA falls near these numbers, and thus, I would be inclined to trust that number within quite reasonable error margins (e.g. GtG speeds, increased OS load of higher poll rates, refresh cycle granularities, number of test passes, etc.), thanks to the pure math doublecheck confirming the numbers are of the right magnitude.

(2) Tests with an Arduino mouse-click simulator at 125Hz, 500Hz, and 1000Hz show that click latency generally decreases at higher poll rates, assuming same-device poll-rate changes, at least on uncongested USB ports (no competition with other USB traffic). The Arduino-compatible microcontroller simulates a click, with a photodiode attached to the same microcontroller and read back by the same program running on it. Synthetic data results are made possible because the same Arduino-compatible program configures the poll rate, sends the simulated USB HID mouse click (lag stopwatch start), and watches the photodiode (lag stopwatch end). Results are fully consistent with the math formula in (1) within error margins.

By pure math alone, an 8000Hz poll rate versus a 1000Hz poll rate creates a (1/1000 sec) - (1/8000 sec) = theoretical maximum 0.875ms latency improvement. But if we're looking at averages, the halftimes mean a 1000Hz poll rate averages 0.5ms of latency [a range of 0ms...1ms for a click landing randomly between polls], and 8000Hz averages 0.0625ms [a range of 0ms...0.125ms]. So in reality, pure math dictates that the average lag improvement of 1000Hz->8000Hz is only 0.5 * 0.875 = 0.4375ms (437.5 microseconds) for a click on the same 8000Hz mouse configured to 8000Hz versus 1000Hz respectively.
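(Here's that snap-to-next-poll math as a tiny sketch, under the same idealized assumptions and subject to the caveats listed below:)

Code:

def poll_wait_ms(poll_hz):
    # Idealized wait for the next poll after a click landing at a random
    # moment between polls: worst case one full interval, average half of it.
    interval_ms = 1000.0 / poll_hz
    return interval_ms, interval_ms / 2  # (max, average)

def improvement(old_hz, new_hz):
    old_max, old_avg = poll_wait_ms(old_hz)
    new_max, new_avg = poll_wait_ms(new_hz)
    print(f"{old_hz} Hz -> {new_hz} Hz: max improvement {old_max - new_max:.4f} ms, "
          f"average improvement {old_avg - new_avg:.4f} ms")

improvement(500, 1000)   # 1.0 ms max, 0.5 ms average (near NVIDIA's measured 0.4 ms)
improvement(1000, 8000)  # 0.875 ms max, 0.4375 ms average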

The pure math makes a lot of assumptions though:
- That you've got a synchronous poll clock that doesn't realign itself
- That the click rounds to the next poll interval
- That the different poll rates don't cause any additional USB-chip-loading / OS-loading / buffering latency behaviours
- That the poll rates are compared on the same device (to exclude the mechanical differences between competing mice)
- That there are no startup latencies (e.g. when USB polling starts again from an idle mouse sending no polls)
- That there are no antibounce algorithm distortions/etc.

Caveat: On USB2 hardware capable of 8000Hz, it is possible that 1000Hz polling may realign to a finer granularity, especially if polling stops then restarts (e.g. the 1000Hz poll stops because the mouse stopped moving, then resumes exactly 50.875 milliseconds later instead of 50ms or 51ms later, because of the finer 0.125ms poll ticks made possible by USB2). I am not sure if there are 1000Hz mice capable of doing this. This would distort whatever latency improvements may occur, because it is then no longer a fixed 1000Hz (1ms) in a poll stop-resume situation (meaning the snap-to-nearest-interval effect no longer applies).

TL;DR: Naturally, by pure math, a higher poll rate does mean lower latency. However, the difference is in the sub-millisecond averages and hard to measure organically on real mice amid the statistical noise of everything else (including antibounce algorithms, OS loading changes, USB chip loading, etc). Rather, most of the poll-rate-increase benefit is in clearly improved mousefeel.

Also, based on my experience, poll rates need to be a minimum of 5-6x above the refresh rate for the mouse to stop feeling jittery -- you milk most of the fluidity benefits even at the first step of 1000Hz->2000Hz with a 360Hz monitor. The 2000Hz->8000Hz step is much more marginal, even though the improvement is still noticeable. Thus, 360Hz monitors benefit quite noticeably even from merely raising the poll rate from 1000Hz to 2000Hz.
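(A quick sketch of that ratio rule of thumb; the ~5-6x figure is the experience-based number from this post, not a hard spec:)

Code:

refresh_hz = 360
for poll_hz in (1000, 2000, 4000, 8000):
    ratio = poll_hz / refresh_hz
    verdict = "meets" if ratio >= 5 else "falls short of"
    print(f"{poll_hz} Hz poll on a {refresh_hz} Hz monitor: {ratio:.1f}x the "
          f"refresh rate ({verdict} the ~5-6x rule of thumb)")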

</Rabbit Hole>

diakou
Posts: 83
Joined: 09 Aug 2020, 11:28

Re: Optimization Hub for beginners

Post by diakou » 30 Dec 2020, 00:53

Chief Blur Buster wrote:
29 Dec 2020, 14:56
Most of the 500-vs-1000 benefit is not the input lag. It's the fluidity.
At 5000 pixels/sec mouse movement, 500Hz is one coordinate every 10 pixels.
At 5000 pixels/sec mouse movement, 1000Hz is one coordinate every 5 pixels.
These clearly become stroboscopically visible, and this continues to be so for 8000 Hz.
Also, even when we go ultra high Hz (8000 Hz mice on 360Hz monitors) -- mouse microstutter also translates to extra motion blur at high jittering/microstuttering frequencies. Like how a slow piano/harp/guitar string vibrates, and a fast piano/harp/guitar string blurs.
I want to chime in on this point as a person who is currently competitive in Brawlhalla, world champion/top 10 globally 2017/2018, top 30 globally 2019 and "top 30 globally" 2020 (no LANs in 2020... so ignore global ;P)

This is often where I am able to make magic happen in regards to avg input latency / effects of displays and more. A 2ms increase or decrease may sound small and unnoticeable, especially with keyboard inputs rather than mouse motion. (In fact, unless I was able to use multiple methods of testing, it would be hard/impossible for me to outright ace an A/B test of 2ms with keyboard inputs - mouse motion is probably a bit easier.)

The reason I've personally been bothered about latency and ridiculously small numbers (sub-ms stutters, raw 2-10ms input-lag differences, etc.) is primarily exactly the paragraph you just wrote. Within just a second I will always do these button presses, obviously modified to hold forward or back or whatever the situation needs, but the inputs are roughly the same (all of these are movement-related, with the exception of J/K/L, which are attacks):

Code:

(WASD cycle 3x, I - 1x/2x, spacebar 1x/2x, J or K or L, 0x-1x)
So a scenario would be (keylogged):

Code:

DAS SPACEBAR AWDSADADSAS DWIJA (raw inputs in order from keylog file)

D(move right small pixel) - A(move left small pixel) - S(Down) - Spacebar(Jump)
 - A - W(hold up) - D - S(fastfall - falling fast from jump inputted three inputs earlier)
  - A - D - A - D - S - A - S - D+W(so diagonal forward)+I(Dash)+J(Light attack)+A(hold back right after)
At minimum, that's often 10 movement-related inputs within a second. This is repeated tons of times over and over, sometimes reaching 15-20 inputs. If I am already in position, I will do this so I can constantly micro-space within my opponent's range. If not, this will be done constantly regardless, to keep up pace (think StarCraft APM) and, the bigger reason, to constantly re-adjust according to each frame of stimuli happening on screen (it's a 60 FPS game).

If, in some freakish scenario, there was a raw 2ms latency increase (and not big varying min/maxes, since frametimes are never consistent, no matter how low we go for now), that's 20ms missed per second, which again sounds ridiculous, but that legitimately might have been the ever-so-slight mistiming that causes me to miss the extremely frequently occurring situation in fighting games where you miss by a frame too early/too late OR by a pixel (because you weren't there a frame earlier/later), like this hitbox/hurtbox scenario; it is quite literally me missing by a pixel or so. Had I been there a frame earlier, this hits.
(This does not remove the fault that I could have simply... missed and there's still practice left - but % can change based off HW alone as well)
[screenshot: hitbox/hurtbox scenario]
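(Rough arithmetic behind the "missed time per second" point above; the input counts are the estimates from this post, and the extra-latency values are hypothetical:)

Code:

frame_ms = 1000 / 60  # one frame of a 60 FPS fighting game, ~16.67 ms

for inputs_per_sec in (10, 15, 20):
    for extra_ms in (2, 5, 10):
        lost_ms = inputs_per_sec * extra_ms
        print(f"{inputs_per_sec:2d} inputs/sec, +{extra_ms:2d} ms each: "
              f"{lost_ms:3d} ms of accumulated mistiming per second "
              f"(~{lost_ms / frame_ms:.1f} frames)")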

But the reality is not like that. The numbers are flying everywhere; in most cases it is never just a 2ms raw decrease or a 2ms raw increase, but min/max values and a ton of other unquantified things that simply occur. They genuinely are more than enough to throw off the fluidity of the gameplay. The human body can adjust to timings that follow a consistent beat or rhythm (heck, even an inconsistent rhythm can be adjusted to), but fluid movement in and of itself is always ready to be irregular. Stutter steps, incredible precision and more require irregularity in movement, play and coordination to sync up with the actual gameplay for that perfect timing. I don't want to get too much into this as it's obviously a thing of its own. But I'm simply trying to convey that this fluidity issue is also the exact problem outlined in "ESPORTS; Latency Perception, Temporal Ventriloquism & Horizon of Simultaneity"; it has many effects, one of the worst offenders is specifically this, and it varies from person to person too! To an extent (and it's a poor comparison), it is like how, in the monitor world, some people are more sensitive to flicker while others are more sensitive to motion blur.

And lastly, I have great friends in the Brawlhalla community, and someone who is currently a 3x world champion confirms and agrees that the overall fluidity increase from a small jump such as 240Hz to 360Hz (with a few other things, of course) made a big impact on what he was able to present of his own skill in the recent world championship. He was already winning tournaments prior, but he has no qualms about saying how it allowed him to just be himself more (which frankly is terrifying in its own right, when a person who is already winning the majority of tournaments says something like this).

Attaching a few pictures of me and him conversing about it ever since we did a few changes with the 240Hz to 360Hz jump:
[screenshots of the conversation]

And just lastly - this is a 60 FPS fighting game, the one type of game where most people would discourage a 360Hz, let alone a 240Hz, monitor!
But we live and die by the frame; we only have 60 frames, so attempting to cherish every single 16.6666666667ms frame by lowering lag... does actually have a ton of merit. (Especially with juicy G-SYNC @ 360Hz - ty for that one, Chief.) Disclaimer: as long as whatever monitor you have is low lag (signal processing and GtG response times), be it 120/144/240, it's enough. This isn't a vouch for 360Hz; this is more a discussion on fluidity and... **cue drum noises**

The Amazing Human Visible Feats Of The Millisecond

jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Optimization Hub for beginners

Post by jorimt » 30 Dec 2020, 10:16

Chief Blur Buster wrote:
29 Dec 2020, 22:56
TL;DR: Naturally, by pure math, a higher poll rate does mean lower latency. However, the difference is in the sub-millisecond averages and hard to measure organically on real mice amid the statistical noise of everything else (including antibounce algorithms, OS loading changes, USB chip loading, etc). Rather, most of the poll-rate-increase benefit is in clearly improved mousefeel.
Yup.
diakou wrote:
30 Dec 2020, 00:53
This is often where I am able to make magic happen in regards to avg input latency / effects of displays and more. A 2ms increase or decrease may sound small and unnoticeable, especially with keyboard inputs rather than mouse motion.
In regards to average cumulative input lag, perhaps, but in regards to a 2ms improvement in consistency, while some players can indeed adapt and condition themselves to more/less input lag, such a reduction can absolutely be effective, especially if the game in question relies on the player perfectly timing inputs/combos to character animations in successive repetitions, such as in fighting games.
