Optimization Hub for beginners

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
Brainlet
Posts: 100
Joined: 30 May 2020, 12:39
Contact:

Re: Optimization Hub for beginners

Post by Brainlet » 11 Dec 2020, 18:40

There are far too many variables and inconsistencies in monitors, which just pile on top of the already lackluster method of click-to-photon latency testing. IMO, continuous measurement of "mouse to monitor port" latency (all packets, no exceptions) is the only really accurate way to measure the impact of hardware/software optimizations.
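
For the software half of that chain, here is a minimal sketch of what "all packets, no exceptions" logging could look like (my illustration, not the packet-intercept device described here: it assumes Windows, uses the Raw Input API, and timestamps every mouse packet with QueryPerformanceCounter; it only covers the input side of the chain):

#include <windows.h>
#include <cstdio>

// Timestamp every WM_INPUT mouse packet ("all packets, no exceptions").
static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_INPUT) {
        LARGE_INTEGER t;
        QueryPerformanceCounter(&t);            // timestamp before anything else
        RAWINPUT ri;
        UINT size = sizeof(ri);
        if (GetRawInputData((HRAWINPUT)lp, RID_INPUT, &ri, &size,
                            sizeof(RAWINPUTHEADER)) != (UINT)-1 &&
            ri.header.dwType == RIM_TYPEMOUSE) {
            std::printf("%lld qpc  dx=%ld dy=%ld buttons=0x%04x\n",
                        (long long)t.QuadPart,
                        (long)ri.data.mouse.lLastX, (long)ri.data.mouse.lLastY,
                        (unsigned)ri.data.mouse.usButtonFlags);
        }
    }
    return DefWindowProcA(hwnd, msg, wp, lp);
}

int main() {
    WNDCLASSA wc = {};
    wc.lpfnWndProc = WndProc;
    wc.hInstance = GetModuleHandleA(nullptr);
    wc.lpszClassName = "rawmouselog";
    RegisterClassA(&wc);
    // Message-only window: we only want WM_INPUT, not a visible UI.
    HWND hwnd = CreateWindowA("rawmouselog", "", 0, 0, 0, 0, 0,
                              HWND_MESSAGE, nullptr, wc.hInstance, nullptr);

    // UsagePage 1 / Usage 2 = generic desktop / mouse.
    // RIDEV_INPUTSINK delivers packets even when this window lacks focus.
    RAWINPUTDEVICE rid = { 0x01, 0x02, RIDEV_INPUTSINK, hwnd };
    RegisterRawInputDevices(&rid, 1, sizeof(rid));

    MSG msg;
    while (GetMessageA(&msg, nullptr, 0, 0))
        DispatchMessageA(&msg);
    return 0;
}

A hardware interposer at the mouse cable and at the monitor port would still be needed to measure the rest of the chain.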
Starting point for beginners: PC Optimization Hub

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Optimization Hub for beginners

Post by Chief Blur Buster » 11 Dec 2020, 19:39

Brainlet wrote:
11 Dec 2020, 18:40
There are far too many variables and inconsistencies in monitors, which just pile on top of the already lackluster method of click-to-photon latency testing. IMO, continuous measurement of "mouse to monitor port" latency (all packets, no exceptions) is the only really accurate way to measure the impact of hardware/software optimizations.
The port is, alas, a blurrier boundary than in the old days, because of how drivers and monitors now cooperate via many sync technologies (like VRR). With software knowledge of the current sync technology, and the software's ability to improve upon lag based on that knowledge, things can partially get lower lag and framepace better.

Even slightly better is "mouse to scanline number at monitor port". Compare the global latency gradient of VSYNC ON and VRR versus the subdivided latency gradients of the individual frameslices of VSYNC OFF. In an ideal situation, you want perfect 1:1 sync to scanline #1 for VSYNC ON, and perfect 1:1 sync to the first scanline of each frameslice for VSYNC OFF.

This can require slight timestamp-handling differences based on knowing how the sync technology is configured, since the software can know the scanline number (e.g. via a time offset between two VBIs, or via the D3DKMTGetScanLine() API). Such raster-based compensation can improve things, but is usually overkill. For VSYNC ON, precalculating gametimes to the predicted time of scanline #1 is best, while for VSYNC OFF and VRR, it's best to generate gametimes in realtime on the fly relative to predicted end-of-render (unless rendertimes are fixed, in which case beginning-of-render is OK). This produces the lowest lag, the best framepacing, etc.
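
For the curious, a minimal sketch of the scanline-polling half of this (assuming a recent Windows 10 SDK; error handling, adapter cleanup, and the gametime prediction logic are omitted; link against gdi32.lib):

#include <windows.h>
#include <d3dkmthk.h>   // public D3DKMT thunks, Windows 10 SDK
#include <cstdio>

int main() {
    // Open the kernel-mode handle of the adapter driving the primary display.
    D3DKMT_OPENADAPTERFROMHDC open = {};
    open.hDc = GetDC(nullptr);
    if (D3DKMTOpenAdapterFromHdc(&open) != 0)
        return 1;

    // Poll the raster position; a raster-aware app would timestamp these
    // reads and schedule gametimes relative to scanline #1 (VSYNC ON) or
    // to each frameslice boundary (VSYNC OFF).
    for (int i = 0; i < 20; ++i) {
        D3DKMT_GETSCANLINE scan = {};
        scan.hAdapter = open.hAdapter;
        scan.VidPnSourceId = open.VidPnSourceId;
        if (D3DKMTGetScanLine(&scan) == 0)
            std::printf("scanline %u  inVBlank=%d\n",
                        scan.ScanLine, scan.InVerticalBlank ? 1 : 0);
        Sleep(1);
    }
    return 0;
}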

So we need multiple lag benchmarks (like the VSYNC ON lag benchmark and the VSYNC OFF lag benchmark).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Hotdog Man
Posts: 6
Joined: 01 Dec 2020, 13:35

Re: Optimization Hub for beginners

Post by Hotdog Man » 12 Dec 2020, 20:29

Great guide. The more I dig, the deeper I realize the PC optimization rabbit hole goes. It totally makes sense as a logical progression of optimizing your gaming performance, especially given the incredible stakes of professional eSports gaming. For those who think this is niche and unneeded - all I can say is I hope you can have a moment of reflection when you hear young gamers saying to each other "bro... you play on unoptimized Windows?!"

diakou
Posts: 83
Joined: 09 Aug 2020, 11:28

Re: Optimization Hub for beginners

Post by diakou » 13 Dec 2020, 10:52

Hotdog Man wrote:
12 Dec 2020, 20:29
Great guide. The more I dig, the deeper I realize the PC optimization rabbit hole goes. It totally makes sense as a logical progression of optimizing your gaming performance, especially given the incredible stakes of professional eSports gaming. For those who think this is niche and unneeded - all I can say is I hope you can have a moment of reflection when you hear young gamers saying to each other "bro... you play on unoptimized Windows?!"
The lower latencies get, the less we are able to accept inconsistencies. It’s vicious.

It is IMPOSSIBLE to feel nuanced problems if you are at 80ms and have NEVER FELT LOWER.

Once you reach 20ms, even 1-2ms of variation starts subconsciously mattering a lot, and you can often tell, depending on how latency-sensitive your action is and how perceptive and in-tune you are as a person.

This is the undeniable reason why we already have 13-year-old Fortnite players obsessed with lag. The landscape has changed and keeps changing. Denial is not possible anymore; it is genuinely just noise that gets filtered out now. Even big companies have started using latency and optimization as a marketing strategy (NVIDIA and Reflex; it covered a third of their presentation).


This new trend turned very steep the moment affordable lower latencies started becoming a commodity (from 100ms down to 40-50ms): 144Hz monitors that are a lot, lot more affordable, powerful setups that push more frames than in the past for much cheaper, etc.

But the moment people finally get these lower latencies, they start noticing problems that effectively could not exist before, because the effect wasn't noticeable. I have friends who switched from 60Hz, oblivious, to 144Hz and later 240Hz. The first day, they're shocked at how much of a difference it makes to their gameplay at a high level. The next month, they're messaging me asking why there are stutters and "weird input lag" (they mean input variation/inconsistency).
What happened?
Well, at 80ms they were varying between 70-80ms, but that variation is incredibly hard to notice when there's such a high lag number to begin with. Then they got down to 25ms and were varying between 20-25ms. That is incredibly noticeable (at least for those most in tune with their game).

Did their skill change with lower latency? No, not necessarily. But are they able to draw out more of the potential and ability that was already present but couldn't fully be used? Absolutely.

In fighting games, certain moves are balanced around being +1/+2 or -1/-2 frames (16-32ms). In a surface-level, practical way, removing 16ms of latency is to an extent as if you undid that balance! (Not exactly; this is a surface-level look.)

When people respond with "a pro would beat you no matter the hardware", it constantly assumes the claim is that better gear increases your skill. That can obviously happen, but not instantly; it happens with time, through better progression. What does happen, however, as I explained, is that the player is able to unleash more of what they already knew and had innately. For some, it's genuinely a lot like removing a limiter.

It's very frustrating to see misconceptions, or a false understanding of what's happening, in terms of latency reduction, latency perception, and skill.

There is one undeniable fact: the pros are slowly starting to adopt the low-latency landscape more and more, and pros copy pros FAST. Then the fans and average players will copy the pros, and before you know it, in 2022 having more than 30ms of total system latency will be unacceptable as an ENTRY-LEVEL REQUIREMENT if you're trying to compete on fair terms.

———————————————-

Regarding Brainlet's optimization hub: out of a lot, and I mean a lot, of similar-looking things, this one has a very "down to work" approach. It gives very direct links to research for yourself and is a surface-level look into where to start and where to learn. It's not some hocus-pocus-looking approach, as done by others in the past.

xeos
Posts: 43
Joined: 12 Jul 2018, 14:54
Contact:

Re: Optimization Hub for beginners

Post by xeos » 24 Dec 2020, 20:50

Brainlet wrote:
10 Dec 2020, 15:19
While I agree that evidence-based measurements are very important, I also dislike the current approach of click-to-photon measurements.
Sure. It's not great to have to hand-analyze a bunch of photos, and most people don't have the patience to do it enough times to get reliable numbers. At least for measuring the monitor's contribution to lag, the kind of tool I make is much preferable.
Measure display input lag the cheap way or the best way (IMHO, but I'm biased).

Brainlet
Posts: 100
Joined: 30 May 2020, 12:39
Contact:

Re: Optimization Hub for beginners

Post by Brainlet » 24 Dec 2020, 22:08

xeos wrote:
24 Dec 2020, 20:50
Brainlet wrote:
10 Dec 2020, 15:19
While I agree that evidence-based measurements are very important, I also dislike the current approach of click-to-photon measurements.
Sure. It's not great to have to hand-analyze a bunch of photos, and most people don't have the patience to do it enough times to get reliable numbers. At least for measuring the monitor's contribution to lag, the kind of tool I make is much preferable.
I dislike high speed cameras as well. I can't stress enough how important it is to get the full data of all latency deltas (plus further conversion into min/max/avg/0.1%/1% etc. if necessary). Hypothetically, at rock-stable 10ms latency deltas, a variation of 0.5ms has a gigantic impact on perceived mouse movement, but it would be written off as "normal deviation" in click-to-photon tests. In reality, monitor variables (the colors, darkness, etc. of the area you're testing in) will also heavily skew results, hence my desire for a dual-point packet-intercept device that eliminates the monitor from the system-latency part of testing (since most optimizations are done for that chain). Of course, extending it to a triple-point device (with an LDAT-like device as the last step) would be ideal.
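
As a minimal sketch of the kind of summary such a device should produce (the percentile convention here is my own assumption; the point is keeping the full distribution instead of one averaged number):

#include <algorithm>
#include <cstdio>
#include <numeric>
#include <vector>

struct LatencyStats { double min, max, avg, p99, p999; };

// Collapse a continuous log of per-packet latency deltas (milliseconds)
// into min/max/avg plus worst-case percentiles. Assumes a non-empty log.
LatencyStats summarize(std::vector<double> ms) {
    std::sort(ms.begin(), ms.end());
    auto pct = [&](double p) {          // nearest-rank style percentile
        return ms[static_cast<size_t>(p * (ms.size() - 1))];
    };
    double sum = std::accumulate(ms.begin(), ms.end(), 0.0);
    return { ms.front(), ms.back(), sum / ms.size(), pct(0.99), pct(0.999) };
}

int main() {
    std::vector<double> log = { 9.8, 10.1, 9.9, 10.0, 10.5, 9.7, 10.2 }; // demo data
    LatencyStats s = summarize(log);
    std::printf("min %.2f  max %.2f  avg %.2f  99%% %.2f  99.9%% %.2f\n",
                s.min, s.max, s.avg, s.p99, s.p999);
    return 0;
}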

With click-to-photon tests, time is also a huge factor (even if automated). Testing every random setting 24/7 for 1-2 weeks straight @ 1-2 clicks per sec (imo the minimum to get any conclusive data) is EXTREMELY time consuming; one week at 1 click per second is already ~600,000 samples, and that's per setting, so you'd spend YEARS constantly testing.
Starting point for beginners: PC Optimization Hub

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Optimization Hub for beginners

Post by Chief Blur Buster » 24 Dec 2020, 22:12

Brainlet wrote:
24 Dec 2020, 22:08
I dislike high speed cameras as well.
Photodiodes are de facto single-pixel cameras. Easy for a hobbyist to automate.

But, cameras are superior to photodiodes when properly automated.

Imagine a high speed camera like a photodiode matrix.

A 100x100 photodiode array capable of latency-mapping the entire 2D surface of a LCD panel.

I wish there were proper APIs and high speed buses that could accept high speed camera data into a GPU for real-time processing -- automated latency tests of the entire screen's surface! You can use QR-code rectangles to allow an app to initially align the screen matrix to the camera matrix, then properly dynamically autocalibrate camera gamut to screen gamut, then automation becomes possible. With an accurately pixel-mapped 100x100 camera (like 10,000 photodiodes), you can theoretically measure 10,000 separate LCD GtG's going on simultaneously, speeding up full-range GtG heatmapping too.
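
As a sketch of what that per-pixel automation could look like once the camera is aligned and calibrated (my illustration; it simply treats each camera pixel as a photodiode and records each pixel's first threshold crossing):

#include <cstdint>
#include <vector>

// frames[t] is one aligned, grayscale W*H capture taken every intervalMs.
// Returns, per pixel, the time of the first threshold crossing -- i.e. a
// 2D latency map of the panel. -1 means the pixel never transitioned.
std::vector<float> latencyMap(const std::vector<std::vector<uint8_t>>& frames,
                              int w, int h, double intervalMs,
                              uint8_t threshold = 128) {
    std::vector<float> map(static_cast<size_t>(w) * h, -1.0f);
    for (size_t t = 0; t < frames.size(); ++t)
        for (size_t i = 0; i < map.size(); ++i)
            if (map[i] < 0.0f && frames[t][i] >= threshold)
                map[i] = static_cast<float>(t * intervalMs);  // first crossing
    return map;
}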

Thus, cameras are (theoretically) superior in automated analysis depth versus photodiodes (single-pixels).

The tech just isn't yet affordable to hobbyists.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Brainlet
Posts: 100
Joined: 30 May 2020, 12:39
Contact:

Re: Optimization Hub for beginners

Post by Brainlet » 24 Dec 2020, 22:20

Chief Blur Buster wrote:
24 Dec 2020, 22:12
Brainlet wrote:
24 Dec 2020, 22:08
I dislike high speed cameras as well.
Photodiodes are de facto single-pixel cameras. Easy for a hobbyist to automate.

But, cameras are superior to photodiodes when properly automated.

Imagine a high speed camera like a photodiode matrix.

A 100x100 photodiode array capable of latency-mapping the entire 2D surface of a LCD panel.

I wish there were proper APIs and high speed buses that could accept high speed camera data into a GPU for real-time processing -- automated latency tests of the entire screen's surface! You can use QR-code rectangles to allow an app to initially align the screen matrix to the camera matrix, then properly dynamically autocalibrate camera gamut to screen gamut, then automation becomes possible. With an accurately pixel-mapped 100x100 camera (like 10,000 photodiodes), you can theoretically measure 10,000 separate LCD GtG's going on simultaneously, speeding up full-range GtG heatmapping too.

Thus, cameras are (theoretically) superior in automated analysis depth versus photodiodes (single-pixels).

The tech just isn't yet affordable to hobbyists.
I agree, actual professional-grade high-speed cameras (5-digit framerates) would be great, but due to the absurd prices it sadly just won't happen.
Starting point for beginners: PC Optimization Hub

xeos
Posts: 43
Joined: 12 Jul 2018, 14:54
Contact:

Re: Optimization Hub for beginners

Post by xeos » 26 Dec 2020, 15:58

Brainlet wrote:
24 Dec 2020, 22:20

I agree, actual professional-grade high-speed cameras (5-digit framerates) would be great, but due to the absurd prices it sadly just won't happen.
Sure. But why ignore the totally affordable single-pixel cameras that do run at that framerate? E.g. Leo Bodnar, PiLagTesterPro, Time Sleuth?
Measure display input lag the cheap way or the best way (IMHO, but I'm biased).

Brainlet
Posts: 100
Joined: 30 May 2020, 12:39
Contact:

Re: Optimization Hub for beginners

Post by Brainlet » 26 Dec 2020, 16:40

xeos wrote:
26 Dec 2020, 15:58
Brainlet wrote:
24 Dec 2020, 22:20

I agree, actual professional-grade high-speed cameras (5-digit framerates) would be great, but due to the absurd prices it sadly just won't happen.
Sure. But why ignore the totally affordable single-pixel cameras that do run at that framerate? E.g. Leo Bodnar, PiLagTesterPro, Time Sleuth?
A little bit of prejudice. I've seen way too many people claim "it doesn't matter" or "margin of error" when they see a sub-0.5ms difference, while in reality it can completely alter motion perception, since min/max are often impacted as well.
Starting point for beginners: PC Optimization Hub
