Measurement Device for Display Lag + Discussion of Lag Standards

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: Measurement Device for Display Lag

Post by flood » 15 May 2020, 18:04

almost as breakthrough as Einstein's formula, in our opinion
this is a comparison of the theory of input lag to "Einstein's formula". that's what i'm calling far-fetched. the comparisons between yourself and einstein... that's a different thing.

nothing personal. just saying, the concepts/theories/equations/formulas/whatever relevant to the subject of input lag, are far far far simpler than any of einstein's works.

User avatar
Chief Blur Buster
Site Admin
Posts: 11725
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Measurement Device for Display Lag

Post by Chief Blur Buster » 15 May 2020, 18:42

Fair, fair.

The reason the comparison was mentioned:

Basically, I turn complex display concepts into something much more easily understandable, with human-visible comparisons, high speed cameras, video game demos, motion tests, real-world hands-on applicability, etc.

My students have compared this to merging complex concepts into something easy to understand, much like the sudden concept-simplification achieved by E=mc² .... For the classroom-simplifying antics that I do, my students tell me that is no exaggeration.

Before my class, people thought of display mechanics as "unknowable calculus or Greek"; afterwards, the same students understand displays at something close to the beautiful simplicity of E=mc² ... Ignore the numbers, and focus on the "scale of simplification of concepts". By that line-item comparison, Blur Busters tends to be no exaggeration in simplifying high-Hz concepts.

Although I don't have a university degree (partly because I am profoundly deaf and rely on my eyes, and I had difficulty following teachers), I grasp a lot of new refresh rate concepts far faster than the majority of university graduates. (My deafness since birth also happens to be why I have a Caption Screen in my classrooms, and why I use unconventional visual-teaching methods.)

My work has already generated researcher invitations, because I release ideas that turn out to be paper-worthy -- often long after I published the concept, test, or device.

But yes, you're right, Einstein did far more complex math than I ever will do.

However, something distinguishes me and gives me an advantage in display temporal mechanics that few have:

My brain can emulate motion artifacts before the display is even invented (much like a display-emulator equivalent of photographic memory). This is how I invented many of the TestUFO patterns and the pursuit camera -- it took me less than 30 minutes to invent the Sync Track, and it was published at TestUFO more than a year before I was invited by researchers to create the peer-reviewed conference paper with NIST/NOKIA/Keltek. So my Sync Track unceremoniously existed for more than a year before the paper did! It has since revolutionized display motion blur photography by reviewers and YouTubers, including LinusTechTips, RTINGS, TomsHardware, etc., having replaced a $30,000 lab instrument with an equally-accurate $0-to-$100 invention.

Few people understand what generates www.testufo.com/eyetracking or www.testufo.com/persistence, yet I saw it in my brain long before I created the test pattern. I even correctly brain-emulated how the pattern interestingly changes at 5ms GtG versus 1ms GtG. And I correctly mentally emulated the algorithm for www.testufo.com/vrr and it worked on the first try! And I have dozens of new interesting TestUFO tests I want to create, time-permitting -- they're already imagined correctly, just a bit ADHD or time-starved to post them all.

See, I brain-see a lot of motion artifacts long before a display or test pattern is invented. Few people can do that! Some people have unusual attributes like synesthesia or photographic memory. I have something slightly different (I don't know why; possibly a rare side effect of being born deaf).

I have this attribute of being able to imagine a display in my brain -- to emulate a display or a display prototype in my memory (including temporals such as GtG, ghosting, contouring, temporal dithering, etc.) -- at least to a certain depth (but far better than practically everyone else I've met). I bet a few others can too, but most can't.

One NVIDIA scientist has already reached out to me to take advantage of my display brain-emulation, and I explained a specific type of artifact far better than all their internal employees did. They ran some tests that confirmed what I told them -- without me even looking at a display, except for one static photograph that only gave a faint hint of what was happening, which was enough of a clue to run a mental emulation that matched exactly what they saw.

(...Spend $100,000 building a prototype, or trial-balloon it on my brain first to narrow the field a little bit? No brainer. Pun intended...)

Long time readers are familiar with my partnering with NVIDIA for various initiatives (like this NVIDIA paper (see page 2 for Blur Busters), or the NVIDIA partnership for the motion test for the 360Hz monitor (scroll halfway down for Blur Busters)). I now casually collaborate with about a dozen scientists there. Further back, the G-SYNC giveaway was what launched Blur Busters Forum, and Blur Busters was boosted by LightBoost (an NVIDIA technology) in 2012. My display testing inventions have helped dozens of reviewers, YouTubers, and sites that indirectly help sell NVIDIA product (i.e. high Hz). It's clearly no accident that NVIDIA researchers are huge fans of Blur Busters -- both personally and professionally.

My brain having the ability to predict artifacts in a prototype -- well before it is made -- helps me greatly when working in advance of seeing displays. I see large numbers of displays during my frequent convention travels (pre-COVID), running TestUFO on-the-go from my Razer 240Hz laptop connected to various displays. To the disbelief of quite a few, I generally reliably know when algorithms will look good or bad on many displays before seeing them (e.g. generic software BFI can look so-so on many common TN LCDs, yet amazing on OLEDs, indistinguishable from hardware BFI). I show people the actual results, and people instantly agree. TestUFO is a great show-and-tell. The Blur Busters training class (as seen in the photo on the previous page) is a quite fun way to teach with show-and-tells, hands-on high-Hz with 240Hz displays, high speed videos recorded and played back on the spot in front of the class, and more.

Now, back to the Blur Busters classroom:
Classroom training is something I only began doing in 2019, although COVID has put a hold on my classes. I am unusually wordy in forums (a side effect of my born-deafness and reliance on text-based communication; I never made phone calls until age 15) -- but I have managed to compress thousands of words into visual demos I do in person, to maximize teaching throughput, and the most fun hands-on display-temporals exercises, in the shortest possible time. That's where the people I train (even managers who usually leave the knowledge to their engineers) have profusely complimented my teaching to Einsteinesque levels. I thought it was an exaggeration, but right now I think I'll let myself boast a little sometimes...

Either way:

Currently, most people tell me that no other website teaches high-Hz concepts as well as Blur Busters does -- and apparently I do a kick-ass job in person too.

Nonetheless, you're right -- Einstein did more complex math than I ever will do.

So fair enough. ;)
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

1000WATT
Posts: 391
Joined: 22 Jul 2018, 05:44

Re: Measurement Device for Display Lag

Post by 1000WATT » 16 May 2020, 04:36

Chief Blur Buster wrote:
15 May 2020, 15:59
Even TOP > CENTER > BOTTOM can change to TOP < CENTER < BOTTOM can change to TOP = CENTER = BOTTOM on the same display based on different settings.
Are you talking about hz + fps? And the number of break lines?
I often do not clearly state my thoughts. google translate is far from perfect. And in addition to the translator, I myself am mistaken. Do not take me seriously.

AndreasSchmid
Posts: 3
Joined: 15 May 2020, 08:51

Re: Measurement Device for Display Lag

Post by AndreasSchmid » 17 May 2020, 13:09

flood wrote:
15 May 2020, 17:21
imo the only thing i care about is
how much slower is a display than a "perfect display", if both are receiving the same video signal (and hence running at same refresh rate and whatever).

the best way to measure would be with an oscilloscope, probing the video signal (or some signal inside the monitor), and a photodiode+amplifier also connected to the scope.

comparing against a crt isn't too bad either. crt's are a pretty good approximation of a perfect display. the issue is of course that crts cannot always receive the same signal as lcd/whatever displays.
As far as I can tell, this simplifies the problem a bit too much:

1. A "perfect display" would be a display with zero lag, wouldn't it? This obviously does not exist, so if we would do a comparison, we would have to compare to a "non-perfect" display. This way, we do not get absolute values for our results. Additionally, latency is not the same each update but rather a distribution of times. If both, reference display and display under test, have varying latencies, even measurement rows would be confounded by this variance.

2. I agree that you could theoretically measure display lag with an oscilloscope and a photo sensor, but HDMI has a bandwidth of around 5 GBit/s (HDMI 1.0) to 48 GBit/s (HDMI 2.1). An off-the-shelf oscilloscope (while already quite expensive) misses such sampling rates by orders of magnitude, so you would need a really high-end device for this approach. A good while ago, the German display technology website prad.de actually built such a setup to validate established testing methods for display lag (can be found here).

3. Apart from not being an actual perfect display, a CRT (like every other monitor) has to get its signal from a video source. How can we be sure that the VGA signal for the CRT and the HDMI signal for the display under test are really sent simultaneously? Graphics cards are typically proprietary products and have to be viewed as black boxes -- we cannot tell exactly what happens under the hood, and therefore the measurements are not reliable.

Because of those limitations, we used an open source device (a Raspberry Pi) so we can accurately trigger the measuring timer when the signal is sent. It also makes our tester affordable and easy to replicate, which could lead to comparable results across the community.
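For illustration, a minimal Python sketch of this idea could look like the following. The GPIO pin number, the photodiode comparator circuit, and the crude framebuffer-write trigger are assumptions made for the sake of the example, not our actual device; a real tester would also account for scanout position, GtG thresholds, and the GPU pipeline:

Code: Select all

# Minimal sketch: timestamp the moment we push a white frame toward the display,
# then timestamp the photodiode edge. Pin number, sensor circuit and the crude
# framebuffer trigger are illustrative assumptions only.
import time
import RPi.GPIO as GPIO

PHOTODIODE_PIN = 17  # assumed BCM pin wired to a photodiode + comparator

GPIO.setmode(GPIO.BCM)
GPIO.setup(PHOTODIODE_PIN, GPIO.IN)

def measure_once(framebuffer="/dev/fb0", width=1920, height=1080, bpp=4):
    """Flip the framebuffer to white and wait for light to reach the sensor."""
    white = b"\xff" * (width * height * bpp)
    with open(framebuffer, "wb") as fb:
        t_sent = time.monotonic()      # timer starts when we generate the signal
        fb.write(white)                # crude full-screen white flash
        fb.flush()
    # Block until the photodiode sees the screen brighten (5 second timeout).
    if GPIO.wait_for_edge(PHOTODIODE_PIN, GPIO.RISING, timeout=5000) is None:
        return None
    return (time.monotonic() - t_sent) * 1000.0   # lag in milliseconds

try:
    lag_ms = measure_once()
    print(f"lag: {lag_ms:.2f} ms" if lag_ms is not None else "sensor timeout")
finally:
    GPIO.cleanup()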

User avatar
Chief Blur Buster
Site Admin
Posts: 11725
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Measurement Device for Display Lag + Discussion of Lag Standards

Post by Chief Blur Buster » 17 May 2020, 13:18

flood wrote:
15 May 2020, 17:21
imo the only thing i care about is
how much slower is a display than a "perfect display", if both are receiving the same video signal (and hence running at same refresh rate and whatever).
Yup, this is excellent, and I agree with flood here.

Lag standards also need to be adjustable to permit using real life as a valid lag reference, because displays sometimes have to be benchmarked against real life.

Sometimes we definitely need the IDMS method. It's perfect for the right job. But the toolbox is big. "Right tool for the right job" sometimes makes the IDMS standard total crap -- just like a Phillips screwdriver isn't good for slotted screws. Or worse, trying to use a staple gun as a screwdriver. Or using a hammer as a knife. Flatly, sometimes the IDMS lag standard is the wrong tool for the job.

Bottom line, IDMS can be perfect for some applications, but woefully imperfect for other applications. The IDMS standard is not futureproofed for the refresh rate race to retina refresh rates, for many esports scenarios, or for many real-life emulation scenarios. IDMS error margins do shrink at higher refresh rates, but we're not going to eliminate a lot of those error margins for a long time.

IDMS is chiefly a 0-framebuffer-depth VSYNC ON GtG-50% lag measurement standard, and does not accommodate scanout-bypassing subrefresh latency methods (virtual reality raster beam racing techniques, VSYNC OFF techniques, variable-scanout-velocity displays, interactions between sequential cable delivery and global strobe flash backlights, etc.). It also doesn't account for weird lag-interaction behaviours (both beneficial and deleterious) between different components of the GPU-to-display chain.

There are some scalers/TCONs that have scanout-distorting mechanics (more visible on Plasma/DLP than LCDs, but some LCDs occasionally have this too -- e.g. 240Hz panels that need to buffer lower-Hz signals to scan out at full 1/240sec velocity, or the distorted lag mechanics of VSYNC OFF and "virtual reality beamracing" techniques, like NVIDIA VRWorks front-buffer techniques), and some of these behaviours are fully co-operative between GPU and display. Yesterday it was analog at a perfect constant pixel clock. Today it's digital. Now it's micropackets. Then we've got VRR and VSYNC OFF. There's now some DSC too. And tomorrow, additional layers such as frame rate amplification layers (e.g. display co-GPUs). There is a slowly increasing integration of latency co-dependence between GPU and display that cannot always be siloed.

Sometimes we DO need to silo it, but sometimes we need to measure Present()-to-photons, to measure the proper gametime:photontime relativity to the real world for every single pixel of the display. Gametime:photontime can jitter differently for different sync technologies and monitor settings, etc., meaning lower lag but worse lag-jitter, and lag-jitter range differences for different pixels -- like a worse lag range at the bottom edge versus the top edge, or vice versa (e.g. a [0..4.1ms] lag jitter range for one edge of the screen and [0...16.7ms] for the opposite edge) -- because of multi-layered unexpected interactions between technologies, such as scanout-compression artifacts (e.g. scan-converting TCONs). Anyway, I've successfully thought-through and formula'd most of these common interactions. I've seen almost all scanout patterns and latency interactions.

Also, erratic stutter is a sub-branch of lag-volatility mathematics. Mathed correctly, latency volatility and erratic stutter both come from the same gametime:photontime divergences. (I'm not talking about regular stutter, like perfect low-framerate 20fps framepaced at exactly 1/20sec -- but erratic stutter.) In the Vicious Cycle Effect, during virtual reality, even a 1ms stutter can become human-visible.

For example, a head turn in virtual reality at 8000 pixels/sec is a slow head turn on an 8K VR headset (even 8K is not sharp enough to be retina when used for VR, because it's blown up to bigger-than-IMAX sizes). 1ms translates to 8 pixels at 8000 pixels/sec, so a 1ms stutter is an 8 pixel jump. Assuming motion blur is sufficiently low (e.g. 0.3ms MPRT like the Valve Index), that 1ms stutter can be visible. A 1ms stutter is mostly invisible on a 60Hz 240p CRT Nintendo game because of the low resolution.
A 1ms stutter is mostly invisible on a 60Hz 1080p LCD because of the LCD's high persistence.
But a 1ms stutter is very human-visible on a low-persistence 8K VR display.
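To make that arithmetic concrete, here is a tiny Python sketch. The "visible when the jump exceeds the blur trail" rule and the pixels/sec figure for the CRT case are simplifying assumptions for illustration, not vision-science thresholds:

Code: Select all

# Back-of-envelope stutter-visibility arithmetic from the paragraph above.
def stutter_jump_px(track_speed_px_per_s, stutter_ms):
    """Pixels the image jumps during an erratic stutter while the eye keeps tracking."""
    return track_speed_px_per_s * stutter_ms / 1000.0

def blur_trail_px(track_speed_px_per_s, mprt_ms):
    """Approximate motion blur trail length from persistence (MPRT)."""
    return track_speed_px_per_s * mprt_ms / 1000.0

cases = [
    ("60Hz 240p CRT (slow pan)",            500,  1.0),   # assumed px/sec, ~1ms phosphor MPRT
    ("60Hz 1080p sample-and-hold LCD",     8000, 16.7),
    ("8K low-persistence VR (0.3ms MPRT)", 8000,  0.3),
]
for label, speed, mprt in cases:
    jump = stutter_jump_px(speed, 1.0)        # a 1ms erratic stutter
    blur = blur_trail_px(speed, mprt)
    verdict = "likely visible" if jump > blur else "likely hidden by blur/resolution"
    print(f"{label}: {jump:.1f}px jump vs {blur:.1f}px blur -> {verdict}")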

So tiny sub-refresh lag error margins are important, and we've seen displays where the top-vs-bottom lag error margins are quite noticeable (e.g. the top having varying absolute lag samples of [15ms...19ms] relative to the computer's frame Present(), and the bottom edge having varying absolute lag samples of [4ms...19ms] relative to the computer's frame Present()) because of all those multilayered lag interactions. These milliseconds often don't matter to most, but can be critical when we're trying to develop a display that mimics real life.

(It's also a small portion of the generic complaint "LightBoost looks stuttery/jittery during VSYNC OFF gameplay". Even opposite edges of the screen have different stutterfeel -- not noticed by most, but noticed if you pay attention, and measurable when doing 1000 lag samples during 120Hz strobing at 240Hz VSYNC OFF on a panel with a scan-converting TCON/scaler. There's a lag volatility difference of 8ms for top vs bottom, which can be relevant if you're beam racing, or if you're doing scanout-lag-bypassing VSYNC OFF.)

The IDMS lag measurement standard is a narrow lag tool in a big lag toolbox.

Even the Blur Busters display lag standard initiative necessarily won't be perfect, but it would provide more futureproof standardization upgrade paths, simply by allowing the introduction of more optional variables (e.g. strobing / strobe crosstalk / faint pre-strobes that cheat lag tests). The first standard will cover the most common reference profiles.
AndreasSchmid wrote:
17 May 2020, 13:09
As far as I can tell, this simplifies the problem a bit too much:

1. A "perfect display" would be a display with zero lag, wouldn't it? This obviously does not exist, so if we would do a comparison, we would have to compare to a "non-perfect" display.
However, we need a "perfect display" reference. It's a necessary part of a future lag standard.

The problem is we now have virtual reality and Holodeck simulators.

The fact is that a "perfect display" already exists: It's called "real life".

And guess what a display sometimes does? It emulates real life.

The great news is that real life is mathematically simple, and a lot of things about real life can be calculated.

The IDMS is a simplistic lag standard that does not have a Venn diagram big enough to include this reference.

I am cross posting here:
Chief Blur Buster wrote:
hmukos wrote:
17 May 2020, 12:20
That's what I don't understand. Suppose we made an input right before "Frame 2" and then it rendered and its torn part was scanned out. How is that different in terms of input lag than if we did the same for "Frame 7"? Doesn't V-SYNC OFF mean that as soon as our frame is rendered, its torn part is instantly shown at the place where the monitor is scanning right now?

Image
Yes, VSYNC OFF frameslices splice into cable scanout (usually within microseconds), and some monitors have panel scanout that is in sync with cable scanout (sub-refresh latency pixel-for-pixel).

What I think Jorim is saying is that higher refresh rates reduce the latency error margin caused by scanout. 60Hz produces a 16.7ms range, while 240Hz produces a 4.2ms range.

For a given frame rate, 500fps will output 4x more frameslice pixels per frame at 240Hz than at 60Hz. That's because the scanout is 4x faster, so a frame will spew out a 4x-taller frameslice at 240Hz than at 60Hz for a given frame rate. Each frameslice is a latency sub-gradient within the scanout lag gradient. So for 100fps, a frameslice has a lag of [0ms...10ms] from top to bottom. For 500fps, a frameslice has a lag of [0ms...2ms] from top to bottom. We're assuming a 60Hz display doing a 1/60sec scan velocity and a 240Hz display doing a 1/240sec scan velocity, with identical processing lag, and thus identical absolute lag.
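A quick Python sketch of that frameslice arithmetic (assuming a simple top-to-bottom scanout with no blanking interval, purely for illustration):

Code: Select all

# VSYNC OFF frameslice arithmetic: slice height scales with scanout speed, and the
# lag sub-gradient inside each slice spans one frametime.
def frameslice(height_px, refresh_hz, fps):
    scanout_s = 1.0 / refresh_hz                      # time to sweep the whole panel
    frametime_s = 1.0 / fps
    slice_px = min(height_px, height_px * frametime_s / scanout_s)  # rows swept per frame
    return slice_px, frametime_s * 1000.0

for hz in (60, 240):
    rows, lag_range_ms = frameslice(1080, hz, 500)
    print(f"500fps @ {hz}Hz: ~{rows:.0f}px frameslices, lag sub-gradient 0..{lag_range_ms:.1f}ms")
# 500fps @ 60Hz:  ~130px frameslices
# 500fps @ 240Hz: ~518px frameslices (4x more pixels per frame, same 2ms sub-gradient)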

Even if average absolute lag is identical for a perfect 60Hz monitor and for a perfect 240Hz monitor, the lagfeel is lower at 240Hz because more pixel update opportunities are occurring per pixel, allowing earlier visibility per pixel.

Now, if you measured the average lag of all 1920x1080 pixels from the infinite set of possible gametime offsets to all actual photons (including situations where NO refresh cycles are happening and NO frames are happening), the average visibility lag for a pixel is actually lower, thanks to the more frequent pixel excitations.

Understanding this properly requires decoupling latency from frame rate, and thinking of gametime as an analog continuous concept instead of a digital value stepping from frame to frame. Once you math this correctly and properly, this makes sense.

Metaphorically, and mathematically (in photon visibility math), it is akin to wearing shutter glasses in real life, flickering at 60Hz versus 240Hz, while walking around (not staring at a screen). Real life will feel slightly less lagged if the shutters of your shutter glasses are flashing at 240Hz instead of 60Hz. That's because of more frequent retina-excitement opportunities from the real-world photons hitting your eyeballs, reducing the average lag of the apparent photons.

Once you use proper mathematics (treat a display as an infinite-refreshrate display) and calculate all lag offsets from a theoretical analog gametime, there's always more lag at lower refresh rates, despite having the same average absolute lag.
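Here's a minimal Python sketch of that math, using a random point in continuous gametime and an arbitrary 10ms of shared absolute processing lag as a placeholder assumption:

Code: Select all

# "Real life as the lag reference": average the wait from a random moment of
# continuous gametime until the next pixel-update opportunity, on top of a fixed
# absolute lag shared by both displays (10ms here is just a placeholder).
import random

def average_visibility_lag_ms(refresh_hz, absolute_lag_ms=10.0, samples=100_000):
    period_ms = 1000.0 / refresh_hz
    total = 0.0
    for _ in range(samples):
        phase = random.uniform(0.0, period_ms)          # random point in continuous gametime
        total += (period_ms - phase) + absolute_lag_ms  # wait for next refresh, then fixed lag
    return total / samples

for hz in (60, 240):
    print(f"{hz}Hz: ~{average_visibility_lag_ms(hz):.1f} ms average vs continuous gametime")
# ~18.3 ms at 60Hz vs ~12.1 ms at 240Hz despite identical 10 ms absolute lag
# (analytically: absolute lag + half the refresh period).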

Most lag formulas don't factor this in -- not even absolute lag. Nearly all lag test methodologies neglect to consider the more-frequent-pixel-excitation-opportunity factor, if you're trying to use the real world as the scientific lag reference.

The existing lag numbers are useful for comparing displays, but they neglect to reveal the lag advantage of 240Hz over 60Hz at identical absolute lag, thanks to the more frequent sampling rate along the analog domain. To correctly math this out requires a lag test method that decouples from digital references, and calculates against a theoretical infinite-Hz instant-scanout mathematical reference (i.e. real life as the lag reference). Once the lag formula does this, suddenly the "same absolute lag" number feels like it's hiding many lag secrets that many humans do not understand.

We have a use for this type of lag reference too, as part of a future lag standard, because VR and Holodecks are trying to emulate real life, and thus we need lag standards that have a Venn diagram big enough to use this as a reference.

Lag is a complex science.
So you see, two different-Hz displays can have identical lag numbers with certain sync technologies, but the higher-Hz display is less laggy relative to real life, thanks to more frequent photon opportunities.

We need an ironclad lag-measurement standard flexible and powerful enough to meet the needs of the 21st century.

Displays are getting more complex. Sync technologies are getting more complex (e.g. the invention of variable refresh rate), injecting GPU-display co-operative behaviors that cannot be siloed like in classical fixed-Hz displays. Also, some future displays will have a co-GPU to help with frame rate amplification technologies, creating a more intimate connection between the display and the GPU for some important applications. Most researchers are still stuck in "fixed-Hz think", while Blur Busters is thinking far beyond.

Also, you might have seen some of my Tearline Jedi work controlling VSYNC OFF tearlines to create microsecond latency for TOP/CENTER/BOTTOM, generating pixels streamed out of the GPU output in realtime (bypassing the framebuffer) -- see the Tearline Jedi Thread. This is how the Atari 2600 did it: since the Atari 2600 did not have a frame buffer, it had to generate pixels in real time. This is possible to do with GPUs by using VSYNC OFF to bypass frame buffers, since VSYNC OFF tearlines are just scan line rasters. I'm able to create sub-millisecond Present()-to-photons for the entire screen surface of a display (at least on a CRT) with these tricks. None of the TOP/CENTER/BOTTOM lag dissonances. It also partially explains why some esports players use VSYNC OFF with ultrahigh CS:GO framerates (500fps): to create consistent TOP-to-BOTTOM lagfeel. Most lag formulas neglect to acknowledge the realtime streaming of VSYNC OFF frameslices as a scanout-lag-bypassing mechanism, and this is why Leo Bodnar numbers (and IDMS numbers) are useless for CS:GO players who win thousands of dollars in esports.
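For the curious, here is a conceptual Python sketch of the raster-chasing idea. How you obtain a VBLANK timestamp is platform-specific and assumed here, the scanline total is an illustrative value, and this is not the actual Tearline Jedi source:

Code: Select all

# Tearlines appear wherever the raster happens to be at Present() time, so steering
# a tearline is just timing Present() against the estimated raster position.
import time

REFRESH_HZ = 240.0
TOTAL_SCANLINES = 1125   # visible + blanking lines; read the real modeline in practice

def estimated_raster_line(t_last_vblank):
    """Estimate which scanline is being scanned out right now."""
    period = 1.0 / REFRESH_HZ
    phase = ((time.monotonic() - t_last_vblank) % period) / period
    return int(phase * TOTAL_SCANLINES)

def present_time_for_tearline(target_line, t_last_vblank):
    """Monotonic time at which Present() would place the tearline at target_line."""
    period = 1.0 / REFRESH_HZ
    return t_last_vblank + (target_line / TOTAL_SCANLINES) * period

def busy_wait_until(deadline):
    """Spin-wait; sleep() is far too coarse for sub-millisecond raster work."""
    while time.monotonic() < deadline:
        pass

# Example: steer the next tearline into the blanking interval (invisible),
# given a VBLANK timestamp from some platform-specific source.
t_last_vblank = time.monotonic()
busy_wait_until(present_time_for_tearline(TOTAL_SCANLINES - 10, t_last_vblank))
# ...call your renderer's Present()/SwapBuffers() here with VSYNC OFF...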

Now, this lag math becomes complex when we involve scan-converting TCONs (e.g. 240Hz fixed-horizontal-scanrate panels like the BenQ XL2546 (DyAc OFF) can only refresh a 60Hz refresh cycle in 1/240sec, unlike scanrate-multisync panels (like the BenQ XL2411P) whose scanout slows down to match the Hz and scanrate of the cable signal). And it becomes even more complex when we add strobing (which converts scanout visibility to global visibility). We can have up to 3 different multilayered vertical latency gradients and lag-gradient compressions (some of them in the opposite direction to the others), because of sync technology setting changes at the GPU level, plus things like the monitor's VRR setting or strobing setting, all interacting simultaneously with each other to create latency behaviours that never reveal themselves in VESA / IDMS / etc.
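A deliberately simplified Python model of those two panel behaviours fed a 60Hz signal (GtG and scaler processing lag ignored; purely illustrative, not measured data from either monitor):

Code: Select all

# Multisync panel: paints each row as the cable delivers it.
# Fixed-scanrate 240Hz panel: buffers, then sweeps the whole refresh in 1/240 sec,
# starting as late as it must to avoid outrunning the cable.
def multisync_lag_ms(row_fraction, cable_hz=60.0):
    return row_fraction * 1000.0 / cable_hz

def scan_converted_lag_ms(row_fraction, cable_hz=60.0, panel_sweep_hz=240.0):
    cable_ms = 1000.0 / cable_hz
    sweep_ms = 1000.0 / panel_sweep_hz
    return (cable_ms - sweep_ms) + row_fraction * sweep_ms

for label, frac in (("TOP", 0.0), ("CENTER", 0.5), ("BOTTOM", 1.0)):
    print(f"{label:6s} multisync {multisync_lag_ms(frac):5.1f} ms | "
          f"scan-converting {scan_converted_lag_ms(frac):5.1f} ms")
# TOP: ~0.0 vs ~12.5 ms, CENTER: ~8.3 vs ~14.6 ms, BOTTOM: ~16.7 vs ~16.7 ms
# -> the vertical lag gradient is compressed toward the bottom, not eliminated.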

The lag formula needs to be flexible and futureproof enough to accommodate such situations too. The great news is that there's a path to such a lag standard, and I would like to help work on this.

That's why some classic lag measurement formulas are garbage for some real-world lag-measurement applications. And we say so for an excellent and legitimate scientific reason.

We're the people who conceptualized the idea of the "Holodeck Turing Test", i.e. real life matching virtual reality and vice versa, a blind test between VR goggles and transparent ski goggles. It will be a long time before this is successful, but one component is ultrahigh refresh rates, and measurement standards that are flexible enough that adjusting a variable allows you to use "real life" as a valid lag measurement reference, not only a CRT. (Real life = mathematically infinite Hz and infinite framerate, for all practical intents and purposes.)

The bottom line is that "real life" is a valid lag measurement reference. Sometimes we need that. The problem is that IDMS hides a lot of lag (which varies a lot between identical-IDMS displays), and that prevents the IDMS lag measurement method from being used in all the lag-measurement applications that we need.

Use Case #1: Two displays with identical IDMS numbers, but different sync settings, can swap lag rankings -- one display superior for a console game (one sync setting) and worse for a PC esports game (a different sync setting), and vice versa.

Conclusion #1: IDMS is an insufficient standard when professional gamers need to shop for a display for a specific game. This is due to the non-separable interaction/integration between GPU and display lag behaviors, which are de facto co-operative.

Use Case #2: Two displays with identical IDMS numbers can have different lag relative to real life, when we're trying to emulate real life (VR or Holodeck applications).

Conclusion #2: IDMS is an insufficient standard when we need to compare against real life as the chosen lag reference.

There's even a #3, #4, and #5, but I think I've dropped enough microphones, slam-dunk, in hushing this debate about the need for superior latency measurement standards/formulas. But there exist GPU interactions where "Display A has higher lag than Display B" at one setting becomes "Display B has higher lag than Display A" at a different GPU setting on the SAME displays. Yes, a GPU-side setting. Yes, the SAME displays. A non-separable interaction!

The boom of the billion-dollar esports industry and VR industry revealed massive shortcomings of current lag-measurement standards.

IDMS is a great standard. It is useful when it is Right Tool for Right Job.

But, unfortunately, it is merely one tool in a huge "display lag measurement" toolbox.

Blur Busters is often the Skunkworks or Area 51 of the future of displays, and so lag measurement standards are far behind Blur Busters.

Also, if any papers or research are modified because you read something on Blur Busters, we would appreciate my name (Mark Rejhon) and Blur Busters being credited in any future lag papers. We're looking to increase our lag credentials over the long term -- happy to provide research for free in exchange for peer-review credit, much like I did for a lot of other things, like the pursuit camera.

A universal lag standard (metaphorically, a grand unifying theory of physics in importance to the latency-measuring universe) is sorely needed. The universal latency standard is intended to allow you to choose a preferred lag reference, and to be adjustable to match the majority of past lag measuring standards (SMTT, Leo Bodnar, IDMS, VESA, etc.). That way, existing monitor reviewers can publish numbers backwards-compatible with their existing test methodologies, WHILE also publishing superior standardized numbers alongside. Otherwise, it becomes like the xkcd cartoon of a new standard just becoming yet another standard, because it won't be adopted. To be the real standard, it has to be powerful and unifying, to wide critical acclaim (like other Blur Busters initiatives).

We have successfully found a formula that becomes a superset of nearly all past lag measurement standards, combined. (Within knowable error margins, of course -- e.g. the SMTT method is subject to camera sensor scanout artifacts as well as human-vision subjectivity.)

We believe Blur Busters / TestUFO has enough brand recognition in the testing industry to trailblaze a popular new display lag standard that is compatible with everybody's lag methodologies (simply by tweaking the variables in the standard), while also converging into unified industry standards that allow long-term cross-comparison (Site A lag data becomes easier to compare to Site B lag data). Colorimetry finally achieved that (universal standards that are now consistently used by all testers), and Blur Busters aims to standardize such temporal unification.

One can easily continue to use the IDMS standard as a subset within the Blur Busters display lag testing standard initiative -- simply by disclosing the lag stopwatch variables that are plugged into the Blur Busters display lag testing standard. So it doesn't have to replace IDMS, but can be a superset standard that helps improve latency understanding. Much like people still use Newtonian mathematics for things that don't have enough speed to generate Relativity Theory error margins, or stick to geometry for things that don't require complex calculus. My thinking is very left-field different, much like geometry-vs-physics, and that's how I innovate with Blur Busters.
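As a purely hypothetical illustration of the "disclose your stopwatch variables" idea (all field names and profile names below are invented for the sake of example; this is not the actual Blur Busters standard):

Code: Select all

# Each published lag number carries the variables that define how it was stopwatched,
# so an IDMS-style number and a Present()-to-photons number can coexist as different
# profiles over the same kind of raw data.
from dataclasses import dataclass
from statistics import mean

@dataclass
class LagStopwatch:
    start_event: str      # e.g. "signal_first_pixel", "Present()", "input_event"
    stop_event: str       # e.g. "GtG_50%", "first_photons"
    screen_position: str  # e.g. "center", "top", "bottom", "per_pixel"
    sync_mode: str        # e.g. "VSYNC ON", "VSYNC OFF", "VRR", "strobed"
    reference: str        # e.g. "cable_signal", "CRT", "real_life_infinite_Hz"

def summarize(samples_ms, sw: LagStopwatch):
    return (f"{mean(samples_ms):.1f} ms avg, {min(samples_ms):.1f}-{max(samples_ms):.1f} ms range "
            f"[{sw.start_event} -> {sw.stop_event}, {sw.screen_position}, "
            f"{sw.sync_mode}, ref={sw.reference}]")

idms_like    = LagStopwatch("signal_first_pixel", "GtG_50%", "center", "VSYNC ON", "cable_signal")
esports_like = LagStopwatch("Present()", "first_photons", "per_pixel", "VSYNC OFF", "real_life_infinite_Hz")
print(summarize([15.2, 15.4, 15.1], idms_like))
print(summarize([4.1, 9.8, 16.5], esports_like))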

Even competitors to Blur Busters products have adopted several Blur Busters ideas / trailblazes (with our full blessing). So the Blur Busters initiative of a unified-but-flexible latency measurement standard will quickly be copied by many competitors -- and/or created as a selectable sub-profile within the capability of the Blur Busters display lag measurement standard (profiles, much like how you select "Rec.2020", "sRGB", or "Adobe" for colorimetry). Thanks to those known colorimetry standards, colour numbers are formula-convertible to each other (within limited circumstances), but today's lag measurements are like thousands of opaque numbers that cannot be compared across multiple different websites -- open source devices and commercial devices alike. A proper lag standard and lag-variables disclosure requirement also puts pressure on commercial devices to comply with the standards, and forces full disclosure of latency measurement methods from commercial devices and commercial reviewers too. Part of the intent of a new Blur Busters display lag standard initiative is disclosure honesty -- both by tester manufacturers and display reviewers.

To some brains, it feels like Greek/Chinese/calculus. To me, it's turned into more of a simplified geometry problem. The industry needs it badly, and I'm happy to help lead the way.

If there are researchers and university students here, I welcome participating in a collaboration for a more universal latency standard that allows retargeting the latency reference -- mark[at]blurbusters.com ....

The IDMS work is fascinating and extremely important. We just need to be cognizant of the limitations of the latency-measurement portion of the IDMS standard, at least as an acknowledgement of known error margins.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: Measurement Device for Display Lag

Post by flood » 19 May 2020, 13:44

AndreasSchmid wrote:
17 May 2020, 13:09
As far as I can tell, this simplifies the problem a bit too much:
this is how you know i'm a physicist :D
1. A "perfect display" would be a display with zero lag, wouldn't it? This obviously does not exist, so if we would do a comparison, we would have to compare to a "non-perfect" display. This way, we do not get absolute values for our results.
i introduce the concept for the sake of a theoretical, not necessarily functional, definition of latency
Additionally, latency is not the same on each update but rather a distribution of times. If both the reference display and the display under test have varying latencies, even measurement series would be confounded by this variance.
yes and no. as far as evaluating displays is concerned, my personal interest is in evaluating the amount of unnecessary latency it has (i.e. in signal processing/decoding, and in gtg times). and these aspects almost certainly have negligible variation in latency.

think about it this way: if there is a 60Hz signal in the cable, with minimal jitter, and your display is outputting at 60Hz, with minimal jitter, you can be pretty sure that the latency (per my definition of latency) is nearly constant.

***this can get complicated when you have, for instance, backlight strobing. then the latency depends on the area of the screen you're considering. but if you focus on one particular region (strictly speaking, pixel), there is still no variance in the latency.
2. I agree that you could theoretically measure display lag with an oscilloscope and a photo sensor, but HDMI has a bandwidth of around 5 (HDMI 1.0) to 48 GBit/s (HDMI 2.1). An off-the-shelf oscilloscope (while already quite expensive) misses such sampling rates by orders of magnitude, therefore you would need a really high end device with this approach. A good while ago, the german display technology website prad.de actually built such a setup to validate established testing methods for display lag (can be found here).
i don't know much about the hdmi protocol, but perhaps if you look at the signal's trace on say a 100MHz oscilloscope, you won't be able to decode the signal, but you will be able to determine the timing of when a frame starts, to a precision limited by your oscilloscope's bandwidth.
3. Apart from not being an actual perfect display, a CRT (like each other monitor) has to get its signal from a video source. How can we be sure if the VGA signal for the CRT and the HDMI signal for the display under test are really sent simultaneously? Graphics cards are typically proprietary products and have to be viewed as black boxes - we can not tell exactly what happens under the hood and therefore the measurements are not reliable.
i recently ordered one of these things: https://www.amazon.co.uk/gp/product/B01NAWRQF1/
not sure when it will arrive though as i'm in the US

i have some ideas for how to calibrate the difference in latency between its various outputs
Because of those limitations we used an open source device (Raspberry Pi) so we can accurately trigger the measuring timer when the signal is sent. It also makes our tester affordable and easy to replicate which could lead to comparable results across the community.
yup, using a rasp pi certainly has advantages.
does it support all the output modes of high-end monitors though? (e.g. 240Hz)
