Blur Busters School Time
I cannot confirm whether the specific numbers are accurate, but I can confirm that some new digital displays are now legitimately measuring <1ms lag numbers (correct + realistic).
My biggest problem with reviewers (Notice: I'm not afraid to criticize reviewers on a 50%:50% good:bad critique ratio) is that they DO NOT ALWAYS DISCLOSE LAG TESTING METHODS. M'kay? That's my gigantic beef with reviewers, as much as I love helping them. I can be an ally, but I can also criticize/scold, for sure!
But it's no conspiracy (so the fake topic title was edited).
Just various incompatible (but generally legitimate, unless there are errors) lag testing methods.
e.g. VSYNC ON lag testers, versus VSYNC OFF lag testers
e.g. Lag stopwatching at GtG 2% versus 10% vs 50% vs 90% vs 99% vs 100%
Now, I can confirm that SOME displays will correctly show low input lag numbers at the very top edge (scanout position zero) with a sensor-at-top. CRTs will generally measure 0ms there, and some of the world's fastest displays manage to get pretty close to that.
Another method is an ultra-high-framerate VSYNC OFF system (e.g. FSE output at >3000fps), which will generally produce lag numbers on a CRT tube that average to half a frameslice time. So 3000fps output to a CRT at zero overhead will have (0.5/3000)sec latency (middle-of-frameslice latency). GTX and RTX cards can output blank rectangles at roughly 10,000fps, so lag testing <0.3ms is definitely possible.
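The half-a-frameslice math above is simple enough to sketch in a few lines of Python (my own illustrative calculation, not any tester's actual code):

```python
# Average VSYNC OFF scanout latency for an ultra-high-framerate lag tester:
# with randomized tearline positions, mean latency works out to half a frameslice.

def avg_frameslice_latency_ms(fps: float) -> float:
    """Mean latency of a randomized-tearing VSYNC OFF stream, in milliseconds."""
    frameslice_sec = 1.0 / fps              # duration of one frameslice
    return (frameslice_sec / 2.0) * 1000.0  # average latency = half a frameslice

print(avg_frameslice_latency_ms(3000))   # ~0.167 ms at 3000fps
print(avg_frameslice_latency_ms(10000))  # 0.05 ms at ~10,000fps blank rectangles
```

That's why spraying ~10,000fps of blank rectangles gets the scanout-latency contribution down into the 0.05ms ballpark, well under the 0.3ms figure mentioned above.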
Latency measurement methods and latency mathematics are tricky.
The Major Latency Stopwatching Variables
This is a dumb simplification for the obvious dunces around here -- winky winky -- you know who you are.
The smart ones will figure this out, but I have to properly science this out like a professional.
So here goes...
Sensor Position
-
Reviewers don't always disclose sensor position. A sensor at the top edge will always produce the lowest lag numbers, due to scanout latency. Please see the high speed videos of scanout, www.blurbusters.com/scanout ... A perfect zero-latency CRT display will have literally 0ms at the top edge (if you can get the sensor photodiode onto Scan Line #1, and the sensor doesn't have processing delay of its own).
-
Reviewers don't always disclose stopwatch start/stop. Stopwatch start is usually easier to determine than stopwatch stop, because stopwatch stop is based on pixel response speed and the threshold chosen.
...Sometimes it's an absolute nits threshold ("stop lag stopwatch when pixel response causes the pixel to finally hit XX nits")
...Sometimes it's a range to give allowance for GtG pixel response (TFTCentral)
...Sometimes it's a GtG percentage threshold. Sometimes it's GtG2% (RTINGS). Sometimes it's GtG50%. Sometimes it's GtG100%. Remember, your eyes see GtG50% as a grey color in the journey from black to white, so YOU WILL REACT before GtG100%. Waiting for GtG100% is just pointless, given most of the lumens have already begun hitting your eyes well before then. You just have worse ghosting (less refresh rate compliance, etc). The engineer holy war (Apple-vs-Android style, Mac-vs-PC style) on where to stop lag stopwatching (GtG % threshold) is still pretty contentious behind the scenes, even today. VESA uses a GtG 10%->90% threshold, which I frequently complain about; I've historically been a big fan of GtG 10% as the standard threshold to sync up with this, but I'm also a fan of GtG 50% (gamma-corrected). I also like simply publishing ranges. I just want to improve reviewer disclosure.
-
Sometimes it's "first anything" reactions - high speed video camera testing, instead of single-pixel testing. Since not all pixels refresh at the same time, this is like a race to see "which pixel refreshes first?". This is useful when you're doing button-to-photons tests, like seeing a muzzle flash or a mouseturn reaction, because YOU as an ESPORTS player will react to the first major stimulus (e.g. enemy movement, etc). We did this for GSYNC-101, as we were the world's first people to measure latency of GSYNC in January 2014. However, it can be highly problematic when comparing displays (and not a 1:1 comparison to other sync technologies, especially if the stimulus you need is near the top or bottom on a different display, or you change sync technology).
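To show how much the chosen GtG stop threshold moves the stopwatch, here's a minimal sketch (hypothetical sample data, simple linear interpolation, not any reviewer's actual tool):

```python
# Illustrative: the chosen GtG% threshold changes where the lag stopwatch stops.
# Given sampled luminance during a black->white transition (normalized 0..100),
# find the time the curve first crosses a given GtG percentage.

def gtg_crossing_ms(times_ms, levels, gtg_percent):
    """First time the normalized GtG curve crosses gtg_percent (linear interp)."""
    if levels[0] >= gtg_percent:
        return times_ms[0]
    for i in range(1, len(times_ms)):
        if levels[i] >= gtg_percent:
            t0, t1 = times_ms[i - 1], times_ms[i]
            l0, l1 = levels[i - 1], levels[i]
            frac = (gtg_percent - l0) / (l1 - l0)
            return t0 + frac * (t1 - t0)
    return None  # threshold never reached

# Hypothetical slow-ish pixel response, sampled every 1ms:
t = [0, 1, 2, 3, 4, 5, 6]
l = [0, 30, 55, 75, 88, 96, 100]
for pct in (2, 10, 50, 90, 100):
    print(f"GtG {pct}%: stopwatch stops at {gtg_crossing_ms(t, l, pct):.2f} ms")
```

On this hypothetical curve, a GtG2% stop lands well under 0.1ms while a GtG100% stop lands at 6ms: same pixel, same transition, wildly different published "lag" depending on an undisclosed threshold.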
VSYNC ON Lag Testers
- Example: Leo Bodnar Lag Tester
- For a VSYNC ON lag tester on most displays with signal=panel synchronized scanout (not all displays scan out at the same speed on cable and panel), you have a TOP<CENTER<BOTTOM effect (strobing off) and TOP=CENTER=BOTTOM (strobing on without crosstalk), although that will vary quite badly if you have worse crosstalk at the top and/or bottom edge (since the lag test will trigger on the first duplicate image, even if the first duplicate is fainter than the next duplicate).
- These lag numbers are good for console players, VR players, or people who prefer VSYNC ON
- VSYNC ON lag tester (e.g. Leo Bodnar) = stopwatch starts at VBI, preferably end of VBI (last porch scanline) for easier comparison with VRR, but the Microsoft Present() API pageflips at the beginning of VBI (next scanline offscreen after the final visible scanline).
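The TOP<CENTER<BOTTOM effect on a synchronized-scanout display is just scanout math. A minimal sketch (my own simplified model; it ignores VBI porches and transceiver delay):

```python
# Sketch of scanout latency vs. sensor position on a display whose panel
# scanout is synchronized to the cable scanout (simplified: no porches).

def scanout_lag_ms(sensor_line: int, total_lines: int, refresh_hz: float) -> float:
    """Extra latency before the raster physically reaches the sensor's scanline."""
    refresh_period_ms = 1000.0 / refresh_hz
    return refresh_period_ms * (sensor_line / total_lines)

# 1080p @ 60Hz: the TOP < CENTER < BOTTOM effect (strobing off)
for label, line in (("top", 0), ("center", 540), ("bottom", 1079)):
    print(f"{label}: +{scanout_lag_ms(line, 1080, 60):.2f} ms of scanout latency")
```

That's roughly +8.3ms at center and nearly a full ~16.7ms at the bottom edge for 60Hz, which is why undisclosed sensor position can swing a published number by most of a refresh cycle.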
VSYNC OFF Lag Testers
- Example: SMTT or RTINGS lag tester device
- VSYNC OFF stopwatch starts at time of Present()
- Latency is always lowest right below a tearline, and highest right above a tearline. Average latency of randomized tearing is half a frameslice, e.g. 1000fps = average 0.5ms scanout latency. You can have multiple tearlines per refresh cycle, as seen in Are There Advantages To Frame Rates Above Refresh Rates?
- For a VSYNC OFF lag tester on most displays with signal=panel synchronized scanout, you have a TOP=CENTER=BOTTOM effect (strobing off) and a TOP>CENTER>BOTTOM effect (strobing on, assuming no crosstalk).
- On a CRT tube, VSYNC OFF lag testers will show half a frametime of lag. So if your VSYNC OFF lag tester is spraying 1000fps, you'll get a 0.5ms number.
- This will also happen on ultrafast digital panels with subrefresh latencies, as long as the pixel can at least initiate its GtG pixel momentum at least to the stopwatch stop threshold. I've seen <1ms before, as long as you don't have much port transceiver delay (HDMI/DP transceiver latency can be <0.1ms if optimized properly).
- These lag numbers are good for esports players and anybody who uses VSYNC OFF
- You still have the same problem of reviewers not disclosing latency-stopwatch-stop (GtG 1% is always lower lag than GtG 99%).
Lowering Lag Numbers
This is metaphorically the closest to "conspiracy" I will get, but it's still 100% genuine science with 100% genuine numbers, completely explainable by display scanout physics:
The reason some do this is to filter out computer lags / GPU lags / scanout lags / sync technology lags, away from display-only lag. I understand the rationale, but I'd rather reviewers publish multiple lag numbers for multiple refresh rates / settings / variables -- And disclose each.
- Only lag testing the highest Hz. Some displays get really laggy at low Hz (worse than a lower-Hz display), due to scan conversion (see information)
- Using VSYNC OFF lag testing method at highest frame rate your tester device/computer can output, because most VSYNC OFF lag testers do not subtract sync technology latency, and even VSYNC OFF has a scanout latency between two consecutive tearlines.
- Completely subtracting sync technology lag (e.g. using scanline #1 display top edge of VSYNC ON, or using first scanline right below a tearline at VSYNC OFF), or simply mathematically subtracting it (e.g. half a frametime latency is kosher, as long as disclosed that it is an ultra-high-framerate VSYNC OFF lag tester, and you're excluding sync-technology / scanout lag in your lag numbers).
- Partially (accidentally or intentionally) excluding HDMI/DP port transceiver latency (e.g. sensor only measuring the display-end video input transceiver, not the source-side video output transceiver)
Lag stopwatch "start late" methods
...Timestamping at end of VBI for a VSYNC ON lag tester (*Note: Windows unblocks Present() at start-of-VBI, so a start-of-VBI timestamp would also include the lag of the VBI before the next refresh cycle -- additional sync-technology-derived lag)
...Timestamp only right after Present()+Flush() for a VSYNC OFF lag tester connected to a PC application
...Intentionally beam-racing the VSYNC OFF tearline position, ala Tearline Jedi, to intentionally position tearlines right above the photodiode sensor: basically a Present()+Flush() right before the raster beam physically reaches the scanout position of the sensor photodiode.
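The beam-racing timing can be sketched as pure math (a hedged sketch of the idea only; `get_current_scanline` would be a stand-in for a real platform raster query such as D3DKMTGetScanLine on Windows, and real beam racing needs busy-wait precision, not sleep):

```python
# Sketch: compute how long to wait so a Present()+Flush() lands the tearline
# a few scanlines above the photodiode sensor. Pure timing math only.

def ms_until_present(current_line: int, sensor_line: int,
                     total_lines: int, refresh_hz: float,
                     margin_lines: int = 4) -> float:
    """Milliseconds to wait so the flip occurs margin_lines above the sensor."""
    line_time_ms = 1000.0 / refresh_hz / total_lines   # scanout time per line
    target = (sensor_line - margin_lines) % total_lines
    lines_to_go = (target - current_line) % total_lines  # wraps past VBI if needed
    return lines_to_go * line_time_ms

# e.g. raster currently at line 100, sensor at line 540, 1080p @ 60Hz:
print(f"wait {ms_until_present(100, 540, 1080, 60):.3f} ms, then Present()+Flush()")
```

With the tearline racing just ahead of the photodiode, the sensor sees the new frameslice almost instantly, which is exactly why this counts as a "start late" / lag-minimizing methodology rather than a typical use case.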
Lag stopwatch "stop early" methods
...Using low GtG% threshold (applies to both VSYNC ON and VSYNC OFF lag testers)
...Intentionally positioning sensor on first scanline of new frame (top of screen for VSYNC ON, top of frameslice for VSYNC OFF, right underneath a tearline)
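Putting the "start late" and "stop early" levers together in one toy model (my own dumb simplification, not a real tester) shows how the same display can yield very different headline numbers:

```python
# Toy model: the same panel, measured with different stopwatch choices.

def measured_lag_ms(display_lag_ms: float, scanout_to_sensor_ms: float,
                    gtg_delay_ms: float, start_offset_ms: float) -> float:
    """start_offset_ms = lag excluded by a 'start late' method (e.g. end-of-VBI
    timestamping); gtg_delay_ms = time to reach the chosen GtG stop threshold."""
    return display_lag_ms + scanout_to_sensor_ms + gtg_delay_ms - start_offset_ms

# Hypothetical 2ms-processing panel at 60Hz:
honest = measured_lag_ms(2.0, 8.3, 4.0, 0.0)      # center sensor, GtG90 stop
flattering = measured_lag_ms(2.0, 0.0, 0.5, 1.0)  # top sensor, GtG2, start late
print(honest, flattering)  # same panel, wildly different headline numbers
```

Both numbers are "real" in the sense that the math is genuine; the problem is publishing either one without disclosing which levers were pulled.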
Frankly, simply put, I complain about reviewers not disclosing their lag stopwatch start and lag stopwatch stop. People often want to measure the lag of only the display. Since you still have to use an adaptor to connect to a CRT, you still have HDMI/DP port transceiver latency if you connect a digital output to a CRT. So sometimes reviewers want to filter out the port-transceiver latency (HDMI/DP codecs). Other times reviewers don't bother and just measure Present()-to-photons, so you're measuring transceiver1 (GPU output) + transceiver2 (display input) + display latency. It's still a Holy War raging behind the scenes. But it is what it is; my beef is PUBLIC DISCLOSURE. Personally, I just want MOAR DISCLOSUREZ.
Major Errors
I've seen the Leo Bodnar sensor (and other testers) malfunction, giving abnormally low or high numbers, due to refresh-cycle aftereffects or pulsing behaviors that produce red-herring effects. For example, DLP projectors have a color wheel and each DLP pixel flickers at up to 2880 Hz (1-bit modulation), and some testers go wonky with that. It's possible that sensors screw up on some displays.
Very rarely, refreshing shenanigans in displays (like the faint pre-strobe during Eizo Turbo 240 on the FG2421 years ago, a 120Hz panel that is brief-long double-strobed) will trip up the testers. The pre-strobe was sub-1% crosstalk, but artificially (possibly accidentally) lowered lag numbers by up to a full refresh cycle, despite most of the photons coming from the second strobe.
Conclusion
- Lag testing is a giant rabbit hole, sometimes full of politics. I prefer to science this out.
-
TL;DR: Testing-process disclosure (lag stopwatch start + lag stopwatch stop) is the correct tree in the correct forest to complain about / criticize.
- 0.3ms numbers are actually realistic, if you optimize the lag testing methodology maximally as above. Outputting >2000fps VSYNC OFF to a CRT will generally achieve this for any scanline. Same for a near-zero-latency scaler/TCON (with only a 2-6 scanline (pixel rows) rolling-window picture-processing buffer). It can also be achieved by measuring input lag at the top edge only, on a VSYNC ON lag tester that does end-of-VBI latency stopwatch-start and has zero queue (waitable swapchain techniques or force-flush techniques).
NOTE: Even flush latency is still usually at least ~0.1ms-0.2ms on most GPUs though, if you're using a hyperpipelined GPU to render a latency test
I find the lack of full disclosure about latency stopwatch start/stop mighty annoying, and I have long wanted to start an initiative about latency stopwatching disclosure, because the stopwatch start depends on whether you're using VSYNC ON or VSYNC OFF.
Educate Responsibly
Please consider this rabbit hole when reposting in other forums. I do not like it when people spread misinformation on other forums without understanding science/physics of this.
Lag Rabbit Hole Permalink:
https://forums.blurbusters.com/viewtopic.php?f=10&t=12875&p=100495#p100495
Share the "sciencing it out" real information instead of "manufactured tinfoil hat" fake information.
Thank you for being a responsible member of the Internet. And you're welcome.