RLCScontender wrote: ↑25 Jun 2020, 03:25
Acer Predator XB273X review.
[snip]
Now...
Math classroom teacher mode...
[Attached image: incorrect-rounding.png -- incorrect rounding procedure]
Now, 1/60 second = 16.6666666666666666666666666666666667 ms
In actuality, the 6 repeats infinitely; the trailing 7 is just a calculator rounding off the last displayed digit. Now, 60 Hz is often not exactly 60 Hz on all monitors and GPUs; fractionals such as 60.001 or 59.997 often show up at
www.testufo.com/refreshrate, so your 16.666666666666[...] will be even more imperfect. Nonetheless, it will usually round to 16.67 at 2 decimal digits -- NOT 16.6.
So, if you're trying to sync up significant digits, 2 digits, you're supposed to use 16.67, not truncate. A math teacher would knock a mark off for that. Also, while the other test data is very valid, the 60-144-240 is a red flag of potential mistests, so I'm going to have to ask for more information.
So more accurately, if you do keep using that many significant digits, it should have been:
16.67ms + 0.03ms = 16.70ms
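To make the rounding point concrete, here's a quick Python sketch (my own illustration, nothing to do with the reviewer's test rig):

[code]
frame_ms = 1000 / 60                 # 16.666666666666668 ms; the 6 repeats forever

print(f"{frame_ms:.2f}")             # '16.67' -- correct rounding to 2 decimals
print(f"{int(frame_ms * 10) / 10}")  # '16.6'  -- truncation, the mistake being flagged

# Adding the measured 0.03 ms while keeping 2 decimal digits:
print(f"{frame_ms + 0.03:.2f}")      # '16.70', matching the corrected sum above
[/code]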
Your data additionally suggests this panel is scanrate multisync at 144Hz and 240Hz, but not at 60Hz. I've never seen that before, so please re-provide the output data that shows this. Most 240Hz panels have tended to show scan-converting behavior at both 144Hz and 60Hz, so this requires another round of testing passes.
Also, most LCD panels are unable to start GtG that quickly at 0.03ms, unless you're triggering at literally GtG1% (or a very early curve-start value). So I'm assuming you're triggering well before human-visible photons appear. I'm not a fan of arbitrary latency trigger values that can vary a lot -- I have seen many "0.0X" and "0.X" numbers actually be "2.x" vs "1.x" because of different curve shapes for different colors -- so I prefer human-visible-photons GtG% triggers such as GtG10% or GtG50%, for a more real-world comparison of display lag to human reaction time. One can continue to use early GtG triggers for stopping the lag stopwatch. But even different panels rated at GtG=3ms (or even different colors on the same panel) can end up with different lags to a midpoint threshold such as GtG10%, because of different curve shapes for different colors/temperatures/etc. Also, the GtG curve sometimes kicks off differently at lower Hz versus higher Hz because of pixel-row-buffering delays of a slower-transmitting signal versus a faster-transmitting signal, so seeing duplicate 0.03's (rather than 0.02 vs 0.03 vs 0.05, etc.) creates potentially suspicious data.
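To illustrate why trigger thresholds matter, here's a small Python sketch with two hypothetical transition curves (my invented shapes, not this panel's measurements) that finish in a similar time window yet cross the early vs midpoint thresholds very differently:

[code]
import numpy as np

def gtg_trigger_time(t_ms, luminance, threshold):
    """First time the normalized 0->1 transition crosses `threshold`
    (0.10 for GtG10%, 0.50 for GtG50%)."""
    crossed = np.nonzero(luminance >= threshold)[0]
    return t_ms[crossed[0]] if crossed.size else None

t = np.linspace(0, 5, 501)                        # 5 ms window, 10 us steps
fast_kick = 1 - np.exp(-t / 0.8)                  # starts rising immediately
slow_start = 1 / (1 + np.exp(-(t - 2.0) / 0.4))   # sluggish start, steep middle

for name, curve in [("fast-kick", fast_kick), ("slow-start", slow_start)]:
    print(f"{name}: GtG10% at {gtg_trigger_time(t, curve, 0.10):.2f} ms, "
          f"GtG50% at {gtg_trigger_time(t, curve, 0.50):.2f} ms")
[/code]

The fast-kick curve crosses GtG10% almost immediately while the slow-start curve takes over a millisecond to get there -- exactly how "0.0X" stopwatch triggers can silently diverge from what a human actually sees.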
0.03 also veers into DisplayPort micropacket latency granularity territory (and sometimes even visible latency jitter and GPU-specific latency behaviors, especially if multiplexed with multiple display streams and audio). Micropacket latency sometimes varies at different refresh rates due to different clock speeds, though newer DisplayPort has switched to a fixed-bandwidth link (max data rate) with micropackets spacing out the pixel-row delivery during display scanout.
Now, even if this data is already 'perfect', I also tend to dismiss synthetic lag numbers (measured before human-visible photons are generated), much like I dismiss synthetic hard disk benchmarks. It's ultra-low-lag, but if we're cherrypicking that second decimal digit (and I caught some errors on this), then it has to be done correctly -- I'm going to call this out...
Please watch the lag data like a hawk. I'll call out the lag disclosure problems a bit more; I'm afraid I'm going to have to say your lag results are partially corrupted/invalid because of some missteps and error margins.
So to help amend this, I think you should write a thesis/essay-quality paper (1 page) about your lag testing methodology. No all-caps, no color codes, no social-media memeing, to help this thread understand your lag tests better. Your previous post about latency tests does exist, but I'm asking you to write a brand new post now, properly re-describing it. Oscilloscope is good, but there are always weak links, and reusing 0.03 the way you did does not appear to be correct test procedure. Re-describe your lag testing methodology more fully, including what you do for 60Hz, 144Hz and 240Hz, and repost corrected results. I need to see a higher quality testing disclosure.
Given scanout latency -- how consistent is your photodiode location? Mis-positioning the photodiode by even 1 millimeter changes lag numbers significantly at the 2nd decimal digit, so you must be placing the photodiode quite flush to the top edge in a very consistent location to get a consistent glass-floor 0.03. You'll see lag increase for every pixel you move the photodiode downwards, due to the scanout latency explained at
www.blurbusters.com/scanout
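For a rough sense of scale on this (my assumed vertical-total and pixel-pitch numbers, not measured from the XB273X):

[code]
# Each pixel row arrives later than the one above it by
# (refresh period) / (vertical total = active rows + blanking).
refresh_hz = 240
vertical_total = 1125              # assumed: 1080 active + 45 blanking rows

row_delay_ms = 1000 / refresh_hz / vertical_total
print(f"per-row scanout delay: {row_delay_ms * 1000:.2f} us")   # ~3.70 us

# A 27in 1080p panel is roughly 3 pixel rows per millimeter (assumed),
# so a 1 mm photodiode mis-position shifts the reading by about:
print(f"~1 mm mis-position: +{3 * row_delay_ms:.3f} ms")        # ~0.011 ms
[/code]

That ~0.01 ms is already as big as the gap between a 0.02 and a 0.03 reading, which is why photodiode placement has to be pixel-consistent at this measurement scale.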
Some details:
-- For scan-converting TCONs doing 60Hz on a 240Hz panel, roughly 3/4ths of a refresh cycle is buffered (~12-13ms) -- the (240Hz minus 60Hz)/240Hz fraction of the 16.7ms refresh -- to allow the diverged scanouts (cable, panel) to 'meet' at the bottom edge. This creates a strange latency-linearity behaviour along the screen's vertical dimension, because of the different scanout differentials for different pixel rows. So you have a latency range of [~12ms ... ~16.7ms] for TOP vs CENTER vs BOTTOM on a scanrate-converting scaler/TCON (a panel that buffers a slow-scanning 60Hz signal and does an accelerated scanout). Here's that geometry as a quick sketch (assuming idealized linear scanouts and zero extra processing delay, which real scalers won't exactly match), as shown below.
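[code]
cable_hz, panel_hz = 60, 240
cable_scan_ms = 1000 / cable_hz            # 16.67 ms slow cable scanout
panel_scan_ms = 1000 / panel_hz            # 4.17 ms fast panel scanout

# Panel sweep starts late enough that both scanouts 'meet' at the bottom:
buffer_ms = cable_scan_ms - panel_scan_ms
print(f"buffered: {buffer_ms:.2f} ms = "
      f"{buffer_ms / cable_scan_ms:.2f} of a refresh")   # 12.50 ms = 0.75

# Row lag relative to the start of the refresh cycle:
for frac, label in [(0.0, "TOP"), (0.5, "CENTER"), (1.0, "BOTTOM")]:
    print(f"{label}: {buffer_ms + frac * panel_scan_ms:.1f} ms")
# TOP ~12.5 ms, CENTER ~14.6 ms, BOTTOM ~16.7 ms
[/code]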
Post about why 60Hz fixed-Hz mode is often laggy on 240Hz monitors. (This usually also occurs with all refresh rates below 240Hz, but there are some weird exceptions and weird scan velocities at different Hz on different panels.)