AW3423DW officially available for everyone at dell website

Everything about displays and monitors. 120Hz, 144Hz, 240Hz, 4K, 1440p, input lag, display shopping, monitor purchase decisions, compare, versus, debate, and more. Questions? Just ask!
namcost
Posts: 21
Joined: 02 Dec 2021, 19:18

Re: AW3423DW officially available for everyone at dell website

Post by namcost » 16 May 2022, 15:52

greenenemy wrote:
16 May 2022, 05:55
namcost wrote:
14 May 2022, 12:02

Firstly, the HUB review for input lag is bullshit. They make those numbers up. Ask me how I know. Because they claim a processing lag of 4.7ms, processing plus refresh rate lag of 7.6ms, and processing lag plus refresh lag plus response time of 9.1ms.... TWO of these numbers I know for a fact. At 175hz, refresh rate lag would be 5.71ms because 1000/175hz = 5.71ms. According to HUB and their absolute LIE of information, they are claiming refresh rate lag of 2.9ms (7.6ms minus 4.7ms = 2.9ms). That 2.9ms would be akin to 344.82hz.... we both know the Alienware is not capable of that, as its hard limit is 5.71ms aka 175hz. So they lie there. I have also done full 0-100 pixel response testing and posted my graph in this very thread I started. I get sub-1ms response times. Now HUB did average out to 1.5ms earlier in the video, and sure enough 9.1ms minus 7.6ms = 1.5ms. So they at least were honest about that. But their refresh rate lag does not make ANY logical sense. In fact I went through a few monitors in that list based on refresh rate and ALL OF THEM are rated faster, refresh-rate-lag wise, than they are actually capable of. The LG C1 was rated 0.5ms input lag (lie) and 4.7ms input lag + refresh rate. That's another flat out lie, as refresh rate lag for a 120hz display is 8.33ms. Which is basically double the rating HUB gives.... and we all know the LG televisions have AT LEAST 6ms input lag when gaming. HUB does nothing but lie to people with their made up bullshit results. The only input lag results I will believe are usually TFTcentral, sometimes RTINGs depending on how different they are vs TFTcentral reviews. They are more trustworthy than Hardware Unboxed.
...
HUB is right with their refresh rate lag. When a frame is being displayed on, for example, a 120hz screen, you get 0ms at the start of the scanout and 8.33ms at the end, so obviously the average lag is 4.16ms.
Except now you are just making excuses for HUB by ignoring literally everything else. You want to make that point, fine. But what about input lag? 0.5ms for the LG C1, when EVERY OTHER REVIEWER has stated 6ms at 120hz and 10ms at 60hz when in game mode. So how does the one and only review end up with 0.5ms? Not to mention that isn't physically possible in televisions due to the processing via the chipset. That's why ALL TELEVISIONS have rather high input lag, whether it's 80ms+ typically or 6ms+ in game mode. There isn't a single television on the market that is capable of 0.5ms input lag. NOT ONE. But HUB with their LG fanboyism claim otherwise. So in reality, that should be 6ms input lag + 4.16ms refresh rate lag for a total of 10.16ms on that graphed plot point.... which they did not show.... I DARE YOU to actually research their results and counter them with multiple other sources. HUB is consistently wrong.

I can tell you right now. HUB was contracted by LG to start lying about LG products due to the "fiasco" that happened. Otherwise they would lose out on free products to test. I literally guarantee it. Because in the world of product reviews, everyone knows you get products for free (to keep) for favorable reviews. Look at reviews where HUB had to buy the display on their own. They have literally no patience for claiming a bad display to be good, whereas with LG paid reviews, they will say a bad display is actually "worth buying."

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: AW3423DW officially available for everyone at dell website

Post by Chief Blur Buster » 18 May 2022, 18:25

namcost, there are some big problems with test methodology, but your post is a bit over-the-top disrespectful...

Reviewers are not perfect.

There are problems with test methodology by all reviewers, but you have to understand it's possible to have a real-world 0.5ms lag AND a real-world 6ms input lag simultaneously.

It's simply different stopwatching methodologies, since not all pixels refresh at the same time. The pixels at the top edge can have 0ms lag, and the pixels at the bottom edge can have 16ms lag.

A panel can have 0.5ms lag during VSYNC OFF but 6ms lag during VSYNC ON.

It's a matter of how you start the lag stopwatch:
- Do they use a full-screen camera (first-any-pixel lag) or a single-point measurement (top, center, bottom)?
- Do they use only Present(), or do they use Present()+Flush() as the timestamp for start?
- Do they use RasterStatus.InVBlank as the timestamp for start?
- Do they use a VSYNC detector on the video cable as the timestamp for start, or do they use the API on the Windows side? (video port codec delays, etc)
- What sync technology do they use? (see below)
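
To make the stopwatch-start choices above concrete, here is a minimal Direct3D 11 sketch (my illustration, not any specific reviewer's tester) of where the start timestamp lands relative to Present() and Present()+Flush(). It assumes a swap chain and immediate context created elsewhere by the test app; only the timestamp placement matters here.

Code: Select all

// Minimal Direct3D 11 sketch: where the lag-stopwatch "start" timestamp lands
// relative to Present() and Present()+Flush(). Illustration only -- not any
// reviewer's actual tester. Swap chain and immediate context are assumed to be
// created elsewhere by the test app.
#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>

// Returns the QueryPerformanceCounter value to use as "stopwatch start".
LONGLONG PresentAndTimestamp(IDXGISwapChain* swapChain,
                             ID3D11DeviceContext* context,
                             bool flushBeforeTimestamp)
{
    // SyncInterval = 0 is VSYNC OFF: the new frame tears into the current
    // scanout as soon as the GPU finishes rendering it.
    swapChain->Present(0, 0);

    if (flushBeforeTimestamp) {
        // Flush() forces the pipelined GPU to finish this frame now, which
        // removes a variable amount of GPU queuing lag from the measurement
        // (the "Present()+Flush()" methodology listed above).
        context->Flush();
    }

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);   // stopwatch start, in QPC ticks
    return now.QuadPart;
}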

And how you stop the lag stopwatch:
- Do they detect the pixels via a photodiode? Other method?
- Which GtG% do they trigger at? Pixels don't change color instantly, they "fade from one color to the other".
- Some reviewers stopwatch-stop at GtG2%, others at GtG10%, and yet others at GtG50%, yet others at GtG100%
- What temperature were the lag measurements done at? (temperature shifts the GtG curve, as colder LCDs have slower GtG)
- What color is the lag measured on? GtG's of different colors are different, so you get different lag numbers.
- Remember: Humans react to pixels before they're GtG 100%. If you're trying to publish numbers matching human reaction time, you cannot lag-test to GtG100%!
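
To illustrate how much the GtG% stopwatch-stop threshold alone moves the number, here is a minimal sketch (made-up photodiode samples, not real data) that extracts the GtG2%/10%/50%/100% crossing times from the very same trace:

Code: Select all

// Minimal sketch: how the chosen GtG% stopwatch-stop threshold changes the lag
// number extracted from the very same photodiode trace. Samples are made up.
#include <cstdio>
#include <vector>

// First sample index where the trace crosses the given fraction of the full
// black->white swing (0.02 = GtG2%, 0.5 = GtG50%, etc).
int gtgCrossing(const std::vector<double>& trace, double fraction)
{
    double lo = trace.front(), hi = trace.back();
    double threshold = lo + fraction * (hi - lo);
    for (size_t i = 0; i < trace.size(); ++i)
        if (trace[i] >= threshold) return (int)i;
    return -1;
}

int main()
{
    // Hypothetical photodiode samples at 0.1 ms per sample (black->white GtG).
    std::vector<double> trace = {0.00, 0.01, 0.03, 0.10, 0.25, 0.45,
                                 0.65, 0.80, 0.90, 0.96, 0.99, 1.00};
    const double msPerSample = 0.1;
    for (double f : {0.02, 0.10, 0.50, 1.00})
        printf("GtG%.0f%% stopwatch-stop at %.1f ms\n",
               f * 100.0, gtgCrossing(trace, f) * msPerSample);
    return 0;
}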

And how the sync/strobe technology is configured
(Important: Do 100-1000 test passes each for TOP, and for CENTER, and for BOTTOM, and average the lag)
- VSYNC ON + nonstrobed means TOP < CENTER < BOTTOM
- VSYNC OFF + nonstrobed means TOP == CENTER == BOTTOM
- VSYNC ON + strobed (0 crosstalk, or crosstalk below GtG% stopwatch threshold) means TOP == CENTER == BOTTOM
- VSYNC OFF + strobed (0 crosstalk, or crosstalk below GtG% stopwatch threshold) means TOP > CENTER > BOTTOM
- strobed (bad crosstalk) sometimes goes weird like TOP < CENTER > BOTTOM or TOP > CENTER < BOTTOM
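
As a worked example of the scanout geometry above (and of the half-refresh average quoted earlier in the thread, e.g. ~4.16ms at 120Hz), a tiny sketch that prints the scanout-position component of lag for top/center/bottom at a few refresh rates:

Code: Select all

// Minimal sketch of the scanout geometry above: with VSYNC ON on a non-strobed
// panel, the scanout-position component of lag grows from ~0 at the top edge
// to a full refresh period at the bottom edge, so the screen average is half a
// refresh period (half of 8.33 ms at 120 Hz). Processing lag and
// wait-for-refresh lag are ignored here.
#include <cstdio>

int main()
{
    for (double hz : {120.0, 175.0, 240.0}) {
        double frametimeMs = 1000.0 / hz;
        printf("%5.0f Hz: top %.2f ms, center %.2f ms, bottom %.2f ms, "
               "screen average %.2f ms\n",
               hz, 0.0, frametimeMs / 2.0, frametimeMs, frametimeMs / 2.0);
    }
    return 0;
}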

Certainly, some reviewers are often influenced by manufacturers to choose a methodology that favors specific conditions (e.g. VSYNC OFF esports lag with early GtG cutoff thresholds). They certainly get free samples from manufacturers, but obviously there are additional considerations at play.

Great napkin exercise conundrum: You know, the honorable GtG100% stopwatch end for lag testing is well-intended but totally misguided, when GtG50% for a black->white transition means a pixel is already emitting half the light of full white -- and that's already easy for eyes to see. So a GtG50% lag test is much more human-reaction-time accurate than GtG100%, but it's hard to scientifically determine the exact threshold, as different humans click at different points -- some will see the GtG much sooner (e.g. by GtG10%) while others a bit later. Split the difference: GtG50% is a more noble lag-test compromise, but it is "influenced" by the need to match human reaction behaviours, rather than isolating display-processing behaviours (GtG2% is more representative of that, because once GtG has already started, we know the display has already "processed" the pixel). So one asks oneself -- is the goal to isolate lag to the display, or to try to match lag numbers to the approximate trigger point of a human reaction time? That's why I'm no fan of GtG100% for latency tests. Now you see the conundrum!

But I am a messenger to inform you that I've seen "0.5ms" and "6ms" from exactly the same display, depending on how the above is tweaked.

As so many of us know, many 240Hz monitors have bad lag at 60Hz, so using a console-optimized 60Hz VSYNC ON lag tester like the Leo Bodnar will produce numbers more relevant to console users than to PC-esports users (who are better served by a VSYNC OFF lag tester like RTINGS uses).

It's fine to criticize reviewer methodology, but I feel that this is beyond the bounds of Blur Busters decorum. Our mantra is: please criticize the chosen methodology rather than venting anger at the reviewer or manufacturer.

I definitely have a legitimate complaint too: Not all reviewer websites publish their stopwatch-start and stopwatch-stop methodology for their lag tests. And indeed, methodologies can be flawed, and sometimes bad thresholds that diverge from human thresholds get chosen, but you have to recognize this reality.

We are considering a campaign, "Blur Busters Reviewer Test Methodology Disclosure Best Practices", to solve this mess. But, whoa, buddy.

I'm also a fan of websites publishing multiple lag numbers.
And lag stopwatching methodology published.
e.g. multiple numbers with corresponding lag-test disclosures.
"Lag of max Hz from Present() to GtG50%, screen center, VSYNC OFF" (more relevant to esports)
"Lag of max Hz from Present() to GtG50%, screen center, VSYNC ON" (more relevant to windows compositors & VSYNC ON gaming)
"Lag of 60Hz from Present() to GtG50%, screen center, VSYNC ON" (more relevant gaming consoles)
"Lag of max Hz from Present()+Flush() to GtG2%, screen top, VSYNC OFF, room temp calibrated 20C±0.5C, 24hour pre-warmup"
(more isolates display processing lag and warmup maximizes GtG speed, like a monitor left on 24/7)
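
As an illustration only (field names are mine, not an existing standard), a sketch of what a machine-readable version of such a disclosure could look like:

Code: Select all

// Illustration only: the disclosure fields above as a struct that a lag tester
// could print next to every published number. Field names are hypothetical,
// not an existing standard.
#include <cstdio>
#include <string>

struct LagTestDisclosure {
    std::string syncMode;        // "VSYNC OFF", "VSYNC ON", "VRR", ...
    std::string stopwatchStart;  // "Present()", "Present()+Flush()", "cable VSYNC"
    double gtgStopPercent;       // 2, 10, 50, 100
    std::string screenPosition;  // "top", "center", "bottom"
    double refreshHz;
    double roomTempC;
    double warmupHours;
};

void printDisclosure(const LagTestDisclosure& d, double lagMs)
{
    printf("%.2f ms lag @ %.0f Hz | %s | start: %s | stop: GtG%.0f%% | %s | "
           "%.1f C room | %.0f h warmup\n",
           lagMs, d.refreshHz, d.syncMode.c_str(), d.stopwatchStart.c_str(),
           d.gtgStopPercent, d.screenPosition.c_str(), d.roomTempC, d.warmupHours);
}

int main()
{
    printDisclosure({"VSYNC OFF", "Present()+Flush()", 2, "top", 175, 20.0, 24}, 0.5);
    return 0;
}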

Because of this -- and especially if you bypass timestamping via a GPU API and instead timestamp externally with a VSYNC detector on the video cable -- I've seen <0.5ms lag and 20ms+ lag on the same monitor. Yes, the spread is pretty huge.

It's not possible to standardize around one number. The lag number on RTINGS is VSYNC OFF + GtG2%, so RTINGS will always be lower than websites that use VSYNC ON + GtG100%. Gigantic difference. Console users and PC-esports users need their own separate lag numbers, for obvious reasons succinctly mic-drop explained in this very post.

The stopwatch-start timestamp is taken immediately after Present() or Present()+Flush(), and never beforehand. GPUs have pipelining that does background rendering in a shingled manner (much like CPU instruction pipelining running multiple instructions concurrently), so Flush() is important for removing GPU latency from lag test numbers by forcing the GPU to finish rendering the current frame and begin outputting the first raster scanline of that specific frame within microseconds of the flush; otherwise GPU lag can vary by multiple milliseconds (especially if GPU utilization is so low that it uses a low-power, slower pipelining mode to save power).

The lack of published latency test methodology is what we should shame -- shame "reviewer test methodology", not reviewer names directly -- because that's a big information deficit of input latency numbers. Blur Busters advocacy on improving future lag testing will be optimized towards improved latency test disclosure. Sites won't unify testing methodology, but we can at least standardize a few important common use cases.

Broken record here. I've seen too many photodiode-tester users start reviewersplaining, and then become humbled once I fully explain how complex things actually are, with all the error margins they forgot about. Even for reviewers that aren't influenced by manufacturers to choose specific lag stopwatching parameters.

Yes, lots of messed up lag-testing methodologies. That being said, a flawed methodology is still outputting numbers that are accurate to the flawed methodology and the methodology's own error margins. And sometimes "flawed methodology" is never unanimously agreed! Console users should always view PC-esports lag numbers (VSYNC OFF lag tester) as flawed for them, and PC-esports users should always view console-optimized lag numbers (VSYNC ON lag tester) as flawed for them. And that's not the only disagreement. Some of us want to know the latency of the display itself, completely isolated from the GPU and GtG. There are so many needs that one number just simply ain't possible.

Also, breaking in a newly-shipped LCD for 1 week of 24/7 operation is important for accurate GtG's too, because pressure spots (like the foam pieces of a monitor box, or being mis-shipped flat causing a bezel-bracket pressure imprint in the middle) take many days to fade before GtG numbers stabilize -- reviewers gotta break in a newly received panel too. I've seen GtG numbers vary by 25%-100% in many squares of the GtG heatmap on exactly the same panel, just because a reviewer forgot to "break in" the panel and also warm it up before testing (e.g. some testers just power it on and begin testing right away -- but many, like RTINGS, will properly break in a monitor first, followed by making sure it's also pre-warmed up if it is already broken in).

I also highly suggest you do not even bother to reply to this post until you retest the darks of your GtG heatmap after putting your monitor temporarily in your basement box freezer for 30 minutes (to simulate a cold LCD in a winter lab), and, after 24 hours of re-warming back to room temperature, re-test after putting the LCD outdoors in direct summer sunlight (or a hot car) for 30 minutes. You will be shocked at the major GtG differences and major lag differences. Totally. And many reviewers don't bother to publish their room temperature -- even 18C vs 22C has major lag differences if you're using a temperature-sensitive GtG trigger. Many pro testers will use a lab thermometer to calibrate their room temperature exactly to 20C. This was less important back in the "33ms" GtG days twenty years ago, but it is now critically important in the "1ms" GtG days.

TL;DR: The numbers are not lies, even if test methodology may be influenced, and sometimes diverge from real-world reaction time. They are just different methodologies, possibly executed in imperfect conditions (e.g. room temperatures, panel lottery effects, GPU pipelining performance).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: AW3423DW officially available for everyone at dell website

Post by Chief Blur Buster » 19 May 2022, 11:59

I forgot to add additional considerations, important in the science of lag testing:

-- Some reviewers test over USB, which has a latency. Good lag testers can keep USB latency down to about 0.1-0.2ms but old Arduinos had 4ms latency. Ideally this should be automatically calculated (ping-pong between PC and tester) and compensated for. Synchronous polling (like 1000Hz or 8000Hz mouse) can also make this error margin a little more predictable and compensatable.

-- Some photodiode circuits have a strange lag-behind effect. The simplest circuits can have some capacitance effects that shift the GtG curves a little, and thus change the cutoff threshold. It's subtle, but in an "exit-the-noisefloor" situation of darks, it can be a large error margin (>10%) with some circuits. Using a better circuit design and selecting a better opamp can improve accuracy quite a bit and lower the noisefloor further.

-- Long photodiode wires add noise to the noisefloor, e.g. a taped photodiode wired all the way to a distant oscilloscope's inputs. Especially if you're bypassing an opamp (not recommended to skip an opamp amplification stage for a photodiode tester; big difference in GtG10% trigger accuracy).
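
Regarding the USB ping-pong compensation in the first point above, here is a minimal sketch of the idea; pingPong() is a placeholder for whatever serial layer a tester actually uses, and the demo just sleeps to stand in for a real round trip.

Code: Select all

// Minimal sketch of the ping-pong compensation idea: measure many USB round
// trips, average, and halve to estimate the one-way latency to subtract from
// lag results. pingPong() is a placeholder for the tester's real serial layer.
#include <chrono>
#include <cstdio>
#include <functional>
#include <thread>

double estimateOneWayUsbLatencyMs(const std::function<void()>& pingPong,
                                  int passes)
{
    using clock = std::chrono::steady_clock;
    double totalMs = 0.0;
    for (int i = 0; i < passes; ++i) {
        auto t0 = clock::now();
        pingPong();   // send one byte to the tester, block until it echoes back
        auto t1 = clock::now();
        totalMs += std::chrono::duration<double, std::milli>(t1 - t0).count();
    }
    // Half the averaged round trip approximates one-way latency (assumes a
    // roughly symmetric link).
    return (totalMs / passes) / 2.0;
}

int main()
{
    // Demo with a stand-in "tester" that just sleeps ~1 ms per round trip.
    auto fakePingPong = [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    };
    printf("estimated one-way latency: %.2f ms\n",
           estimateOneWayUsbLatencyMs(fakePingPong, 50));
    return 0;
}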
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: AW3423DW officially available for everyone at dell website

Post by Chief Blur Buster » 20 May 2022, 17:17

Rewording your post to better forum decorum and "benefit of doubt" reviewer respect:
namcost wrote:
14 May 2022, 12:02
[Reworded with preferred forum decorum / reviewer respect]
Firstly, the HUB review for input lag seems inconsistent based on my testing experience, which may indicate their test methodology may be flawed. They are producing numbers that don't seem to make sense. Ask me how I know. Because they claim a processing lag of 4.7ms, processing plus refresh rate lag of 7.6ms, and processing lag plus refresh lag plus response time of 9.1ms.... Two of these numbers I know for a fact, at least based on the lag test methodology [describe here]. At 175hz, refresh rate lag would be 5.71ms because 1000/175hz = 5.71ms. According to HUB and their lag results inconsistent with mine, they are claiming refresh rate lag of 2.9ms (7.6ms minus 4.7ms = 2.9ms). That 2.9ms would be akin to 344.82hz.... we both know the Alienware is not capable of that, as its hard limit is 5.71ms aka 175hz. So they lie there. I have also done full 0-100 pixel response testing and posted my graph in this very thread I started. I get sub-1ms response times. Now HUB did average out to 1.5ms earlier in the video, and sure enough 9.1ms minus 7.6ms = 1.5ms. So they at least were honest about that. But their refresh rate lag does not make ANY logical sense. In fact I went through a few monitors in that list based on refresh rate and ALL OF THEM are rated faster, refresh-rate-lag wise, than they are actually capable of, though I define "refresh rate lag" as "scanout lag". I'm not sure how Hardware Unboxed defines "refresh rate lag", as I have a beef about the lack of disclosure of what "refresh rate lag" means from their perspective. The LG C1 was rated 0.5ms input lag (lie) and 4.7ms input lag + refresh rate. That's inconsistent with my data, as refresh rate lag for a 120hz display is 8.33ms. Which is basically double the rating HUB gives.... and we all know the LG televisions have AT LEAST 6ms input lag when gaming. HUB does not seem to produce results that I can trust. The only input lag results I will believe are usually TFTcentral, sometimes RTINGs depending on how different they are vs TFTcentral reviews. They are more trustworthy than Hardware Unboxed.

With that in mind, your issue with the display should not be input lag or processing lag. As an actual competitive gamer who is highly ranked in every FPS I play (Global Elite in CS:GO for example), I have had little to no issue adapting to the new display. I went from a 240hz 5ms-average display to the AW3423DW, which is 175hz and <1ms. Both displays are on par in terms of visual fidelity. There is literally zero input lag difference from other top end gaming monitors, and I'm not sure what they mean by "processing lag" because in my experience this is not easily measurable with the equipment they have at hand, unless they are doing something they're not disclosing. And even then, everything you plug into your PC has to be certified for LDAT, which means a handful of mice, only NVIDIA monitors, etc. And HUB gives CPU time from non-NVIDIA displays. Last I checked, LDAT won't even let you use it unless the display is NVIDIA, so they can't check FreeSync displays at all. I deem these very inconsistent results from HUB, and their test methodology & disclosure needs improvement.
The lack of disclosure of test methodology is a huge flaw, I agree. Blur Busters is looking into an advocacy drive to improve reviewer testing methodology -- see latency stopwatching start/stop.

Also, processing lag refers to the monitor motherboard (a tapedelay lag), not the CPU lag.
It's possible to measure with an ultra high framerate VSYNC OFF from a simple app, using:
"Lag of max Hz from Present()+Flush() to GtG2%, screen top, VSYNC OFF, room temp calibrated 20C±0.5C, 24hour pre-warmup"

So you timestamp right after Flush(). That's because the first pixels of the new frame starts transmitting out of the GPU output almost immediately, in a beam-raced fashion (see Tearline Jedi).

On CRT, there is practically almost 0ms lag between Flush() and the first pixel row immediately below a tearline. I measured the lag of Tearline Jedi demo, and I can confirm that ultra low latencies are possible if you surgically locate a photodiode right below a VSYNC OFF tearline, since VSYNC OFF frameslices are simply latency gradients of [0..frametime] along the vertical dimension of the frameslice. So to get as low numbers as possible, you need to measure right below a tearline, or just spray ultra high framerates (and average the VSYNC OFF lag).

Assuming you stopwatch to an early part of the GtG curve, you can get display processing lag (monitor motherboard only, not CPU) down to as low as ~0.5ms on some digital flat panels, but I agree 0.5ms is possibly an incorrect methodology when it comes to the OLED TV, since I know it often buffers a full frame rather than using rolling-window subrefresh processing -- unless the newer LG OLEDs now have subrefresh processing in Game Mode. If so, then brilliant: 0.5ms is realistic (it's achievable on CRT) if it's a beamraced tearline read or an ultra-high-framerate VSYNC OFF test designed to ignore GPU/CPU lag and simply measure GPU-output-to-photons lag (to isolate display motherboard processing lag). Then the 0.5ms lag is an honest reading under these criteria.

It would apply only to just right below a VSYNC OFF tearline -- aka a beamraced latency read or an ultrahighframerate VSYNC OFF latency read (and then averaged) -- sometimes a tester can just spray random tearlines (add 100 microseconds of Present() jitter, to help randomize it) and simply subtract half the trailing average frameslice latency from the final result to get a very accurate monitor-motherboard tapedelay lag (the industry standard "processing lag").

Since a randomized single-point photodiode read of a frameslice sees a latency adder anywhere in [0...frametime], you simply subtract frametime/2 to eliminate the frameslice scanout latency (without needing to beamrace the photodiode location precisely), and isolate your latency number to display processing lag -- it is possible to measure display processing lag this way via a generic non-LDAT Present()+Flush() stopwatch to an Arduino photodiode. This is because an average of random[0...frametime] repeated a thousand times will converge to (frametime/2). This is verifiable in a test, and thus provides the mathematical correction necessary to isolate display motherboard-specific processing latency.
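
Here is a minimal numerical sketch of that frametime/2 correction (the latencies are made up for illustration):

Code: Select all

// Minimal numerical sketch of the frametime/2 correction described above.
// With randomized VSYNC OFF tearlines at an ultra-high framerate, the photodiode
// lands a uniformly random [0..frametime] into each frameslice's scanout, so
// averaging many passes and subtracting half the trailing-average frametime
// isolates the monitor-motherboard processing lag. Numbers are made up.
#include <cstdio>
#include <numeric>
#include <vector>

int main()
{
    // Hypothetical Present()+Flush() -> GtG2% latencies (ms) at ~1000 fps VSYNC OFF.
    std::vector<double> measuredMs = {1.9, 1.2, 1.6, 2.1, 1.4, 1.8, 1.1, 1.7};
    double avgMeasured =
        std::accumulate(measuredMs.begin(), measuredMs.end(), 0.0) / measuredMs.size();

    double frametimeMs = 1.0;   // trailing-average frametime at ~1000 fps
    double processingLagMs = avgMeasured - frametimeMs / 2.0;

    printf("average measured lag: %.2f ms\n", avgMeasured);
    printf("minus frametime/2   : %.2f ms (isolated display processing lag)\n",
           processingLagMs);
    return 0;
}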

Assuming the test app just sprayed simple rectangles at 1000fps or 10,000fps, then you can pretty much measure the monitor motherboard's own processing delay (tapedelay lag) using any ordinary Arduino tester, via the ultra high framerate VSYNC OFF method. Many testers do this. RTINGS does the wise thing and publishes multiple lag numbers.

That being said, read my previous post to know how complicated latency stopwatching is.

Ideally, I think HUB should probably better inform about methodology -- and to improve their glossary of what they mean by "processing lag" and by their terminology, as well as disclose all variables of their latency stopwatch start/stop triggers.
namcost wrote:
16 May 2022, 15:52
[Reworded with preferred forum decorum / reviewer respect]
I can tell you right now. HUB was contracted by LG to change their latency test criteria, without disclosing to the public what latency test measurement thresholds were changed. They may have had to change thresholds upon request by LG. Otherwise they would lose out on free products to test. I can vouch for this information because I got this information from [please specify]. Because in the world of product reviews, everyone knows you get products for free (to keep) for favorable reviews. Look at reviews where HUB had to buy the display on their own. They have literally no patience for claiming a bad display to be good, whereas with LG paid reviews, they will say a bad display is actually "worth buying."
The reviewer industry is full of influence by free samples, indeed -- I agree.

RTINGS avoids this by buying from scratch. Others such as TFTCentral keep good reputation by refusing to vary their testing criteria upon request by manufacturers, despite receiving samples from manufacturers.

I would like all reviewers, industry-wide to publish better lag stopwatch stop/start criteria, and a proper glossary for their terminology such as "refresh rate lag" and "processing lag". There are already home theater standards for "processing lag" which actually refers to the monitor motherboard processing lag, not CPU lag. However, since you got confused by this -- I certainly have no issue blaming lack of disclosure by the majority of reviewers (Not just HUB, but also RTINGS and TFTCentral included -- it's an industrywide lag-test disclosure problem).

This is why I always do benefit-of-doubt approaches, because disclosure needs to be fixed before we can begin slandering numbers as lies -- that is out of bounds, because it's still within all the lag-test error margins of most/least conservative lag stopwatching methodology.

Also, a capped frame rate in a lag test, while running in Power Saver Mode (or sometimes Balanced), can sometimes trigger GPU power management that adds more GPU lag after Present() unless Flush(). And many reviewers don't bother to Flush() before getting a lag stopwatch-start timestamp, to reduce the pipelined-GPU-lag error margin -- if you're trying to measure a display processing lag and trying to cancel out CPU/GPU lag.

I do know RTINGS and TFTCentral use VSYNC OFF lag testing methodology for their tests, although the error margin is probably +/- 1ms (ish) due to lack of lag test disclosure (e.g. thresholds to trigger a timestamp for lag stopwatch start / lag stopwatch stop).

SMTT (that TFTCentral historically used in the past) is not as accurate as RTINGS' photodiode tester, and I was only able to find out that RTINGS use GtG2% as the latency stopwatch-end -- which means they don't include GtG lag in their lag-test results.

Even at 1000fps VSYNC OFF that many reviewer lag testers use (to get esports-relevant display lag numbers), I also don't think they subtract the requisite half-a-frametime latency (trailing average frametime) to isolate display lag away from GPU/CPU/driver lag. So that's about a 1ms error margin added there, from not knowing if they correct for frametime latency (either by beamracing methods or via mathematical methods like subtracting half a frametime for a tearing-randomized ultrahigh-framerate approach to VSYNC OFF lag testing).

I believe we'll launch some kind of lag-disclosure advocacy drive at some point.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


namcost
Posts: 21
Joined: 02 Dec 2021, 19:18

Re: AW3423DW officially available for everyone at dell website

Post by namcost » 23 May 2022, 14:58

Chief Blur Buster wrote:
19 May 2022, 11:59
I forgot to add additional considerations, important in the science of lag testing:

-- Some reviewers test over USB, which has a latency. Good lag testers can keep USB latency down to about 0.1-0.2ms but old Arduinos had 4ms latency. Ideally this should be automatically calculated (ping-pong between PC and tester) and compensated for. Synchronous polling (like 1000Hz or 8000Hz mouse) can also make this error margin a little more predictable and compensatable.

-- Some photodiode circuits have a strange lag-behind effect. The simplest circuits can have some capacitance effects that shift the GtG curves a little, and thus change the cutoff threshold. It's subtle, but in an "exit-the-noisefloor" situation of darks, it can be a large error margin (>10%) with some circuits. Using a better circuit design and selecting a better opamp can improve accuracy quite a bit and lower the noisefloor further.

-- Long photodiode wires add noise to the noisefloor, e.g. a taped photodiode wired all the way to a distant oscilloscope's inputs. Especially if you're bypassing an opamp (not recommended to skip an opamp amplification stage for a photodiode tester; big difference in GtG10% trigger accuracy).
Why use an Arduino? You can literally buy USB oscilloscopes.... like my Picoscope 2204a.... there is no reason to try to adapt an Arduino to do a task where dedicated products exist. To me that's just laughable. And then I also use the ThorLabs SM05PD3A mounted photodiode. I also happen to have bought their cable that converts SM to BNC to plug into the Picoscope.... It's an incredibly accurate combo. I dare you to buy them yourself. It's not that expensive: $160 for the Picoscope (more money for faster variants, but I bought their lowest end), then $80 for the photodiode and $20 for their SM to BNC cable.... test for yourself.

As far as noise floor, I suppose you mean how the lower and upper end of a reading will vary in voltage. You can EASILY find the min/max of that noise area and still find the true start/stop of a pixel change. Which is what I did for my AW3423DW test. I was LAZY on the XG2431 test and only did 10-90 but I wasn't feeling good back then so I wasn't up to taking the time to find start/stop times manually. Again I don't get paid to review, I do it for fun.

IF MY TOOLS can find the AW3423DW to have sub-1ms response times (0.1 to 0.8) then there is no reason why the tools PROFESSIONAL reviewers have should be significantly worse.... I admit I don't know everything about everything, but I do know my tools are cheap, and work exceptionally well. In the case of the CHG70 my times were almost perfectly matching most reviews. Slight variation but as you said that would be akin to panel lottery. I highly doubt my times for the AW3423DW were because of a "hot environment" as one of your other posts suggested.

And you are right, I am angry as hell. I am tired of reviewers not doing their job properly. I am tired of companies that make products getting away with consistently selling us garbage products. And the reviewers not blasting them for it. Even looking at the difference between gaming monitors and televisions, televisions never stopped innovating. Meanwhile gaming monitors are stagnant. We should be so far ahead right now. I am sure you could agree. You apparently work with monitors.... you know we could be further ahead but brands slack off.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: AW3423DW officially available for everyone at dell website

Post by Chief Blur Buster » 23 May 2022, 15:37

namcost wrote:
23 May 2022, 14:58
Why use an Arduino? You can literally buy USB oscilloscopes.... like my Picoscope 2204a.... there is no reason to try to adapt an Arduino to do a task where dedicated products exist. To me that's just laughable.
The problem is automating.

Automating a GtG heatmap is easier if you have a display and an oscilloscope co-operating each other to measure GtG color combinations. You can run a setup overnight to do 65,536 GtG measurements, with an Arduino-controlled oscilloscope.

In fact, did you know you can connect an Arduino/Raspberry Pi to a programmable external Tektronix oscilloscope connected to a ThorLabs photodiode? It's a bit expensive to have a high end oscilloscope that can be GPIO or I2C or API controlled, but more API-controllable external oscilloscopes are becoming available now. The cheapest API-controlled photodiode oscilloscope is an Arduino, unfortunately, and that's all many display reviewers can afford...

What API-controlled oscilloscope do you use to automate your GtG heatmapping?
[Edit: You just mentioned manual. Ouch.]
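
As an aside, a minimal sketch of what such an overnight automation loop could look like; the two callbacks are placeholders for whatever display-control and oscilloscope APIs are actually used:

Code: Select all

// Minimal sketch of an overnight GtG-heatmap automation loop. The two callbacks
// are placeholders for whatever display-control and oscilloscope APIs are used;
// nothing here is a real scope driver.
#include <cstdio>
#include <functional>
#include <vector>

using ShowTransition = std::function<void(int fromLevel, int toLevel)>;
using CaptureGtGms   = std::function<double()>;

std::vector<double> runGtGHeatmap(const ShowTransition& show,
                                  const CaptureGtGms& capture,
                                  int levels)
{
    std::vector<double> heatmap(levels * levels, 0.0);
    for (int from = 0; from < levels; ++from) {
        for (int to = 0; to < levels; ++to) {
            if (from == to) continue;                 // nothing to measure
            show(from, to);                           // test app flips the color
            heatmap[from * levels + to] = capture();  // scope reports GtG in ms
        }
    }
    return heatmap;   // 256 levels -> 65,536 cells, as in the overnight run above
}

int main()
{
    // Demo with dummy callbacks (real code would drive the test app + scope).
    auto show    = [](int, int) { /* flip the on-screen test colors */ };
    auto capture = []() { return 1.0; /* pretend every GtG is 1.0 ms */ };
    std::vector<double> heatmap = runGtGHeatmap(show, capture, 16);  // 16x16 demo
    printf("measured %zu transitions\n", heatmap.size());
    return 0;
}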
namcost wrote:
23 May 2022, 14:58
And then I also use the ThorLabs SM05PD3A mounted photodiode.
Excellent choice. Most reviewers don't use such high end photodiodes, and they don't even bother with a BNC cable. You did the right things, but that does not make the results of reviewers who skimped on equipment "a lie" -- just flawed equipment necessitating thresholds that may not do an accurate job of full-range GtG0%-100% measurements.
namcost wrote:
23 May 2022, 14:58
It's an incredibly accurate combo. I dare you to buy them yourself.
I guess you never saw the photo of the ThorLabs in the background of one of my tweeted photos (I'll have to dig it up).
I know, I know!

I am still developing a self-contained device that brings "better-than-many-pros" automated GtG heatmapping to end users; you saw it as my pinned tweet.

I am going to embed text-based lag disclosure measurements into the screenshots of the app, to force more sites to disclose VSYNC ON/OFF/VRR/etc and other parameters embedded into the image. It won't be as accurate as the ThorLabs, but it will be more accurate than a lot of pro-reviewer equipment/mistakes -- plus the error margins will be plainly embedded as text at the bottom of the fancy output graphics.

This is targeted to end users and manufacturers as well, as anybody can buy my testing device. I'm also recruiting peer review to analyze the device. The most expensive part is the software development necessary to have one-click GtG heatmapping (no manual stuff!)
namcost wrote:
23 May 2022, 14:58
As far as noise floor, I suppose you mean how the lower and upper end of a reading will vary in voltage. You can EASILY find the min/max of that noise area and still find the true start/stop of a pixel change.
That's correct, yes. In theory reviewers SHOULD be disclosing this ****! My beef is the lack of disclosure of noisefloors too -- they vary a lot between different reviewers. Also remember lots of reviewers calibrate to a photoshop-ready, well-color-graphed 120 nits before testing GtG's, so the darks are harder to measure than at full brightness too. There should ideally be multiple GtG heatmaps -- one color-calibrated, and one at full-bright HDR gamut, etc. The averaged GtG numbers are very different, even!
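
For illustration, one possible way to handle the exit-the-noisefloor problem (made-up samples; the 3-sigma choice is mine, not any reviewer's actual criterion):

Code: Select all

// Minimal sketch of one way to handle "exit the noisefloor": estimate the noise
// band from the pre-transition samples, then call the first sample that clearly
// leaves that band the true start of the pixel change. Samples and the 3-sigma
// choice are illustrative, not any reviewer's actual criteria.
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical photodiode samples: flat noisy black level, then a GtG rise.
    std::vector<double> trace = {0.010, 0.012, 0.009, 0.011, 0.010, 0.013,
                                 0.011, 0.030, 0.080, 0.200, 0.450, 0.800};
    const int baselineSamples = 6;

    // Mean and standard deviation of the pre-transition (noisefloor) region.
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < baselineSamples; ++i) mean += trace[i];
    mean /= baselineSamples;
    for (int i = 0; i < baselineSamples; ++i) var += (trace[i] - mean) * (trace[i] - mean);
    double sigma = std::sqrt(var / baselineSamples);

    double exitThreshold = mean + 3.0 * sigma;   // "exit the noisefloor" at 3 sigma
    for (size_t i = 0; i < trace.size(); ++i) {
        if (trace[i] > exitThreshold) {
            printf("noisefloor %.4f +/- %.4f, transition starts at sample %zu\n",
                   mean, sigma, i);
            break;
        }
    }
    return 0;
}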
namcost wrote:
23 May 2022, 14:58
Which is what I did for my AW3423DW test. I was LAZY on the XG2431 test and only did 10-90 but I wasn't feeling good back then so I wasn't up to taking the time to find start/stop times manually. Again I don't get paid to review, I do it for fun.
Manual? That's tough.

I thought you had an API-controlled oscilloscope when you published a GtG heatmap.
Sorry about that. I do grant that some pros are doing it automatically (they chose to spend on a lower-quality photodiode to gain the ability to automate) and other pros are doing it manually. Even TFTCentral still used SMTT for a long time for their lag tests, because nothing better was available within the skill levels of Simon Baker. Yet he does a really good job with super-limited equipment!
namcost wrote:
23 May 2022, 14:58
IF MY TOOLS can find the AW3423DW to have sub-1ms response times (0.1 to 0.8) then there is no reason why the tools PROFESSIONAL reviewers have should be significantly worse.... I admit I don't know everything about everything, but I do know my tools are cheap, and work exceptionally well.
Agreed.
As long as you're willing to spend extra time manually, you can get superlative equipment pretty cheaply.
namcost wrote:
23 May 2022, 14:58
In the case of the CHG70 my times were almost perfectly matching most reviews. Slight variation but as you said that would be akin to panel lottery.
Yes, and temperature and other panel settings (even 0.5C can change GtG numbers by 10%, and change lag numbers via the GtG-end cutoff threshold). Some panels also go into weird modes (e.g. buffering refresh cycles) when certain settings are changed (e.g. using a slightly lower Hz)
namcost wrote:
23 May 2022, 14:58
And you are right, I am angry as hell.
Me too, though I am way more nuanced with my words.

My feeling is the easiest way to educate reviewers is via diplomacy-based approaches, as Blur Busters handles an outsized role in educating reviewers. If I scream at them, they stop listening to me. If I critique them (like my paraphrased post), they start listening and make changes. It's a hard slog, much like my slow 60Hz-single-strobe advocacy and 1000Hz-is-useful-to-humankind advocacy, but I gotta be nuanced and diplomatic due to my outsized influence of reviewers.

I apologize if I pounced a little harshly on you, Blur Busters being an incubator of custom indie display tests for more than 500 content creators with a combined audience of more than 100 million people (LinusTechTips 14.5M, RTINGS 9.5M, etc, etc, to name only two). With such crushing weight on my shoulders -- I have to be as impeccable as possible YET also diplomatic to industry (reviewers, manufacturers).

Blur Busters metaphorically plays kind of an embassy/advocacy/education role between angry end users and the industry -- I gotta "speak to them in a way they can understand", so I do fight hard against politicspeak (like the current contemporary era of many countries' "LIES LIES / FAKE FAKE / FALSE NEWS" politics media blather, which I so vehemently despise from all sides that it makes me want to completely tune out and stick to 100% geekiness). It's tough.

I'm only human, y'know. Apologies.

As a nuts and bolts "indie tests incubator" (both for our own tests and in influencing forum members to develop tests -- including you, who made your own tests inspired by Blur Busters) -- with 100M people worldwide viewing reviewers who use my tests, I have a heavy weight on my shoulders!
namcost wrote:
23 May 2022, 14:58
I am tired of reviewers not doing their job properly. I am tired of companies that make products getting away with consistently selling us garbage products. And the reviewers not blasting them for it. Even looking at the difference between gaming monitors and televisions, televisions never stopped innovating. Meanwhile gaming monitors are stagnant. We should be so far ahead right now. I am sure you could agree. You apparently work with monitors.... you know we could be further ahead but brands slack off.
Totally fair criticism of industry. 100% agree.

That's why I am going to hard-code honesty into the screenshots output by my testing device, if I can release it. I might even use steganography for the integrity check (e.g. SHA2 hash or other) to detect whether the honesty was manually edited.

For me, it is hard to carefully release Blur Busters' first hardware product that is intended to be 100% API controllable -- I don't want to be a Failed Kickstarter or hurt reputation if my numbers output slightly differently than other cheaper/more expensive equipment.

Eventually (maybe not initially), my aim is to also add a feature that embeds a hash to detect unmodified screenshots of the tester graphs/output, since it will publish testing disclosures in text at the bottom (statistics / telemetry / settings / etc) with a versioning mechanism so that missed telemetry can be added later and version number bumped (much like RTINGS test version numbers or such). So when they publish the screenshots on blogs/etc, the accuracy of the test can be verified. I could even embed the nits / noisefloor detected too, if necessary into the "geek stats". Even the telemetry can include things like the Device ID detected from the EDID, so I know it's the correct display being tested, etc. At least with that info, it becomes easier to call out mistakes, accidents, or even deceptiveness - etc. Just like my sync track in pursuit photos is already a visual certificate of camera tracking accuracy, I'm a fan of embedded accuracy automatically built into the results...

Automation reduces the problems of laziness, since I can use many algorithms to guarantee that the user places the photodiode sensor in the location it needs to be (e.g. so not incorrectly passing off top measurements as bottom, and vice versa, etc), to avoid unintentional errors or intentional deception. There are many ways to make it plug-n-play.

It's scary, y'know. I need good beta-test vetting, and I may even opensource elements of this [still figuring out details -- e.g. give users a choice of purchasing a fully built product, plus give users / lower-budget reviewers a choice to build their own opensourced product (though that won't utilize the SHA2-honesty-steganography or such, since I can't guarantee the veracity of end-user opensource builds)]. I'm still hunting for the free:paid ratio compromise, if I go with both free & paid approaches.

It's squarely in between, but its bang for buck will be excellent for an API-automatable product. With an API, the goal is you can run all display tests overnight (GtG including heatmaps, MPRT, VRR, lag, simulated pursuit camera photographs, etc, of multiple Hz and multiple sync technologies) after one button click per screen position. End users will have more power than 90% of display reviewers, and display reviewers/manufacturers will also be forced to buy our product or upgrade their existing product.

Initially, the feature set will be limited, but this is the goal. And it won't be as accurate as a ThorLabs -- but it doesn't need to be, to get quite useful data.

I'm still thinking through many approaches that simultaneously pay the bills yet fully preserve Blur Busters' reputation as THE incubator of free indie display tests. We shall see!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Dirty Scrubz
Posts: 193
Joined: 16 Jan 2020, 04:52

Re: AW3423DW officially available for everyone at dell website

Post by Dirty Scrubz » 29 May 2022, 17:24

Tom Clancy's E-sport wrote:
14 May 2022, 11:21
Boop wrote:
10 May 2022, 00:45
  • AW3423DW VSYNC OFF, GSYNC OFF, Uncapped FPS, did not feel as smooth as GSYNC ON for competitive FPS games. I assume it's caused by the large screen size, 175Hz, and screen tearing with a near zero pixel response time. If I played a game with 1000fps (Quake, Diabotical) it felt very smooth, but a game near 175-300fps felt like a downgrade compared to GSYNC ON.
  • XG2431 240Hz with VSYNC OFF, PureXP Normal, Uncapped FPS, feels much smoother to me and easier to see targets.
  • AW2521H 360Hz with VSYNC OFF, Uncapped FPS, feels smoothest but blurry compared to the others without ULMB turned on.
The AW3423DW is a treat for casual gaming but I prefer the XG2431 for competitive shooters. I'm looking forward to future QD-OLED displays with BFI/Strobing and 240Hz+ refresh rate.
After dealing with the AW3423DW for several weeks, I cannot do the same effective 180° shot in games with it. Some reasons that make it slower than the Zowie monitors out there are the higher input lag, longer GPU rendering time, and lower refresh rate.

The conclusion is the AW3423DW is good as a casual monitor but not a snappy competitive monitor. Another issue is that the brightness is quite low. It's kind of hard to believe the 2017 Zowie XL2546 with strobing displays a smoother, brighter image than the AW3423DW.
I had a Zowie XL27464S and it was worse, whether strobing or not, vs this AW QD-OLED in competitive shooters (WZ, Apex Legends, CS:GO etc). Even with BFI, you still have overshoot ghosting that is never solved for, whereas the OLED doesn't have that. Latency feels better on the OLED as well, from pressing the mouse to seeing the reaction on screen. I think HUB had an agenda against this display; they tried their best to convince their readers the screen was gray in regular room lighting, and then I called them out on it on Twitter w/photos of my setup. They then claimed that THREE overhead lights is "dim" and that in their viewpoint, a bright workplace is basically aiming the sun or direct light at the display. After that I couldn't take them seriously for display reviews any longer.
