Response Time Normalization

masneb
Posts: 239
Joined: 15 Apr 2019, 03:04

Response Time Normalization

Post by masneb » 20 Jul 2019, 01:41

So, a weird and counterintuitive topic here: why can't response times be normalized to the lowest common denominator? Obviously the transition between one pair of colors can be faster than between others, but why can't the monitor artificially slow those transitions down to make things more seamless, giving the slower pixels time to catch up? Basically anti-overdrive... underdrive?
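
To make the idea concrete, here is a minimal sketch (my own made-up numbers, not from any real panel): take a hypothetical measured GtG matrix and work out how much each transition would have to be slowed so they all finish in the time of the slowest one.

Code:

def underdrive_factors(gtg_ms):
    """How much each transition would need to be slowed (factor >= 1.0)
    so that every transition finishes in the same, slowest time."""
    slowest = max(max(row) for row in gtg_ms)  # worst-case GtG on this panel
    return [[slowest / t if t > 0 else 1.0 for t in row] for row in gtg_ms]

# Tiny hypothetical 3-level "panel", times in milliseconds (made up):
gtg_ms = [
    [0.0, 4.0, 9.0],   # from level 0 to levels 0, 1, 2
    [6.0, 0.0, 3.0],   # from level 1
    [8.0, 5.0, 0.0],   # from level 2
]
print(underdrive_factors(gtg_ms)[0][1])   # 2.25 -> the fast 4 ms transition gets slowed ~2.25x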

Why? Just like with LightBoost tech, this is about hiding the transition and making the picture more fluid. Since motion clarity often seems to stem from the picture being recognizable and consistent, the less variance the better. When you look at the graphs on TFTcentral, some monitors have high variance between response times, which I would argue makes things a lot worse. It's not just about the minimum or average time, but the variance in the distribution.
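
To put the variance point in numbers, a toy comparison (made-up figures): two panels with the same average GtG but very different spread.

Code:

from statistics import mean, pstdev

panel_a = [4.0, 4.5, 5.0, 5.5, 6.0]   # consistent transitions (ms)
panel_b = [1.0, 2.0, 5.0, 8.0, 9.0]   # same average, wildly inconsistent

for name, times in (("A", panel_a), ("B", panel_b)):
    print(name, "avg:", mean(times), "ms, spread:", round(pstdev(times), 2), "ms")
# A avg: 5.0 ms, spread: 0.71 ms
# B avg: 5.0 ms, spread: 3.16 ms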

I could see manufacturers being against this, since it basically means publishing worse response times than the panel is capable of, but for consumers I think this would be a huge bonus. About the closest thing to it right now is picking a monitor that already has low variance in its response times (one of the reasons I'm interested in the XF252Q). Of course, tons of other things play into motion clarity, so getting that and everything else being kosher seems like a long shot.

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Response Time Normalization

Post by Chief Blur Buster » 20 Jul 2019, 04:08

They already do that in some models, for better strobing! NVIDIA does what you describe for ULMB: equalizing pixel response times while keeping them as low as possible. This is part of the GSYNC cost premium.

There are over 60,000 different pixel response times on the same panel (256 x 256 for 8-bit), depending on the source color and the destination color, and even on the history of the previous two colors. On top of that, refresh time AND temperature each change those response times independently and individually.
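
As a rough illustration of that scale (just a sketch, not actual monitor firmware): one entry per (source, destination) pair of 8-bit levels is already a 256 x 256 table.

Code:

LEVELS = 256

# Hypothetical per-transition table: overdrive_lut[src][dst] would hold the
# drive value actually sent to the panel for that transition (identity here,
# i.e. no overdrive at all).
overdrive_lut = [[dst for dst in range(LEVELS)] for _ in range(LEVELS)]

def drive_value(src: int, dst: int) -> int:
    return overdrive_lut[src][dst]

print(LEVELS * LEVELS)       # 65536 distinct source->destination transitions
print(drive_value(0, 255))   # identity table: just 255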

In addition to simple source-destination, there are many other factors:

If a pixel is black-black-black for 3 refresh cycles, it responds slightly differently than if it was black-white-black. Even the pixel polarity inversion (positive versus negative drive voltage), as in black-black-white versus black-white-black, can create a subtle difference.

If a pixel is at 85 degrees F in the hot corner near the power supply, and 65 degrees F at the cold top edge in a cold room, it responds differently too. Ever forgotten a smartphone in a cold car in the middle of winter? The screen responds slowly when cold and fast when hot. Now imagine the same panel in a cold room, in a hot room, or with one edge of the panel hotter because the power supply sits behind the monitor. VA is affected especially noticeably - some panels show visibly less strobe crosstalk after warming up for 30 minutes.

For VRR, if the prior refresh-time history was 1/144sec - 1/120sec - 1/144sec, the next transition can respond slightly differently than if the refresh-time history was 1/80sec - 1/100sec - 1/144sec.

Now multiply ALL of the above by the 256x256 source-destination pairs of the 8-bit color space (there's a back-of-envelope sketch of this below).

Now add microsecond variances from panel to panel, because the grid-array microwires are never the exact same number of atoms thick, so they never have exactly the same ohms of conductivity. You see some of this as greyfield splotching and/or nonuniformities, or as vertical streaks or corners that show scanlines while other areas don't, when limits are pushed (e.g. max Hz, or strobe mode, etc.), or as other strange nonuniformities.

There are so many ways for GtG to become inconsistent.
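
A back-of-envelope sketch of the "multiply all the above" point (the per-factor counts below are illustrative guesses, not real firmware parameters):

Code:

src_dst      = 256 * 256   # 8-bit source x destination levels
history      = 4           # a few quantized previous-frame states
refresh_bins = 8           # quantized VRR refresh-interval bins
temp_bins    = 4           # coarse panel-temperature bins

print(src_dst)                                        # 65536
print(src_dst * history * refresh_bins * temp_bins)   # 8388608 entries

And that still ignores the panel-to-panel variation, which cannot be baked into a single factory table anyway.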

No wonder the native NVIDIA GSYNC chip is expensive but worthwhile to some of us: it shows noticeably higher quality overdrive (especially with VRR) and usually less strobe crosstalk (ULMB mode). But even then, it's not perfect.

In some of the products (like GSYNC HDR with local dimming), the pixel-response-normalizing overdrive algorithms in use are quite advanced formulas, over a thousand times more complex than simple overdrive. Those take several person-years to engineer, focusing only on trying to invent perfect-enough overdrive.

The invention of 3D tech (LightBoost) required advanced overdrive normalization, so LightBoost overdrive was literally 100x better than anything that came before, because every single color combination had to be individually calibrated with its own independent overdrive per GtG pair, mostly to finish GtG inside the VBI (the interval between refresh cycles). But overdrive sometimes has no headroom below black or above white, so GtG is necessarily slower at the clipped/saturated colors, and sometimes you have to reduce gamut (poorer colors) to improve overdrive equalization. A very severe engineering pick-your-poison.
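
A sketch of the "no headroom" problem (illustrative numbers, not any real overdrive algorithm): the overdriven value gets clipped to the panel's 8-bit drive range, so transitions that end near full black or full white cannot be pushed any harder.

Code:

def overdriven(src: int, dst: int, boost: float = 1.5) -> int:
    """Overshoot the target in proportion to the step size, then clip."""
    raw = dst + boost * (dst - src)        # push past the destination
    return max(0, min(255, round(raw)))    # no headroom beyond 0..255

print(overdriven(64, 128))   # 224 -> plenty of headroom, pixel driven hard
print(overdriven(64, 255))   # 255 -> clipped, cannot drive past full white

This is also why reducing the gamut (so "white" is no longer the panel's absolute drive limit) can buy back some overdrive headroom, at the cost of poorer colors.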

LCD (Liquid Crystal Display) pixels are molecules with momentum. The momentum/stiction/friction of each pixel is not perfectly identical under all conditions (temperature, previous state, next state, voltage, duration of the actuation current, how soon an actuation follows the previous ones, etc.).

Considering how crappy the original passive-matrix LCDs were in the 1980s, we've come a long way, and from what I see at conventions, LCD continues to improve and will remain a horse in the race, alongside OLED (and others), for many decades to come, solving more and more of these issues.

That said -- pixel response engineering is a horrendously huge Pandora's box.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


masneb
Posts: 239
Joined: 15 Apr 2019, 03:04

Re: Response Time Normalization

Post by masneb » 20 Jul 2019, 08:05

Yeah, I never said it would be easy, but it's definitely something that would be useful. If the GSYNC spec did that, I would expect pixel response times to be more in line with each other than they are on a lot of models. ULMB is just another way of looking at something similar, where they hide the transition instead of normalizing it. I'm personally not a fan of ULMB at all, so that tech just doesn't do it for me; something like this might, however.

There are a lot of variables for sure, but just getting something out the door is a start, and nothing needs to be perfect from day one. Good to see you find the benefit in such a technique as well.
