Sparky wrote:
I don't think you fully understand the problem. To start, you need high persistence at low framerates, to avoid flicker and PWM artifacts. You then need to smoothly transition to high framerate low persistence in order to minimize motion blur, without any perceptible change in brightness or colors. THEN you need to figure out how to do all of those things when you don't know how long it's going to take to get the next frame. I think it's solvable, but it's as much a biology problem as an electronics one.
I don't think any of us enthusiasts fully understand the problem, but as a practicing computer engineer I'm pretty confident I get at least the major relevant concerns, and I still don't think it's rocket science.
Given LED backlight driving circuitry that can produce sharply transitioning, variable-amplitude rectangular output, "PWM" (really, only the sub-case of multiple pulses per input frame) isn't even relevant. The only issue is matching human-perceived brightness between the dimmed holding periods and the pulsed strobes within their larger unlit blanking windows. Finding the matching strobe-to-blank-width ratio for each holding dimness level along a backlight brightness curve is nowhere near as hard a problem in human vision as you seem to think, and once you have that table, you can have arbitrarily long, short, or nonexistent holding intervals with zero effect on perceived net brightness.
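To make that concrete, here's a minimal sketch of what such a parity table could look like, assuming (to first order) that perceived brightness tracks time-averaged luminance over the window; in practice each entry would be measured on the actual panel, and every name and number below is made up for illustration:

```c
#include <stdint.h>

/* One brightness-matched entry per user brightness step. Calibrated so that
 * a strobe of strobe_width_us at strobe_level, inside a 1000 us blanking
 * window, is perceived as bright as holding continuously at hold_level. */
typedef struct {
    uint16_t hold_level;      /* drive level during the dimmed holding period */
    uint16_t strobe_level;    /* drive level during the strobe pulse          */
    uint16_t strobe_width_us; /* pulse width inside the unlit blanking window */
} parity_entry_t;

static const parity_entry_t parity_table[] = {
    /* hold, strobe, width -- illustrative numbers only */
    {  64, 1023,  63 },
    { 128, 1023, 125 },
    { 256, 1023, 250 },
    { 512, 1023, 500 },
};
```

Because each entry is brightness-matched up front, the controller can stretch, shrink, or drop the holding interval on the fly (late frame, early frame, back-to-back strobes) without the user perceiving any net brightness change.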
(Trying to dynamically adjust the holding-interval brightness, etc. instead of this is a fool's errand, requiring multi-frame buffering/lag to avoid flickering artifacts under varying frame rates. I suspect this approach/line of thinking is why so many in this community think this is a highly difficult problem.)
You're then just left with:
- defining the strobe shape (quick/bright/sharp vs. broader/dim/blurry) at different Hz/brightness ranges, which is only a single variable whose usable range shrinks as brightness rises anyway, and
- defining a function for how gradually the panel sharpens/blurs motion as frame intervals change (a rough sketch follows below).
Determining the strobe/dim parity tables is best done by individual display manufacturers, and the configurable motion-clarity parameter space is ideally left open for end users themselves to twiddle via their OSDs.
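For the second item above, a rough sketch of the kind of interpolation I mean: one scalar "clarity" knob (the thing you'd expose on the OSD) plus a smoothing function of the recent frame interval decide how sharp (short/bright strobe) versus blurry (long/dim strobe, more holding) the backlight behaves. All names and constants here are invented; the actual curve is exactly the per-panel tuning I'm saying manufacturers should own:

```c
#include <stdint.h>

typedef struct {
    uint32_t strobe_width_us; /* shorter pulse = sharper motion            */
    uint32_t strobe_level;    /* brighter pulse compensates the narrowing  */
} strobe_shape_t;

/* Linearly blend between a "sharp" and a "blurry" shape, t in [0,1]. */
static strobe_shape_t blend_shape(strobe_shape_t sharp, strobe_shape_t blurry, float t)
{
    strobe_shape_t out;
    out.strobe_width_us = (uint32_t)((float)sharp.strobe_width_us +
        t * ((float)blurry.strobe_width_us - (float)sharp.strobe_width_us));
    out.strobe_level = (uint32_t)((float)sharp.strobe_level +
        t * ((float)blurry.strobe_level - (float)sharp.strobe_level));
    return out;
}

/* As frame intervals grow (lower Hz), ease toward the hold-heavy shape to
 * avoid visible flicker; user_clarity in [0,1] biases how long the panel
 * stays in the sharp regime (1 = keep it sharp as long as possible). */
static strobe_shape_t shape_for_interval(uint32_t frame_interval_us, float user_clarity)
{
    const strobe_shape_t sharp  = {  250, 1023 }; /* ~120 Hz and up   */
    const strobe_shape_t blurry = { 8000,  128 }; /* ~40 Hz and below */
    float t = ((float)frame_interval_us - 8333.0f) / (25000.0f - 8333.0f);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return blend_shape(sharp, blurry, t * (1.0f - user_clarity));
}
```

The point isn't this particular ramp; it's that the "smooth transition" Sparky describes collapses to one tunable function of frame interval once the brightness parity table already guarantees the transitions themselves are invisible.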
Sparky wrote:
What's actually needed to do that research? Tens to hundreds of different panels, different panel technologies, engineers to interface prototype scalers to different panels and calibrate them, test subjects (young people, old people, different types of colorblindness, etc.), and a lot of time to spend working on the problem.
I agree that each panel needs to be independently parameter-tuned, but the problem isn't research. It's just a matter of scaler ASICs exposing backlight driver timing/level controls via firmware tables, and letting monitor manufacturers tweak those tables to their individual preferences. It escapes me why you think it's necessary for Nvidia to do anything more than sell a flexibly configurable component, unless you actually want them to reduce the monitor manufacturers to mere component vendors for Nvidia's own monitors.
Hell, you could even let monitor firmware writers expose the sharpness/jumpiness vs. blurriness levels to end users via the OSD, etc., without needing a Detonator driver or whatever to play with things.
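To put that division of labor in concrete terms, this is roughly the shape of interface I'm imagining the scaler vendor would ship; it's a hypothetical layout, not any real scaler's firmware, but it shows who would own which fields:

```c
#include <stdint.h>

#define PARITY_STEPS 64

typedef struct {
    /* Filled in by the monitor manufacturer, per panel, at build time. */
    uint16_t hold_level[PARITY_STEPS];
    uint16_t strobe_level[PARITY_STEPS];
    uint16_t strobe_width_us[PARITY_STEPS];
    uint32_t min_frame_interval_us;
    uint32_t max_frame_interval_us;

    /* Exposed to the end user through the OSD; no GPU driver involved. */
    uint8_t user_brightness;  /* index into the parity tables above       */
    uint8_t user_clarity;     /* 0 = smooth/blurry .. 255 = sharp/strobed */
} backlight_config_t;
```

The scaler vendor's job ends at executing whatever is in this structure; everything panel-specific lives in the tables, and everything preference-specific lives in the two OSD fields.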
Sparky wrote:
Think about why g-sync exists in the first place. Do you really think Nvidia would have gotten involved in a traditionally low margin industry if the major display manufacturers didn't need a kick in the pants?
Yes, the industry needed to be shaken up, but G-sync seems like a dubiously over-engineered approach to the problem.
tl;dr: after matching a panel's perceived strobe/dimming levels, controller-created artifacts are easily avoided, and users are left only to determine and configure their personally preferred motion clarity levels across different brightness × Hz ranges.
I get that bringing new technology to market is a big overall undertaking, but the underlying core engineering problem is vastly overstated IMO, mostly due to Nvidia being themselves and gamer "journalism".