FRC is essentially temporal dithering to spread color depth over multiple refresh cycles.
Some technologies do temporal dithering in a sub-refresh manner (e.g. 1-bit 1440Hz temporal dithering for DLP chips, or 600Hz plasma subfields), but LCD FRC is temporal dithering at the full refresh cycle level, which means it intentionally uses multiple refresh cycles to generate improved color depth.
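If you want to see the core idea in code, here's a minimal Python sketch (my own illustration with a made-up function name, not any vendor's actual TCON algorithm) of a 6-bit panel faking 8-bit by cycling between two adjacent levels across 4 refresh cycles. Real panels also add per-pixel spatial offsets so neighbouring pixels don't blink in unison:

[code]
def frc_6bit_from_8bit(value_8bit: int, refresh_index: int) -> int:
    """Return the 6-bit level to display this refresh for an 8-bit target."""
    base = value_8bit >> 2          # nearest lower 6-bit level (0..63)
    fraction = value_8bit & 0b11    # leftover 2 bits: 0..3 quarters
    # Show the next-higher 6-bit level on 'fraction' out of every 4 refreshes,
    # so the time-average approximates the 8-bit target.
    bump = 1 if (refresh_index % 4) < fraction else 0
    return min(base + bump, 63)

# Example: 8-bit value 130 -> 6-bit base 32, remainder 2, so the panel shows
# 33, 33, 32, 32 across 4 refreshes -> time-average 32.5 = 130/4.
print([frc_6bit_from_8bit(130, i) for i in range(4)])   # [33, 33, 32, 32]
[/code]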
The easiest explanation is digital bandwidth limitations in the panel. It's an incredible amount of bandwidth at the pixel clock level. Pushing over half a BILLION pixels per second is an insane Hoover-Dam-burst of data at the panel/TCON level that sometimes needs FPGA/ASIC-level processing. Doing it more cheaply by reducing bandwidth by 25% (6bit instead of 8bit) can save enough cost to use cheaper electronics in the panel/TCON (that motherboard built into the rear of the LCD panel). It can mean the difference between a $400 monitor and a $500 monitor.
Math: 1920 x 1080 x 240 = about half a billion pixels per second (not including blanking intervals or subpixels)
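Putting a time budget on that (simple arithmetic, active pixels only -- blanking intervals make the real pixel clock even higher):

[code]
# Active-pixel rate at 1080p 240Hz, and the per-pixel time budget it implies.
pixels_per_second = 1920 * 1080 * 240      # 497,664,000 ~= half a billion
ns_per_pixel = 1e9 / pixels_per_second
print(pixels_per_second)                   # 497664000
print(round(ns_per_pixel, 2))              # ~2.01 nanoseconds per pixel
[/code]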
The otherwise-digital LCD pixels are inherently analog at the final molecular level (they're just rotatable liquid crystal molecules between two polarizers that block/unblock light), and the panel/TCON essentially has to do half a billion digital-to-analog conversions per second. It's easier to do a 6-bit digital-to-analog conversion than an 8-bit digital-to-analog conversion when you only have nanoseconds to do it for a single pixel.
In other words, think of continually readjusting half a billion analog room dimmers per second, literally. Your living room dimmer with the slider or analog knob? Each subpixel is like that, for one refresh cycle. The monitor's chip has to readjust half a billion of them per second, setting the correct amount of electricity going through each pixel. That's tough to do at full bit depth cheaply, while also trying to control all those analog GtG transitions for every single subpixel, every refresh cycle. It's already an engineering miracle that LCDs are cheaper than $5000 today.
Very old LCDs, like an early portable TV or a 1994 ThinkPad, had a lot of pixel noise; the electronics weren't able to adjust pixels as accurately as they can today.
LCDs weren't originally meant to drive full screens -- the technology was invented for wristwatches and calculators in the 1970s. Today LCD drives beautiful retina smartphone screens, fancy 4K UHD TVs, and ultra-high-Hz gaming monitors.
8bit equals 256 levels of grey per color channel (red, green, and blue). Imagine 256 dimming levels for an LCD wristwatch, and having to repeat those 256 levels for each subpixel, hundreds of times per second (240 times per second for 240Hz).
The LCD in a gaming monitor still obeys exactly the same laws of physics as a 1970s wristwatch. Just way more pixels, and those monochrome LCD pixels filtered to color.
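To put numbers on the wristwatch analogy (just arithmetic at 1080p 240Hz):

[code]
# Every red, green and blue subpixel gets re-driven to one of its grey levels
# each refresh cycle.
subpixel_updates_per_second = 1920 * 1080 * 3 * 240
print(subpixel_updates_per_second)   # 1492992000 -> ~1.5 billion per second
print(2 ** 8, 2 ** 6)                # 256 levels at 8-bit vs 64 levels at 6-bit
[/code]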
So giving up a few bits to make the firehose easier to manage can save quite a bit of cost: 6bit+FRC (for 8bit) or 8bit+FRC (for 10bit). An algorithm virtually identical to FRC can also be done at the GPU level. NVIDIA GPUs automatically do that (GPU-level temporal dithering) when the DVI or DisplayPort link is put into 6bit mode, such as during the 220Hz overclocking of a BenQ XL2720Z.
A pixel clock of half a billion pixels per second is about 12 gigabits per second at 24 bits per pixel (8+8+8) but only about 9 gigabits per second at 18 bits per pixel (6+6+6).
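Same arithmetic as a bandwidth comparison (active pixels only, no blanking or link overhead):

[code]
pixels_per_second = 1920 * 1080 * 240            # ~497.7 million
print(pixels_per_second * 24 / 1e9)              # ~11.94 Gbit/s at 8+8+8
print(pixels_per_second * 18 / 1e9)              # ~8.96 Gbit/s at 6+6+6
[/code]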
Aldagar wrote: 07 May 2020, 13:10
And what advantages does FRC give to standard sRGB monitors? Maybe it improves accuracy by reducing quantization errors?
Banding is visible even at 10-bits on a 10,000nit HDR monitor.
FRC is still advantageous even at 8bits (to generate 10bits) and 10bits (to generate 12bits).
Banding artifacts show up in smoky haze, sky, or other gradients. Adding extra bits can eliminate that.
At 400 nits, having only 8 bits is okay. But if you add 10x more brightness range, you need roughly 10x more code values to keep the steps invisible... even 10 bits is not enough to eliminate banding on a 10,000-nit HDR display. That's why we've started using floating-point color and/or 12-bit color to eliminate banding in these ultrawide gamuts.
So FRC is still super-useful even with 10bit HDR displays to generate 12bits.
Temporal artifacts from FRC are much less noticeable for 8bit+FRC than for 6bit+FRC.
You could dynamically shift the gamut around (e.g. dynamic contrast), but HDR is superior because it's per-pixel dynamic contrast. For that, you need humongous bitdepth. Imagine 8-bit color combined with extra bits for the extra backlight brightness levels, to create 12 bits. Sometimes even 12-bit linear is not enough to eliminate banding in extreme-HDR situations, so you have those new 16-bit and floating-point color depth systems.
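A back-of-envelope sketch of why peak brightness eats bit depth, assuming plain linear-light quantization (the worst case; real HDR signals use nonlinear transfer functions, but the scaling problem is the same):

[code]
# Size of one quantization step, in nits, for a linear encoding at different
# bit depths and peak luminances.
for peak_nits in (400, 10_000):
    for bits in (8, 10, 12):
        steps = 2 ** bits - 1
        print(f"{peak_nits:>6} nits, {bits:>2}-bit linear: "
              f"{peak_nits / steps:.3f} nits per step")

# 10-bit linear over 10,000 nits is ~9.8 nits per step -- a big, visible jump
# in dark scenes, hence 12-bit, floating-point, or FRC-extended bit depth.
[/code]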
10bit is usually good enough, but the limitations certainly show, especially when breaking past 1,000 nits into 10,000-nit territory. I saw demo HDR material on prototype 10,000-nit displays and the effect is quite impressive -- everything from streetlamps and neon lights in night scenes to sun reflections off a shiny car ("whites far brighter than white" on those reflections).
It's an amazing sight to behold, but it gobbles oodles of color depth. Many of these displays can't do this bit depth natively and are forced to use internal FRC-style algorithms to keep up with the expanded HDR color depths. Sometimes it's done at a faster-than-content rate (e.g. 120Hz FRC on 60Hz material). Still much more natural-looking than ultra-high-Hz DLP.