Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Colonel_Gerdauf » 24 May 2022, 17:31

Hello people,

I have been seeing some places praise BFI as if it is the second coming of Jesus. This actually irritates me, as all that praise and positive PR obscures some remarkably obvious drawbacks from the perspective of a regular user/viewer. For example, I have seen no end of people willfully minimizing the eye strain that the flickering produces. Not to mention, all of this secondary advertising screams 3D TV to me, and we all know how that turned out.

Anyways, after trying to propose alternate solutions to the way BFI works on Discord, here I am to finally, hopefully, drive the point home.

Anyways, I think we are all familiar with the (comically unhelpful) adage that to make a complaint, one must offer a solution?

And this is the first of at least a few solutions I have to offer.

To be quite frank with you, the way BFI works -- full screen on, full screen off, at a clock set relative to time (I will get to the latter in another solution later) -- is mediocre and bare-bones as functional systems go, and again it baffles me how people singing BFI's praises do not care about the drawbacks, or about people's feedback on those drawbacks.

So, here is an interim solution: instead of flickering the whole screen, why not flicker "halves" of it at alternating intervals?

Video illustration (embedded): a Blender video rendered at 48Hz.

The reason why I picked 24Hz for a source video is that it highlights the differences between implementations the best, and makes the problems with current BFI very clear. I put the seizure warning there for a reason. For a lot of people, myself included, 120Hz BFI is not a whole lot better, which is part of why I have zero interest in VR at the moment.

Anyways, here are some illustrations to highlight what is going on. The vertical case is straightforward, as it is basic interlacing, and it comes with all of the visual artifacting of interlacing of old. Checkerboarding is where it gets interesting. Here is a checkerboarding pattern at 2x resolution:
Image

And here is checkerboarding at 4x resolution:
Image

Now... what makes me think these are better than standard BFI? Just look at how they all work. They split the image into halves, only one of which is lit at any one time. That means each piece of frame data is shown in one spot, only for an interval, before switching back to black, which delivers the supposed benefits of BFI at full capacity. But because half of the image is always showing at any instant, disruptions to persistence of vision, and in particular the negative side effects of flickering, are limited.
In the particular case of the 4x checkerboard, imagine that each pixel box gets shown for three intervals and then turns off for one, meaning each pixel is on for three intervals for every one interval it spends off.
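To make the idea concrete, here is a minimal sketch (my own interpretation in Python/NumPy, not code from this post) of the two split-blanking masks described above: a 2-phase checkerboard where half the pixels are lit at any instant, and a 4-phase variant where each pixel is lit for three sub-intervals and blanked for one.

Code:

import numpy as np

def checkerboard_mask(h, w, phase):
    """2-phase pattern: True = pixel shown. Alternating pixels swap every phase."""
    ys, xs = np.indices((h, w))
    return (ys + xs) % 2 == (phase % 2)

def rotating_quad_mask(h, w, phase):
    """4-phase pattern: within every 2x2 tile, one position is blanked per phase."""
    ys, xs = np.indices((h, w))
    slot = (ys % 2) * 2 + (xs % 2)   # position 0..3 inside the 2x2 tile
    return slot != (phase % 4)       # only that position is dark this sub-interval

frame = np.random.rand(1080, 1920)
for phase in range(4):
    # 'shown' is what the panel would display during this sub-interval
    shown = np.where(rotating_quad_mask(1080, 1920, phase), frame, 0.0)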

This should be a sample of one way to improve the BFI implementation. It has the positive effects of BFI in full (as far as I care to notice, anyway), and the negative side effects are significantly reduced. I will talk in other parts about other specific issues with BFI and how to address each of them on their own.
Last edited by Colonel_Gerdauf on 26 May 2022, 19:15, edited 6 times in total.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: A better BFI (1) - Interlacing and Checkerboarding

Post by Chief Blur Buster » 24 May 2022, 17:48

Interesting talk here.

The comfort of various BFI implementations definitely varies a lot.

There's many kinds of BFI, some really comfortable, and some uncomfortable:
- Squarewave BFI (like strobe backlight)
- Non-integer BFI (pulse width adjustment)
- Rolling BFI
- Phosphor fade simulation
- Etc (like your algorithm suggestion)

They all have different comfort levels, pros/cons, etc. For example, a rolling BFI more or less guarantees a constant rate of photons hitting the eyeballs, which can be less harsh than global squarewave BFI.

It's important to note that there's a major difference between hardware BFI (full flash) and software BFI (rolling scanout).

This is because not all pixels refresh at the same time on LCD panels, as seen in High Speed Video of LCD Refreshing.

So, 1000fps high speed videos of flawed/good BFI implementations become quite useful.

Long term, I think CRT electron beam simulators (e.g. a 1000Hz display using 16 refresh cycles, each simulating 1/16th of the CRT electron beam sweep per 60Hz refresh cycle) will become the ultimate retro-look BFI, with exactly the same Hz-for-Hz comfort as a retro CRT, since the rolling scan combined with fadebehind can reach the lowest possible Hz with the least eyestrain.
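As a rough illustration of that electron-beam-simulator idea (my own sketch with arbitrary constants, not an actual implementation), here is how a 1000Hz panel could show ~16 sub-refreshes per 60Hz source frame, lighting one band at full brightness and fading everything lit earlier like phosphor:

Code:

import numpy as np

SUB_REFRESHES = 16   # ~1000 Hz panel / 60 Hz source
FADE = 0.35          # fraction of brightness kept per sub-refresh (arbitrary "phosphor" decay)

def simulate_frame(frame):
    """Yield the 16 sub-refresh images that emulate a rolling CRT beam for one source frame."""
    h, _ = frame.shape
    band = h // SUB_REFRESHES
    persistence = np.zeros_like(frame)
    for i in range(SUB_REFRESHES):
        persistence *= FADE                                   # fade what was lit before
        rows = slice(i * band, h if i == SUB_REFRESHES - 1 else (i + 1) * band)
        persistence[rows] = frame[rows]                       # "beam" lights this band at full brightness
        yield persistence.copy()

frame = np.random.rand(1080, 1920)
sub_images = list(simulate_frame(frame))   # what the 1000 Hz panel would show, in order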

Everyone perceives flicker very differently.

One major problem with doing things temporally differently is that you get temporal disjoint artifacts:
- Color wheel; you get rainbow effect (www.testufo.com/rainboweffect)
- Interlacing; you get combing artifacts (www.testufo.com/interlace)
- Multiscanning; you get zigzag artifacts.
- Etc

Even rolling scan can have scanskew, but sequential scanout is the lesser of evils;
- www.testufo.com/scanskew (view on 60Hz DELL to see what I mean)

Many attempted custom BFI algorithms had nasty motion artifacts until they were adjusted. An artifact-free BFI algorithm has to keep the black period as contiguous as possible to avoid motion artifacts. Whether global flash or rolling scan, you want to keep the pixel-switching timing differences between spatially adjacent pixels as small as feasible, without adding back the flicker disadvantages. It's a very delicate trade-off.

Fundamentally, over the long term, BFI and strobing are just a humankind band-aid. Ultra-high-refresh-rate sample-and-hold (blurless AND flickerless) is the way to go in the ultra long term, as determined by research. See the display research portal for some useful info.

I even say BFI and strobing needs to become obsolete someday 😊 -- see 1000 Hz Journey Article for reducing motion blur without needing any form of BFI / impulsing / strobing / phosphor

The "praise jesus" effect is from CRT fans who finally found a great strobed LCD (such as ViewSonic XG2431) that finally looked CRT motion clarity to them, for those who didn't mind the disadvantages. If you didn't grow up in the CRT days, this is much of a less of an effect.

The important thing is that Blur Busters isn't only about strobing -- as you'll see if you read the entire Display Research Portal and/or the 25 peer reviewed research papers I'm cited in. ;)
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Re: A better BFI (1) - Interlacing and Checkerboarding

Post by Colonel_Gerdauf » 24 May 2022, 19:31

A few things,

The reason I mentioned checkerboarding and interlacing as solutions is, first of all, that standard on-off half frames are already the overwhelming standard in OLED-type displays. Another reason is that this solution hits a very nice middle ground between enjoying the benefits and avoiding the drawbacks, while keeping display design very simple. Those other algorithms look nice, and might actually do what I want much better, but they introduce a level of design complexity that ruins the potential universality of BFI down the line.

I lived through the CRT era and was very hands-on with CRTs for the first twenty years of my life. While I guess I enjoyed my time with them, I did not miss CRTs at all when LCDs came around. Those things were unwieldy and a pain to move around, doing any meaningful calibration was a nightmare and a half, the way interlacing was done caused eyestrain, and the electron gun could do weird things whenever it wanted to. I remember a CRT that suffered what I would call "the slow green death". Likely due to atmospheric humidity at the time, the display started to show green borders and fringes that did not seem to go away, and then ZAP, it stopped working altogether. The last CRT I owned, which I still have stored somewhere in my home, would generate random white lines when you started up the display, and they would take a good few minutes to fade away to a normal state.

I honestly do not see HFR moving fast enough to get to multi-thousand-FPS levels in a mere decade. Not only is the increase in complexity closer to exponential than linear when everything else is taken into account, but there is also the matter of perceptions and "imposed" values, where nonsense such as 24Hz and 30FPS is born. I myself have strong personal disagreements about where the curve of diminishing returns sits as far as refresh rate is concerned. Maybe I am just being an old Boomer (I am actually a Millennial-Zoomer), or maybe I am too reliant on "Newtonian thinking", as a person here had pointed out, but I put a lot of weight on the limits of the human brain and its processing speed in relation to motion pictures. There comes a point where you have enough static images to process at one time that it induces a headache. I'd say that limit is around 250Hz, which is why I continue to see the recent 360Hz displays as marketing nonsense. Sure, I/O latency improves, but at that point it is basically just fighting over pennies.

And back to the topic of the "praise Jesus" effect. One counter-argument that is often thrown in my face is FPS-locked content. In that kind of scenario, would it not be better if BFI had a live contextual switch like G-Sync does? On that matter, what makes it so difficult to detect FPS-locked content taking up the screen and send a signal to the display to start using BFI?
Last edited by Colonel_Gerdauf on 28 May 2022, 12:29, edited 1 time in total.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: A better BFI (1) - Interlacing and Checkerboarding

Post by Chief Blur Buster » 24 May 2022, 20:44

I've answered this hundreds of times -- and demonstrated big Hz differences in classrooms, silencing audiences by showing that it is visible even in non-gaming apps.

60Hz-vs-240Hz and 240Hz-vs-1000Hz is visible to more than 90% of the human population, even in non-gaming use cases, when the correct content is demonstrated.

Image

Actually -- you'd be surprised if you scientifically experimented on some of the prototype displays that I have seen. See Properly Designing A Blind Test That >90% Of Humans Can See 240Hz-vs-1000Hz (non-Game Use Cases Too!)

Also, because I go to many conventions (example), I get to feast my eyes on many prototype displays, which guides my display research endeavours. I think you might want to read some of the Area 51 Forum Threads too.

This is more important for things like VR, where you need to simultaneously eliminate flicker/stroboscopics and motion blur (see Stroboscopic Effect of Finite Frame Rates as well as Display Motion Blur):

- You use flicker to eliminate motion blur (but you have the flicker problem)
- You use motion blur to eliminate flicker (but you have the motion blur problem)

This is a problem for creating a five-sigma-comfortable Holodeck (aka VR) where a display perfectly matches real life analog motion. You can't have your cake and eat it too, because of the artifacts shown below.

Image

Image

Image


There are many engineering paths by which even 8K 1000Hz will someday be a free part of a display, as we've spatially retina'd out. 4K was a $10,000 luxury in year 2001, but is now a $299 Walmart special. 120Hz is slowly mainstreaming (phones, tablets, consoles), and in a decade 240Hz will mainstream, etc. Even 240fps@240Hz browser scrolling has 1/4th the motion blur of 60fps@60Hz scrolling.

Refresh rate incrementalism is a big problem for Hz benefit visibility. We can't tell apart 144-vs-165, nor 240-vs-360, very well. 240-vs-360 is just a 1.5x motion blur difference, which gets throttled to only about a 1.1x blur difference because of LCD GtG slowness and microjitter from other sources.

Lots of tests miss this, as the forced display motion blur difference (above and beyond human vision) of 240Hz vs 1000Hz is like a 1/240sec SLR photo versus a 1/1000sec SLR photo, when doing framerate=Hz fast motion (of source material with no pre-baked-in motion blur such as a GPU blur effect or camera shutter blur).

Image

Yes, we can strobe to fix motion blur, but strobing is a problem -- some people still see high-Hz strobing, which makes it different from real life (important in building a perfect display for the de facto equivalent of a Star Trek Holodeck, where you have no blur compromises nor stroboscopic compromises, nothing above and beyond real life and the human brain). So what do you do to get zero blur and zero stroboscopics? If aiming for 1ms of motion blur at 1000 pixels/sec (1ms MPRT100%), you need to fill the whole second with 1ms unique frametimes with no blackness, aka 1000fps 1000Hz. This is because as you track moving motion, your eyes are in a different position at the beginning of a refresh cycle versus the end of that refresh cycle. So a pixel that stays stationary for 1/60sec, against analog-moving eyes, creates this problem:


1. Look at stationary UFO for a while. Observe its artifacts.
2. Look at moving UFO for a while. Observe its artifacts.

The same problem occurs even with an OLED/MicroLED display with 0ms GtG, since the frametime generates the motion blur -- analog-moving eyes smear the stationary pixels of a frame (a sample-and-hold refresh cycle), due to the artificial humankind invention of using stationary images to emulate moving images. You can't fix blur and stroboscopics simultaneously without dramatically raising frame rates and refresh rates.

That's not good enough for a five-sigma Holodeck, for example.

Many excuses say "GPU can't keep up" -- but the good news is that engineered solutions are coming.

You may be aware that NVIDIA is engineering custom DLSS variants for the 2030s that amplify frame rate by 4x-10x, so you could use 100fps feedstock to eventually generate 8K 1000fps UE5. This will happen far sooner this century than many think, because of dedicated AI-assisted framerate amplification silicon.

There are many cheap paths to 1000Hz displays that we have discovered (utilizing existing technology), but it will take time to be borne out. The problem is we have to go dramatically up the curve of diminishing returns. The MPRT(100%) difference of 60Hz vs 120Hz sample-and-hold persistence is 8.3 milliseconds, while the MPRT(100%) difference of 120Hz vs 1000Hz sample-and-hold is 7.3 milliseconds (assuming perfect framerate=Hz motion).
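The arithmetic behind those two numbers, as a quick sketch:

Code:

# Sample-and-hold persistence (ideal MPRT100%, frame rate = Hz) is simply 1/Hz.
def persistence_ms(hz):
    return 1000.0 / hz

print(persistence_ms(60) - persistence_ms(120))    # ~8.33 ms saved going 60 -> 120 Hz
print(persistence_ms(120) - persistence_ms(1000))  # ~7.33 ms saved going 120 -> 1000 Hz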

I will crosspost a jumpstart micdrop from Sensitivity Threshold: What is the Hz Limit of Human Eye?, to make sure we're not being distracted by mere refresh rate incrementalism or things like flicker fusion thresholds.

__________________


Many people misunderstand the different sensitivity thresholds, such as "Humans can't see above 75Hz" -- but that is only a flicker threshold. The purpose of this post is to show that there are extremely different orders of magnitude that refresh rate upgrades do address.

Even in a non-gaming context, one thing many people forget is that there’s many thresholds of detectable frequencies.

These are approximate thresholds (they vary by human), rounded off to the nearest order of magnitude for reader simplicity, to show how display imperfections scale.

Threshold where slideshows become motion: 10
This is a really low threshold, such as 10 frames per second. Several research papers indicate 7 to 13 frames per second, such as this one. This doesn't mean stutter disappears (yet); it just means it now feels like motion rather than slideshow playback.
Example order of magnitude: 10

Threshold where things stop flickering: 100
A common threshold is 85 Hz (for CRTs). Also known as the “flicker fusion threshold”. Variables such as duty cycle (pulse width) and whether there’s fade (e.g. phosphor fade) can shift this threshold. This also happens to be the rough threshold where stutter completely disappears on a perfect sample-and-hold display.
Example order of magnitude: 100

Threshold where things stop motion blurring: 1000
Flicker-free displays (sample and hold) mean there is always a guaranteed minimum display motion blur, even for instant 0ms GtG displays, due to eye-tracking blur (animation demo). The higher the resolution and the larger the FOV of the display, the easier it is to see display motion blur as a difference in sharpness between static imagery and moving imagery: blurry motion despite blur-free frames (e.g. rendered frames or fast-shutter frames).
Example order of magnitude: 1000

Threshold for detectable stroboscopic effects: 10,000
Where the mouse pointer becomes continuous motion instead of gapped. This is where higher display Hz helps (reducing the distance between gaps) and higher mouse Hz helps (reducing variance in the gaps). Mouse Hz needs to massively oversample the display Hz to avoid mouse jitter (aliasing effects). If you move a mouse pointer at 4000 pixels per second, you need 4000Hz to turn the mouse pointer into a smooth blur (without adding an unwanted GPU blur effect).
Example order of magnitude: 10,000
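To put the last two thresholds into numbers (my own restatement of the rule of thumb, not a vision model): on an ideal sample-and-hold display, speed divided by Hz gives both the eye-tracking blur width (tracked motion) and the phantom-array gap spacing (untracked motion, like a swept mouse pointer).

Code:

def pixels_per_refresh(speed_px_per_sec, refresh_hz):
    # Blur width for tracked motion, or gap spacing for untracked motion.
    return speed_px_per_sec / refresh_hz

print(pixels_per_refresh(1000, 1000))  # 1 px of motion blur at 1000 px/s on a 1000 Hz sample-and-hold display
print(pixels_per_refresh(4000, 1000))  # 4 px stroboscopic gaps for a 4000 px/s mouse pointer at 1000 Hz
print(pixels_per_refresh(4000, 4000))  # ~1 px: the 4000 px/s pointer finally reads as continuous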

An example:
Image
(From a lighting industry paper, but this has also been shown to hold for stroboscopics on large displays, including VR displays intended to mimic the real world)

More information can be found in Research Section of Blur Busters.

__________

In addition, you may wish to explore the 30 different TestUFO tests with millions of parameters, such as www.testufo.com/persistence or www.testufo.com/eyetracking (and if you have a 240Hz display, test www.testufo.com/blurbusterslaw ...) -- it is a fascinating educational experience for a scientist/researcher who's not yet familiar with display research.

The bottom line is that this is a worthy humankind journey, as combined ultra-high resolutions and ultra-high refresh rates will eventually not necessarily be expensive. Remember what I said about frame rate amplification technology earlier. H.264 video is essentially one full frame per second with 23 predicted frames in between, yet it looks visually and perceptually lossless.

Researchers are now discovering many ways to extrapolate/reproject new frames (based on things like Z-buffer information & high-frequency controllers), creating virtually lagless intermediate frames that are perceptually lossless, making it possible to amplify 100fps to 1000fps (or eventually even beyond), as necessary for the blur-free flicker-free sample-and-hold holy grail. It was only recently that parties such as NVIDIA discovered the geometric-upgrade requirements (e.g. 60Hz -> 240Hz -> 1000Hz -> 4000Hz) needed to keep the improvements fairly visible to the average layperson -- the Vicious Cycle Effect (higher resolution and FOV amplifies Hz limitations and vice versa) is one big part of this.

(In fact, I'm cited by NVIDIA in one of their research papers already (page 2 of Temporally Dense Raytracing), so I often correspond with NVIDIA researchers too...)
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: A better BFI (1) - Interlacing and Checkerboarding

Post by Chief Blur Buster » 24 May 2022, 21:15

Colonel_Gerdauf wrote:
24 May 2022, 17:31
Hello people,
That being said...

I agree, we definitely need forms of BFI for now, as a band-aid.
Unfortunately, various forms of BFI are the only way to reduce motion blur at today's frame rates and refresh rates.

For now, we need to strobe in VR headsets because sample-and-hold frame rates are too low to reduce blur via sheer frame rate. For example, the Oculus Quest 2 uses 0.3ms MPRT (0.3ms flashes), which would require (1000/0.3) = 3333fps 3333Hz to match if you wanted to do it in a flickerfree sample-and-hold display without impulse-driving.
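The arithmetic behind that Quest 2 number, for clarity: to match an X-millisecond strobe pulse on a flicker-free sample-and-hold display, the persistence (1/Hz) must be that short, so the required frame rate is 1000/X.

Code:

def equivalent_sample_and_hold_hz(pulse_width_ms):
    # Frame rate (= Hz) whose persistence equals the strobe pulse width.
    return 1000.0 / pulse_width_ms

print(equivalent_sample_and_hold_hz(0.3))   # ~3333 fps at 3333 Hz to match a 0.3 ms MPRT strobe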

But not everyone can use VR, because people can be distracted/eyestrained by motion blur (if not strobing) or by phantom array effects (if strobing).

Even on old computer monitors, 864 Hz PWM dimming still produced headaches, consistent with the lighting study paper that led the lighting industry to move to 20,000Hz ballasts to avoid xenon-strobe-style stepping effects. Remember -- it's not the direct flicker, but the artifacts that hurt the eyes of some people -- different people have different sensitivities / nausea / motion sickness from various kinds of artifacts (blur, stroboscopics, stutter, etc). Strobing at frequencies higher than the frame rate (including via low frame rates or via PWM dimming) can produce ugly motion artifacts that look like this:

Image

If you're familiar with CRT 30fps at 60Hz, the problem still persists at 60fps at 120Hz, and so on:

Image

But real life does not strobe -- thus VR should not strobe if it is to five-sigma-match real life with perfect VR comfort. A perfect Holodeck needs to be strobeless, so you need to go blurless sample-and-hold, and that requires insane frame rates at insane refresh rates. You can't have your cake (blur-free) and eat it too (strobe-free) unless you otherwise find a way to simulate analog motion (i.e. remove the frame rate) -- and the closest parallel to that is ultra high frame rates at ultra high Hz.

We whac-a-mole all the weak links; whacking 100% of them is, from our point of view, the retina refresh rate -- five-sigma display comfort for all kinds of niche display use cases.

But even along the path, 1000Hz vs 4000Hz can become visible in mundane cases such as www.testufo.com/map running at 4000 pixels/sec on a 4K display. 1000Hz sample-and-hold still generates 4 pixels of motion blur at 4000 pixels/sec (persistence blur of sample and hold), so 1000Hz isn't even the final frontier, given sufficient resolution that's still within human angular resolving resolution, combined with motion speeds that are still within human eye-tracking velocities.

So the bigger the FOV, the higher the resolution, and the more closely the display maxes out angular resolution, the more the Hz limitations are amplified. The retina refresh rate for VR headsets (16K 180-degree) is actually far in excess of 10,000Hz, especially since you want to add 1/10,000sec of GPU motion blur effect to eliminate stroboscopics -- otherwise fast headturns on an 8K-16K 180-degree VR headset could still stroboscopic-step (like the Stroboscopic Effect of Finite Frame Rates), and you want the GPU blur effect to be below human-detectability thresholds, etc. (the temporal equivalent of the Nyquist 2x sampling factor, essentially). So when you combine all the factors, it really, really pushes the retina refresh rate quite high if you max out the variables.

Regardless, it's a big area of research going on right now in multiple parts of the world!

Just as grandmas sometimes couldn't tell apart VHS-vs-DVD, it's easy to tell apart VHS-vs-8K. Likewise, in demonstration content, it's easy to tell apart 120Hz-vs-1000Hz, or 240Hz-vs-2000Hz, given sufficient variables (superlarge FOV retina-resolution sample and hold screen that's completely strobe free).

The holy grail of perfect VR that passes the VR Turing test (a VR-versus-ski-goggles blind test where you can't tell the difference) requires all weak links to be completely eliminated. Today that's a fairytale, but at least us display researchers have finally figured this out, and realized that Hz will be a long-term humankind journey.

Today, ~10% of the population (pick a number) can't even use VR because of eyestrain, even if there's no motion sickness (perfect 1:1 vertigo sync between VR and real life, like walking around a physical real room perfectly mapped to walking around a virtual room, to avoid vertigo issues -- i.e. avoiding roller coaster apps). This was traced to motion blur sickness/headaches (if you don't strobe) and to nausea from flicker / stroboscopics (if you strobe the headset). This still persists to this day on 120Hz headsets. Pushing the refresh rates helps a lot here.

Yes, the majority of people aren't as fussy or picky, and can be fine with a Quest 2.

But that excludes a large part of the population who get health issues with VR because the refresh rate limitation forces all of these band-aids (either blur or stroboscopics, take your pick; you can't fix both simultaneously without a de facto equivalent of analog motion, such as ultra fps at ultra Hz). And even the remaining 90% of the population still see temporal quality improvements from the refresh rate progress (even if it wasn't hurting their eyes/brains like the 10%). Win-win, in humankind benefit. But geometric jumps up the curve of diminishing returns are mandatory -- remember that 1/60sec vs 1/120sec is an 8.3ms difference in motion blur, but you need 1/120sec vs 1/1000sec to create a 7.3ms difference in motion blur (the same magnitude of jump). That is why an ultra-dramatic geometric jump is needed, see?

Regardless...

TL;DR: This is a long-term endeavor of whac-a-mole against all possible display weak links. Five-sigmaing a display to perfectly match real life is going to be a long-term humankind endeavour. It requires a semblance of analog motion with all weak links below human detectability. To pull this off with stationary pixels (spatially mechanically moving pixels being an engineering impossibility) and a finite refresh rate pipeline will require quintuple-digit refresh rates and frame rates, when maxing out FOV and maxing out angular resolving resolution (e.g. 16K 180-degree VR).

It may not be until the 2030s before we get 8K 1000fps UE5 quality, which still isn't retina in some cases -- but it is a good stepping stone, achievable with known engineering paths in less than ten years (barring supply chains) -- it's plainly clear that many researchers I know are engineering towards that.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Re: A better BFI (1) - Interlacing and Checkerboarding

Post by Colonel_Gerdauf » 25 May 2022, 10:01

Like I said, there are several kinds of setbacks that need to be taken into consideration before we can push through to 8K 1kHz displays:

- limitations of connectors. This gets into the limits of the laws of physics at this point: how much data can really be sent over copper at one time? And fibre optic remains impractical for most
- making the additional feature sets (like HDR and VRR) maintain compatibility at the same ratios
- limitations of display types. As it stands, people like Wendel from L1T have remarked on the limitations of IPS-type panels at HFR above 120Hz. Even TN-types have latencies and B2W response times that make proper 1kHz impossible
- thermal and compute speed considerations
- like you said, the paradox of impressions when it comes to incrementalism. At the same time, though, from a normal person's point of view, the "proper" advancement of refresh rates would take too long to bother tuning in

User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

A better BFI (2) - timings

Post by Colonel_Gerdauf » 25 May 2022, 14:13

And here we are at Episode 2 of why BFI as many know it is horrible and needs significant design improvements if it is to have any place as a motion blur band-aid until HFR finally reaches kilohertz levels.

Previously, I discussed a per-phase solution, such as interlacing and checkerboarding, that nicely balances motion blur against flickering. Now let's talk about a different issue: how BFI is timed.

Right now, BFI is declared as a clock such as 120Hz or 240Hz. While this is "simple" in design, it makes implementation and forwards compatibility a bit messy, to say the very least. It has to be timed correctly in relation to pixel clocks, it needs to be a nice-number ratio, and most importantly it needs to be dialed in separately from everything else, which can introduce unwanted variability in how well BFI does what it was intended to do.

Image

So... why not have BFI be set as a time (milliseconds) relative to the time period of a frame? For one, because it is tied to the frame instead of to master clocks, it becomes considerably easier to make it work alongside VRR, and especially G-Sync. That would curbstomp the nagging I keep hearing about "VRR or BFI; pick one" in relation to gaming.

Let's get into detail for a bit: what do I mean?

Imagine for a second a single frame in a 100Hz instance; that frame takes about 10ms. We can have the screen blank out for a 2ms interval between frames. That is a 1/5 ratio, or a "500Hz" BFI in the traditional definition. Now what happens when the clock gets changed to the olde 60Hz (16.6ms)? If the time stays static, the ratio becomes 3/25 (about 1:8.3). If the time is scalable, the blackout time becomes 3.3ms. The former is visually simpler for end users who want to manually dial in settings via an OSD, while the latter is computationally simpler and limits the jarring effects of changing blackout times.
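A minimal sketch of the two modes described above (my own illustration, not an existing display API), reproducing the numbers from this example:

Code:

def blank_interval_ms(refresh_hz, fixed_ms=None, ratio=None):
    frame_ms = 1000.0 / refresh_hz
    if ratio is not None:            # scalable mode: constant duty cycle across refresh rates
        return frame_ms * ratio
    return fixed_ms                  # fixed mode: constant blank time regardless of refresh rate

print(blank_interval_ms(100, ratio=1/5))       # 2.0 ms blank out of a 10 ms frame
print(blank_interval_ms(60, fixed_ms=2.0))     # stays 2.0 ms -> duty ratio drops to 3/25
print(blank_interval_ms(60, ratio=1/5))        # scales to ~3.33 ms out of a 16.7 ms frame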

Image
---
Image

You may be asking: isn't there also the factor of blankout time inconsistencies? Actually, that is not a concern. With thin blankout times, the differences in time with scaling should not be noticeable. And in either case, in an ideal design the timeout ratio, as well as the choice of fixed vs scalable, would be up to the end user. So there would no longer be the one-size-fits-all situation that currently plagues BFI as implemented on displays. People can CHOOSE how they want the BFI to function, and having it timed based on the active frame makes that many magnitudes easier.

So you can have a live scalable/fixed timer for blanking, set as a ratio or percentage, depending on which is easier for a user to compute and understand. It can happen per-frame, and can easily scale with frame rates or frame rate changes in the window. That alone should make compatibility with VRR a complete non-concern.
Last edited by Colonel_Gerdauf on 05 Jun 2022, 14:15, edited 2 times in total.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Better BFI Algorithms (Interlacing / Checkerboarding / Timings / Etc)

Post by Chief Blur Buster » 26 May 2022, 01:00

Some strobed monitors already scale their pulse width proportional to Hz.

We already give users the ability to adjust this!

That's why I designed Strobe Utility to pull all of this off.

Image

The ViewSonic XG2431 is one of the most flexible models, with the ability to adjust the BFI pulse anywhere from 1% to 40% of a refresh cycle, in 1% increments. It is available for download at www.blurbusters.com/xg2431

I even show animations that demonstrate how to tune, and in what sequence, with instructions on how to create larger blanking intervals (to better hide LCD GtG between refresh cycles), and provide a built-in motion test pattern (similar to www.testufo.com/crosstalk but built into Strobe Utility) that allows any average user to eyeball-tune the strobe quality orders of magnitude more easily than with oscilloscope-photodiode equipment!

XG2431 is one of the first desktop LCD panels to achieve perfect zero strobe crosstalk for top/center/bottom, assuming you also use large vertical totals (ultralarge VBI) to hide LCD GtG in VBI -- we already do this.

With this utility, you can have strobe pulse widths from 1% to 40% of a refresh cycle at any refresh rate. And I had ViewSonic make it flexible. Unlike monolithic NVIDIA ULMB, you can strobe at any refresh rate you choose from 59Hz through 241Hz in 0.001Hz increments, just by creating a custom EDID (custom refresh rate) either via NVIDIA Control Panel or via ToastyX CRU, so you aren't even stuck with manufacturer refresh rates. You choose your refresh rate sweet spot (want 85Hz? Done. Want 97.5Hz? Done. Etc.). And Strobe Utility allows you to produce clearer LCD motion than many strobe backlights such as NVIDIA ULMB.

So unbeknownst to you, we're already giving power users this BFI tuning feature.

_____

In fact, Strobe Utility has existed on some monitor models for almost a decade (BenQ announced Blur Busters Strobe Utility on March 21st, 2014 -- press release crediting us). Most manufacturers do not give you adjustable BFI, but we strive to do that. BenQ's version of Blur Busters Strobe Utility is still available for download at www.blurbusters.com/strobe-utility even though it is not nearly as flexible as the XG2431 version (narrower pulse width range, no overdrive-gain adjustment). Monitors tunable by Strobe Utility are certainly easier to reduce flicker with (at a specific Hz) than monitors that do not support Strobe Utility.

Here's the older version (a variant of the 2014 version designed for BenQ monitors):

Image

___

We already use large VBIs to reduce strobe crosstalk

Most esports monitors supporting Strobe Utility already support ultra-large blanking intervals (Large Vertical Totals / Quick Frame Transport) to more easily hide LCD GtG between refresh cycles.

We already utilize tricks like ultralarge vertical blanking intervals (e.g. a 4500-scanline refresh cycle with 1080 visible and 3420 in the VBI) to scan out the refresh cycle faster, then let LCD GtG settle for a longer time, before flashing the backlight, before the new refresh cycle starts, and so on.
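Rough math for that large-VBI example (illustrative numbers only): with 1080 visible lines padded to a 4500-line vertical total, the visible portion of a 100Hz refresh cycle is delivered in a fraction of the frame period, leaving the rest as dark time for LCD GtG to settle.

Code:

def visible_scanout_ms(refresh_hz, visible_lines, vertical_total):
    # Time spent delivering the visible lines; the remainder of the frame is VBI.
    frame_ms = 1000.0 / refresh_hz
    return frame_ms * visible_lines / vertical_total

print(visible_scanout_ms(100, 1080, 4500))   # ~2.4 ms of scanout, leaving ~7.6 ms of VBI settle time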

This finally achieved the perfect zero-crosstalk LCD, once we utilized the refresh rate headroom trick (e.g. scanning out a 100Hz refresh cycle in a mere 1/360sec), allowing all real-world LCD GtG to be 100% complete for all color combinations, in the dark cycle between strobe flashes.

It's also a trick utilized by the Oculus Quest 2 VR headset, as recently shown in a DisplayWeek 2022 PowerPoint slide. See Meta Revealed The Detailed Specs Of Quest 2's LCD Display on UploadVR, in the "Quest 2 Architecture" slide, where there are big gaps between the scanouts (aka a large blanking interval).

However, we actually use even larger blanking intervals on some displays, as big as 3x-4x the vertical visible resolution. Sometimes it's done internally, but other times it needs to be created via a Custom Resolution Utility (if we were not able to convince the manufacturer to do it).

Fortunately, doing large VBIs on the GPU side has another beneficial effect: input lag reduction at a low Hz, since frames are also transmitted over the cable faster (also known as Quick Frame Transport), because the visible scanlines are front-loaded at the beginning of the refresh cycle, followed by the long blanking interval right after.

So it kills two birds with one stone: clearer LCD motion with less double-image crosstalk (caused by LCD GtG overlapping refresh cycles), and less input latency (thanks to faster refresh cycle transmission over video cables). It's easier if you intentionally buy more Hz than you need (e.g. a 240Hz strobed LCD) and intentionally run it at a lower refresh rate (e.g. 100Hz), since the max Hz sometimes dictates the fastest scanout velocity the panel is capable of (e.g. a 100Hz refresh cycle scanned out in 1/240sec).

Most modern esports monitors now do synchronous cable-scanout-to-panel-scanout (raster scan) with subrefresh latency, using only rolling-window processing (a few pixel rows, for color processing and DisplayPort/HDMI packetization dejitter). In the past, LCDs needed to pre-buffer a whole refresh cycle of signal before beginning to refresh the panel; today, pixel rows are streamed straight from the cable to the panel just a few horizontal-scanrate cycles later. The bonus of these modern horizontal-scanrate-multisync LCD panels is that it is easy to change the scanout velocity of a low Hz simply by using a Custom Resolution Utility (custom EDID), to force creative things like 100Hz refresh cycles scanned out in 1/240sec with a longer VBI "pause" (for LCD GtG settlement) between refresh cycles.

We are also working on a better BFI achievable with current parts & tech

However, BFI is limited by the fact that many panels are manufactured with a global edgelight that can't be individually controlled (global strobe flash). A global flash is harsher at the same Hz than a rolling scan, because a rolling scan (in sync with LCD scanout, as seen in high speed videos) keeps photons hitting the eyeballs at a more constant rate, even if the photons are coming from elsewhere on the screen.

So Hz-for-Hz, a rolling strobe scan is less harsh on the eyes than a global strobe flash. For a global flash, you need a little extra Hz to compensate (at the same pulse width per pixel). MiniLED local-dimmed backlights are perfect candidates for a scanning-strobe backlight. In slow motion, they more accurately emulate a CRT tube:

Scanning strobe is something we are experimenting with (behind the scenes) on the newer MiniLED backlight arrays already. We plan to have a Strobe Utility for it (phase, window size, maybe a fadebehind zone to simulate phosphor, etc.) -- depending on how flexible the MiniLED array timing controller is.
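Purely as an illustration of the scheduling idea (arbitrary parameters, not actual firmware or the planned utility), a rolling scanning-strobe schedule could look like this: each backlight zone row flashes a short pulse a fixed delay after the LCD scanout passes it, so GtG has settled before the flash.

Code:

def rolling_strobe_schedule(refresh_hz, zone_rows, pulse_ms, gtg_settle_ms):
    """Return (zone_row, flash_start_ms, flash_end_ms) within one refresh cycle (wrap-around simplified)."""
    frame_ms = 1000.0 / refresh_hz
    schedule = []
    for z in range(zone_rows):
        scanout_time = frame_ms * z / zone_rows         # when LCD scanout reaches this zone row
        start = (scanout_time + gtg_settle_ms) % frame_ms
        schedule.append((z, start, start + pulse_ms))
    return schedule

for zone, start, end in rolling_strobe_schedule(120, zone_rows=16, pulse_ms=1.0, gtg_settle_ms=3.0):
    print(f"zone row {zone:2d}: flash {start:5.2f} ms -> {end:5.2f} ms")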

The new MiniLED backlight arrays at high LED counts (>1000 LEDs) finally produce sufficiently good quality for a rolling strobe -- but the Chinese-made MiniLED timing controllers need to be worked around with some additional electronics. Eventually this will be solved within the next few years.

Here are some images from this thread: viewtopic.php?f=7&t=5890

Image

Image
(image credit: this thread)

But our research vastly predates this. Many manufacturers attempted it and failed to produce anything of good quality because of many technological limitations (from GtG not being fast enough to fit between the scanning strobes, to things like light bleed). In fact, we were researching scanning backlights more than 10 years ago (we still own the domain name www.scanningbacklight.com -- look at the domain registration date), and they still have major problems like light leakage from on-segments to off-segments (the blooming problem) that generates worse strobe crosstalk than a global strobe flash -- and strobe crosstalk can be an uncomfortable motion artifact for some.

In fact, I wrote a very old 2014 article, Electronics Hacking: Creating A Strobe Backlight, that outlined major problems long before the current strobe improvements were accomplished. Ten years ago, in 2012, I wrote the Scanning Backlight FAQ, which is now hosted by Blur Busters.

For years, global strobe was a painful band-aid, but newer MiniLED arrays have finally solved a lot of the problems I wrote about in that 2014 article -- and it is just a simple matter of modifying the timing controller of the MiniLED array to permit more flexibility (and still be Strobe Utility adjustable too, to boot -- like an adjustable-phosphor-decay CRT!)

For the same Hz, from worst to best:
  • Unadjustable global BFI = worst flicker eyestrain
  • Adjustable global BFI = less flicker eyestrain
  • Unadjustable rolling BFI = even less flicker eyestrain
  • Adjustable rolling BFI = least flicker eyestrain
This is what science has shown, because rolling BFI keeps photons hitting human eyeballs continuously, which is a less bothersome flicker than global simultaneous flicker. Global strobe had to be used because technological limitations prevented inexpensive high-quality scanning strobe.

1000+ LED-count MiniLED local-dimmable backlight array sheets are machine-manufactured (in a way remarkably similar to the 3072-LED-chip 32x32 RGB jumbotron modules costing only $10 per module on Alibaba). As manufacturing quantities go up, they are falling rapidly in price at the factory.

Those 1000+ LED MiniLED local dimming backlights, currently in many 4-figure-priced monitors, will filter down to 3-figure-priced monitors later this decade. As supply chains start to loosen up, this becomes easier. These MiniLED arrays can easily be commandeered (with the right reprogrammed custom timing controller) to do a CRT-style rolling strobe! So FALD backlights turn into hardware-based CRT simulators, in a manner of speaking -- and it works surprisingly well in laboratory tests. Existing tech: win, win.

For now, giving power users unlimited adjustment of BFI via our multiple Blur Busters Strobe Utilities, now available for certain models from 3 brands (soon 4 or 5) of monitors, is a band-aid solution that can be tuned to have less BFI flicker than monitors that do not support Blur Busters Strobe Utility.

Awareness Is a Big Problem

Everyday users don't even know about Strobe Utility, so I'm not surprised you might have been unaware we've been giving users this BFI flexibility on certain monitors (albeit less than 5% of monitor models on the market support a Strobe Utility, unfortunately). So randomly buying a monitor with BFI won't get you the sweet professional Blur Busters features that reduce flicker visibility...

This leads to complaints by many people who don't realize better and more flexible BFI exists.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Re: Better BFI Algorithms (Interlacing / Checkerboarding / Timings / Etc)

Post by Colonel_Gerdauf » 26 May 2022, 09:00

That gets to a big part of my problem -- yes, Strobe Utility exists, and thank you for your work on that. But it remains a niche item that requires a LOT of advertising to get through, and it should be mentioned that the level of advertising required has a very nasty precedent for producing off-putting cultish behaviours.

"This leads to complaints by many people who don't realize better and more flexible BFI exists."

And a big part of that stems from the fact that the popular displays using BFI (from big-name brands in particular) are using the hard-locked "basic BFI" that I am seeing all around me. You have yourself mentioned that the software only works on a limited number of BFI panels. So there are more layers of awareness barriers that need to be broken here.

While Strobe Utility is a niche tuning software that can be downloaded much like CPU-Z or f.lux or Special K, in my eyes that is honestly pointless, as I would have expected these customizations to be part of the package via display OSDs or USB interfaces.

It is why people generally ignore ULMB, as it is hard-coded, or dismiss G-Sync for similar reasons. In the latter case it is quite an unfortunate catch-22, as FreeSync is notorious for its inconsistencies between panels, which is the very last thing you want when you are trying to make a tech known to the public.

These kinds of things need to be a standard, not some optional extra that some lucky, well-informed person gets to tune on his/her own. Hell, in relation to the second part of my rambles, I still always see BFI strobing measured in Hz, which is about the easiest way to send BFI tech to the boomer-tech graveyard. Where are the relative times? Where are the ratios?

I do not subscribe to the "one step at a time" approach in this particular case. You either hit the nail on the head the first time, as with G-Sync/ProMotion, or the tech goes through several revisions and versions that only complicate matters, as with FreeSync and high-polling devices. The former will become a "this is the absolute bare minimum" standard in due time; the latter is doomed to rot a painful death.

While the rolling BFI does look nice, and gives a bit of the CRT feel, one thing I am very unhappy with is the fact that >50% of the display is blanked at any instant, and it remains a large contiguous area. I still feel that to properly limit the stutter and flicker effects, the "off" pixels need to be at most 50% of the total screen, and evenly spread out.

User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

A better BFI (3) - tearline mitigations

Post by Colonel_Gerdauf » 26 May 2022, 19:11

And we are on episode 3 of this saga.

So I have been thinking about solutions for better use of BFI tech, and more importantly, about what actually strikes the balance between the core benefits of BFI and the core problems with standard fullscreen BFI... If we are going to use BFI at all, then the least that could be done is to use it to blank out the image processing.

So now how does this one work?

Rather simple: just have the screen blank out while a new frame is being processed. This is rather tricky to implement correctly, so here is how it goes. You have the initial signal from the cable that a new frame is coming, and that the data stream has started.

Remember that copper is limited in how much data can be transmitted at one time, and because these are serial connectors, the cable sends frame data to your display a chunk at a time. This often presents itself as scan lines. You should be familiar with this if you understand how VSync or G-Sync function in the name of preventing screen tearing: the data from the current frame and the next frame get jumbled together due to speed differences between the GPU and display. So if the screen blacks out when the initial new_frame signal is received, and turns back on the moment the full frame data is on the display, that would eliminate screen tearing completely. It would also make the display work as a true progressive signal instead of progressive scan; one full image, then the next, then the next, and so on.
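Here is a small sketch of the behaviour being proposed (hypothetical event names; this is not an existing display or driver API): keep the panel dark from the moment a new frame starts arriving over the cable until its last scanline has been written, then show the completed frame.

Code:

class BlankDuringScanout:
    def __init__(self, total_scanlines):
        self.total_scanlines = total_scanlines
        self.lines_received = 0
        self.backlight_on = True

    def on_new_frame_signal(self):       # start of frame transmission over the cable
        self.lines_received = 0
        self.backlight_on = False         # blank while the frame is in flight

    def on_scanline_received(self):       # one more line written to the panel
        self.lines_received += 1
        if self.lines_received >= self.total_scanlines:
            self.backlight_on = True      # whole frame present: show it

panel = BlankDuringScanout(total_scanlines=1080)
panel.on_new_frame_signal()
for _ in range(1080):
    panel.on_scanline_received()
assert panel.backlight_on                 # backlight returns once the full frame has arrived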

As an ending word, I should mention that I do understand similar solutions may already exist, and this one in particular may already be used somewhere. But that is not the point of the thread; the main point I want to deliver is that to bring BFI as a tech forward, the features I am specifically mentioning should be the absolute bare minimum. Sadly, they are not. As it stands, the "industry standard" remains full-screen BFI set to clocks not related to the active frame. And too many BFI systems, particularly in the scope of OLED, give this option strictly, with zero room for customizability. This kind of BFI standard I am strongly against, and as I said previously, if you want to introduce a tech, you need to ace it the first time; you need to ensure there are no compromises that a regular user is forced to make decisions about.

BFI as it stands right now is a compromise, as it often comes with very painful drawbacks like flickering. People may dismiss that as much as they wish, but remember that people's tolerances to flickering are very different and oftentimes quite low. On top of that, the "standard design" of BFI is so comically lazy that it somehow manages to be incompatible with the likes of G-Sync, when there is absolutely no legitimate reason why BFI should not automatically be 100% compatible with VRR as a whole. I have very limited tolerance for this kind of tech, and if something like BFI is introduced so sloppily, then I have a very strong desire to ensure that it dies a quick 3DTV-like death.
