Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers & Advanced Display Articles on Blur Busters. The masters on Blur Busters.
User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

A better BFI (4) - ordering

Post by Colonel_Gerdauf » 27 May 2022, 15:49

And now episode 4.

One of the simplest ways to help make BFI forwards compatible as well as VRR compatible is to strictly define the blanking as pre-frame, and completely prohibit the use of post-frame or cross-frame blanking.

What are pre-frame blanking and post-frame blanking?

Simple: it is the question of whether you want the blanking done at the start of the frame (pre) or at the end of the frame (post). You can also have blanking that straddles frames (cross), but as I explained previously, this is less than ideal: it requires hard-locked, hard-coded parameters that restrict its usability to a very narrow window, forcing users either to live with the drawbacks or to disable it altogether.

Let me give you a simple visual of what that would look like.

Assuming binary code, this would be pre-frame blanking:
0111-0111-0111

This way the blank is sent and completed before any scheduled change of refresh or frame rate becomes a concern. This would be ideal, as any sudden changes dictated by G-Sync would barely affect the viewing experience. It behaves almost as if the refresh were fixed, and it is not particularly computationally demanding on the monitor side.

Meanwhile, this would be post-frame:
1110-1110-1110

This one would require a fixed refresh, as any sudden change would result in the loss of a blank, introducing visible and uncomfortable inconsistencies that disrupt the experience and can cause nausea, especially for those with a low tolerance to flickering.

And this is cross-frame:

111-0-111-0-111

This would bring back many of the original issues discussed in this thread, and would strictly rely on a separate clock that must keep a clean integer ratio with the refresh clock. This method is very strongly condemned.
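
Purely as an illustration (the helper below is hypothetical, not anything from a display spec), the three orderings above can be generated as bit strings:

```python
# Minimal sketch: build the bit patterns above for N frames, where each frame
# is 4 refresh "slots" and '0' marks the blanked slot.
def blanking_pattern(order: str, frames: int = 3, slots: int = 4) -> str:
    on = "1" * (slots - 1)
    if order == "pre":      # blank first, then display: 0111
        frame = "0" + on
    elif order == "post":   # display first, then blank: 1110
        frame = on + "0"
    elif order == "cross":  # blank straddles the frame boundary
        return "-0-".join(on for _ in range(frames))
    else:
        raise ValueError("order must be 'pre', 'post', or 'cross'")
    return "-".join(frame for _ in range(frames))

print(blanking_pattern("pre"))    # 0111-0111-0111
print(blanking_pattern("post"))   # 1110-1110-1110
print(blanking_pattern("cross"))  # 111-0-111-0-111
```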

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Interlacing / Checkerboarding / Timings / Etc)

Post by Chief Blur Buster » 27 May 2022, 18:59

Colonel_Gerdauf wrote:
26 May 2022, 09:00
While Strobe Utility is niche tuning software that can be downloaded much like CPU-Z or f.lux or Special K, in my eyes that is honestly pointless, as I would have expected these customizations to be part of the package with display OSDs or USB interfaces.
Some displays do this already.
I convinced NVIDIA to add "ULMB Pulse Width" to their menus.
But it's a more limited-range adjustment than Strobe Utility.
Colonel_Gerdauf wrote:
26 May 2022, 09:00
These kinds of things need to be a standard, and not some optional thing that some lucky-informed person gets to tune on his/her own. Hell, in relation to the second part of my rambles, I still always see BFI strobing measured in Hz, which is about the easiest way to send the BFI tech to the boomer-tech graveyard. Where are the relative times? Where are the ratios?
That's why Blur Busters came up with the Blur Busters Approved Logo Programme (press release). It's not easy advocacy to do behind the scenes, but we're working on it.

Colonel_Gerdauf wrote:
26 May 2022, 09:00
I do not subscribe to the "one step at a time" approach in this particular case. You either hit the nail on the head the first time, as with G-Sync/ProMotion, or have the tech go through several revisions and versions that only complicate matters, as with FreeSync and high-polling devices. The former will settle into a "this is the absolute bare minimum" standard in due time; the latter is doomed to rot a painful death.
Unfortunately, I must refer to the xkcd comic about proliferating standards: https://xkcd.com/927/

It's impossible to get the industry to unify behind a standard except via higher refresh rates. The great thing is that 1000fps 1000Hz allows custom BFI patterns to be created via software instead of by the display vendor. Sheer generic refresh rate offloads the responsibility further upstream.

Needless to say, multiple things are being done in parallel -- the multiple BFI approaches and the multiple Hz-based approaches. We're only a small company, even if we have had an outsized influence (compared to the status quo if Blur Busters had never existed).

Colonel_Gerdauf wrote:
26 May 2022, 09:00
While the rolling BFI does look nice, and gives a bit of the CRT feel, one thing I am very unhappy with is the fact that >50% of the display is blanked at one instance of time, and it remains large-area. I still feel that to properly limit the stutter and flicker effects, the "off" pixels need to be at most 50% of the total screen, and evenly spread out.
The laws of physics do not allow further blur reduction with your ideas.
It's necessary to have a long contiguous OFF period relative to ON time, whether with a global or a rolling approach.
To reduce motion blur by 75% requires pixels (and near adjacent pixels) to be contiguously off for 75% of the time.

Unfortunately, this common suggestion has been tested many times and shown to be a guaranteed DOA -- you cannot have your cake and eat it too. You can spread it out a bit (e.g. phosphor decay, or tightly bunched multi-pulses like plasma), but it is still far more tightly coupled than your algorithm ideas.

The laws of physics dictate that a pure 75% blur reduction (assuming GtG=0) without introducing odd artifacts requires one of:
- Global strobe where pixels (and spatially adjacent pixels of the same frame) are visible only 25% contiguously within a refresh cycle (global strobe approach)
- Rolling strobe where pixels (and spatially adjacent pixels of the same frame) are visible only 25% contiguously within a refresh cycle (rolling strobe approach)
- Quadruple frame rate at quadruple refresh rate (sample-and-hold approach)

If you try to spread out illumination as you describe, artifacts are guaranteed to pop up. The more time separation between spatially adjacent pixels, the more motion artifacts appear. Unfortunately, the laws of physics are a hard wall. I can even create custom TestUFO tests to prove the point; there is no solution, and thousands of researchers have already tried -- your idea is a provable nonstarter. The time differentials between adjacent pixels have to stay tight. Spreading them out worsens the blur or adds other motion artifacts.

You can use unconventional approaches like longer pixel pulses for brighter pixels and shorter pixel pulses for darker pixels (one contiguous pulse per frame, with the black period contiguous as well, for all spatially adjacent pixels).

You can use a tight time offset (like the time difference between pixel rows, e.g. 1/67000sec at a 67KHz horizontal scan rate) and the artifacts stay minor -- just scan skew: www.testufo.com/scanskew ... Anything spread out further than adjacent pixel rows (or adjacent pixel columns for a sideways scan) is always worse than scan skew. Checkerboarding, interlacing (of any form), and random dither (of any form) have all produced worse artifacts than contiguous illumination followed by contiguous blackness (both spatially and temporally).

If you have a 144Hz-240Hz display, you can also check BFI motion blur physics for yourself at Custom Configurable Software-Based Black Frame Insertion Demo. Assuming LCD GtG is insignificant (or you're using a high-Hz OLED), motion blur is linearly proportional to the pixel visibility time. Any other BFI pattern is necessarily worse than this.

One of the simplest ways of creating an artifact is double-strobing (like a CRT showing 30fps at 60Hz). The same problem happens for a 60fps strobe at 120Hz, and so on.

[Image: double-image artifact from double-strobing the same frame]

This remains true for 60fps on strobed 120Hz LCD
This remains true for 120fps on strobed 240Hz LCD
Etc.

In fact, I can emulate the CRT 30fps-at-60Hz double-image effect in a software TestUFO simulation of double impulsing -- check the second UFO of the three to see the double-image effect. It is best tested on a display that is 120Hz or higher (so it stops flickering, thanks to your flicker fusion threshold).



It uses 1/4 of the refresh rate to simulate a double strobe (FRAME-OFF-REPEAT-OFF), so simulating a 30fps-at-60Hz CRT requires 120Hz at minimum. Otherwise you only see it at 15fps, because this is done in software at the refresh granularity of sample-and-hold. Either way, it is an excellent demonstration of how any kind of multi-impulsing of the same frame (hardware or software) creates artifacts.
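
As an illustrative sketch (not TestUFO's actual source), the scheduling idea can be written out like this, assuming one simulated content frame spans four real refresh cycles:

```python
# Illustrative sketch: schedule a software double-strobe at refresh granularity.
# Each simulated content frame occupies 4 real refresh cycles: FRAME, OFF, REPEAT, OFF.
def double_strobe_schedule(num_content_frames: int):
    """Yield (refresh_index, content_frame_or_None); 4 refreshes per content frame."""
    pattern = ("show", "off", "show", "off")   # FRAME-OFF-REPEAT-OFF
    refresh = 0
    for frame in range(num_content_frames):
        for step in pattern:
            yield refresh, (frame if step == "show" else None)
            refresh += 1

# On a 120Hz display: 120 refreshes / 4 steps = 30 content frames per second,
# each flashed twice -- a software emulation of a 30fps-at-60Hz CRT double strobe.
for refresh, frame in double_strobe_schedule(num_content_frames=2):
    print(refresh, "black" if frame is None else f"frame {frame}")
```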

We've done thousands of hardware tests of different BFI patterns, and our findings are consistent with those of other researchers: you cannot have 75% motion blur reduction without the pixel (including its adjacent pixels) staying black for 75% of contiguous time. Only tiny time offsets are possible (e.g. the time difference between pixel rows, i.e. one period of the horizontal scan rate) without artifacts.

You can blend it out a bit (e.g. fade-in and fade-out, like simulated phosphor decay) with little penalty on blur -- only a slight worsening of blur for a massive reduction in eyestrain. But once you give spatially adjacent pixels very different pulse phases, nasty motion artifacts start to pop up.

Mathematically, there is provably no non-sequential-scanout method better than a rolling scan, unfortunately. It necessarily has to be contiguous and PWM-style, due to the laws of physics. You can soften the leading and trailing edges into a fade-strobe or a fade-scan (like CRT phosphor), but you cannot change the scan pattern without guaranteeing artifacts even worse than phosphor decay.

It's a function of the fact that human eyes are analog. Your eyes are in a different position at the beginning of pixel visibility than at the end of pixel visibility. That necessarily stamps the flicker into your eyes. If you spread the flicker phases of spatially adjacent pixels too far apart, each pixel gets stamped onto your retina at a noticeably different position, because more time has elapsed in between.

So to avoid this, you need contiguous on-time and contiguous off-time per unique image on a per-pixel basis, including each pixel's immediate spatial neighbours. This has been scientifically shown to be mandatory.

(In fact, sequential scanout is not artifact-free either, due to scan skew. Even that tiny time difference between adjacent pixel rows (1/67000sec) creates a human-visible skew artifact at www.testufo.com/scanskew when viewed on a 60Hz DELL or 60Hz HP monitor.) Any sequential scanout (whether sample-and-hold scanout or rolling-scan BFI) produces the same scan skew artifact, but it is the most minor display artifact possible for a non-global refreshing approach. That's why a rolling scan is the most artifact-free non-global display refresh mathematically possible.

(Not to mention, it conveniently fits the raster display pipeline workflow of serializing a 2D image over a 1D cable onto a row-column-addressed display -- a workflow that still occurs on DisplayPort and HDMI cables; even with compressed streams, pixels are delivered left-to-right, top-to-bottom. Converting sequential scanout to non-sequential necessarily adds input lag, because you have to fully buffer a slow-scanned 1/120sec signal before generating DLP subfields, plasma subfields, or the subfields of any other temporal-dither-color display technology.) Even on plasma, they still needed to tightly bunch the bright pulses and follow them with a long dark period to properly reduce motion blur -- i.e. a long dark time for individual pixels (and their spatially adjacent pixels).

Regardless, it is an amazingly simple universal explainer that catches all temporal imperfections (interlacing artifacts, color-sequential artifacts, double-strobe artifacts, or any other artifact generated by a non-contiguous representation of a pixel on a per-frame basis).

If this still isn't clear, then you need to study the laws of physics as they pertain to displays -- how analog-moving eyes interact with the non-moving pixels of a refresh cycle. At 2000 pixels/sec motion, 1/60sec worth of eye movement spreads across 33 pixels. If you don't bunch your bright pixels tightly both temporally AND spatially, with longer dark periods in between, you fail to reduce motion blur in an artifact-free way.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: A better BFI (4) - ordering

Post by Chief Blur Buster » 27 May 2022, 19:48

Colonel_Gerdauf wrote:
27 May 2022, 15:49
One of the simplest ways to help make BFI forwards compatible as well as VRR compatible is to strictly define the blanking as pre-frame, and completely prohibit the use of post-frame or cross-frame blanking.
That's correct -- one flash per unique frame is what is needed.

The problem is the monitor doesn't know when the game will finish rendering, and sometimes a frame takes a long time (e.g. longer than the flicker fusion threshold). With VRR, the monitor waits for the game to finish the frame before refreshing the screen, so the monitor stays dark until the new frame arrives. And guess what happens if there's a disk-access stall (1/15sec)? Displays start turning back on to repeat-strobe or sample-and-hold the current frame, to avoid the flicker of erratic strobed-VRR approaches.
Colonel_Gerdauf wrote:
27 May 2022, 15:49
Assuming binary code, this would be pre-frame blanking:
Unfortunately, laws of physics dictate the following for motion-artifact-free BFI:
(A) One pulse per frame
(B) Contiguous blank time
(C) No time differential between spatially adjacent pixels
(D) Must be true simultaneously for spatially adjacent pixels
(E) Must be true simultaneously for temporally adjacent pixels
(F) Motion blur reduction can be no better than the contiguous OFF time you can do.

Rolling BFI violates (C) and (D) slightly, but only by a very tight time differential (1080p at 120Hz ≈ 135KHz scan rate = 1/135000sec per row), resulting in only a very minor artifact (scan skew, www.testufo.com/scanskew). Anything less contiguous than this is guaranteed to produce more blur or worse artifacts, unfortunately.

When you apply this mandatory principle to strobed VRR, you cannot avoid flicker when frame rates drop toward 30fps. So ugly compromises are made to prevent flicker, and all methods of pre-emptively avoiding flicker produce a massive loss of blur reduction. There is no bypassing this laws-of-physics compromise. Also don't forget the variable-strobe-crosstalk effect (from varying amounts of blank time: some blank intervals are long enough to hide LCD GtG, others are not).

Also, even a 10 microsecond error in strobe timing can create human-visible flicker. A 1ms pulse versus a 1.01ms pulse is a 1% difference in the number of photons, which shows up as a human-visible "candlelight-style" flicker if the oscillation between 1ms and 1.01ms happens at a sufficiently low cyclic frequency.

We discovered this when we were trying to troubleshoot a flickering strobe backlight, and the flicker was traced to a 10 microsecond error. Ouch. Because of this, BFI must always be far more mathematically precise than a non-BFI refresh cycle -- whether the error comes from a hardware accident or a BFI algorithm. I added it as a line item to The Amazing Human Visible Feats Of The Millisecond.
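
The arithmetic behind that claim is simple enough to write out (illustrative numbers only; a real backlight also has rise/fall behaviour this ignores):

```python
# With a constant-brightness backlight, photons emitted per strobe scale with
# pulse width, so a small timing error becomes a brightness modulation.
nominal_ms = 1.00
erroneous_ms = 1.01           # 10 microseconds longer
delta = (erroneous_ms - nominal_ms) / nominal_ms
print(f"{delta:.1%} more light per strobe")   # 1.0% -- visible as slow "candlelight" flicker
```
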
Colonel_Gerdauf wrote:
27 May 2022, 15:49
This one would require a fixed refresh, as any sudden changes would result in a loss of a blank, and this would introduce visible and uncomfortable inconsistencies which would disrupt the experience and can cause nausea, especially for those with low tolerance to the effects of flickering.
That can help, but reduces motion blur much less.

Let's illustrate a problem with the laws of physics in display motion blur (the numbers are checked in the quick sketch after this list):
- A 90% motion blur reduction requires 90% contiguous black time on a per-pixel basis (spatially and temporally)
- 60fps 60Hz sample-and-hold at 2000 pixels/sec is still 33 pixels of motion blur.
- Perfect squarewave BFI (whether global or rolling) that flashes 10% and is dark 90% still has 3.3 pixels of motion blur.
- Any flicker softener (e.g. phosphor fade or spreading-out algorithms) is always worse than this.
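
A quick sketch that reproduces those numbers (assuming GtG=0 and framerate=Hz; the function is purely illustrative):

```python
# Motion blur in pixels = motion speed (px/s) * pixel visibility time (s).
def blur_px(speed_px_per_sec: float, visibility_sec: float) -> float:
    return speed_px_per_sec * visibility_sec

speed = 2000                                   # px/sec panning speed
print(blur_px(speed, 1 / 60))                  # 60Hz sample-and-hold: ~33.3 px
print(blur_px(speed, (1 / 60) * 0.10))         # 10%-duty squarewave BFI: ~3.3 px
print(blur_px(speed, 1 / 240))                 # 240fps 240Hz sample-and-hold: ~8.3 px
```
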
Colonel_Gerdauf wrote:
27 May 2022, 15:49
111-0-111-0-111
This only reduces motion blur by 25%.
A lot less flicker, but not much blur reduction.
You can't have your cake and eat it too.

Again, research has repeatedly led to this:
(A) One pulse per frame
(B) Contiguous blank time
(C) No time differential between spatially adjacent pixels
(D) Must be true simultaneously for spatially adjacent pixels
(E) Must be true simultaneously for temporally adjacent pixels
(F) Motion blur reduction can be no better than the contiguous OFF time you can do.
50% contiguous off time = 50% blur reduction without motion artifacts
75% contiguous off time = 75% blur reduction without motion artifacts
90% contiguous off time = 90% blur reduction without motion artifacts

So algorithms that soften the flicker simply turn the squarewave into a softer wave (sinewave, trianglewave, phosphor-decay simulation, and other fade-in/fade-out systems) -- but there can be only one rise and only one fall per unique frame to get motion-artifact-free BFI.
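
As a sketch of that idea (my own toy envelope, not any shipping strobe algorithm), a raised-cosine "fade strobe" keeps exactly one rise and one fall per frame while the rest of the frame stays contiguously dark:

```python
import math

# duty = fraction of the frame the pixel is lit (e.g. 0.25 for 75% blur reduction).
def strobe_envelope(t_frac: float, duty: float, soft: bool) -> float:
    if t_frac >= duty:
        return 0.0                              # contiguous dark time for the rest of the frame
    if not soft:
        return 1.0                              # hard squarewave pulse
    # single rise + single fall within the lit window
    return 0.5 - 0.5 * math.cos(2 * math.pi * t_frac / duty)

samples = [strobe_envelope(i / 100, duty=0.25, soft=True) for i in range(100)]
print(max(samples))                             # peaks near 1.0
print(sum(1 for s in samples if s > 0))         # lit for roughly a quarter of the frame
```
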
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Colonel_Gerdauf » 27 May 2022, 19:54

With the rolling BFI you have shown, does the direction of the scan matter with regard to the positive and negative effects of BFI?

For example, what happens when you have horizontal rolling? I am not sure how it visually changes anything. For one, it could enforce pattern consistency when someone wants to set up multiple displays with mixed portrait and landscape layouts. One concern that I think I can see is the amount of time a scan takes to complete horizontally versus vertically.

This might be computationally weird, but what happens when the on-scans travel along a 1:1 diagonal path?

As to rolling strobe vs dot-flip strobe, we are again at a discussion of compromise: flickering vs artifacts, in this case. Like I mentioned, 75%-off is straight-up unacceptable for me; that is going to give me intense vertigo very quickly. If it truly is the case that BFI cannot function as intended without the irritating flickering, then quite frankly I want BFI to be killed off entirely, the way 3D TVs have gone.

If they want a "CRT-like effect" then they need to come up with something else. I do not like the compromise being made here: true motion feeling at the cost of some flicker? Well, I want ZERO flicker, thank you very much.

And a side note: this CRT fanboyism over BFI has another issue that I am struggling to cope with; they seem to have a revisionist lens on how display refresh has historically been designed and how sample-and-hold came to be. Similar things have happened with regard to plasma displays, and I have no patience for seeing it crop up again.

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Chief Blur Buster » 27 May 2022, 20:03

Colonel_Gerdauf wrote:
27 May 2022, 19:54
With the rolling BFI you have shown, does the direction of the scan matter with regard to the positive and negative effects of BFI?

For example, what happens when you have horizontal rolling? I am not sure how it visually changes anything. For one, it could enforce pattern consistency when someone wants to set up multiple displays with mixed portrait and landscape layouts. One concern that I think I can see is the amount of time a scan takes to complete horizontally versus vertically.
Scan direction doesn't really matter (top to bottom, bottom to top, left to right, right to left, even diagonal). It only changes the direction of scan skew, as seen at www.testufo.com/scanskew ... You are still keeping tight refresh time differentials between adjacent pixels.

Colonel_Gerdauf wrote:
27 May 2022, 19:54
This might be computationally weird, but what happens when the on-scans travel along a 1:1 diagonal path?
You'd simply get the most scan skew for motion perpendicular to the scan direction. Any of the 360 degrees of scan direction can be used, as long as you keep the refresh time differentials super-tight between spatially adjacent pixels.

But in practice, a horizontal or vertical scanout is used for simplicity on row-column-addressed displays, and because it is most compatible with the raster delivery methodology of display cables (serialization of a 2D image over a 1D cable), which is a perfect fit for lagless scanout simply by streaming pixel rows straight to the panel (with only pixel-row processing algorithms).

Colonel_Gerdauf wrote:
27 May 2022, 19:54
As to rolling strobe vs dot-flip strobe, we are again at a discussion of compromise: flickering vs artifacts, in this case. Like I mentioned, 75%-off is straight-up unacceptable for me; that is going to give me intense vertigo very quickly. If it truly is the case that BFI cannot function as intended without the irritating flickering, then quite frankly I want BFI to be killed off entirely, the way 3D TVs have gone.
That's why we're big fans of retina refresh rates.

Flickerless 240fps 240Hz reduces motion blur by 75% versus 60fps 60Hz.

4ms of frame visibility can be achieved by pulsing (a 4ms flash) or by contiguousness (a full second of back-to-back 4ms frames).
See below:

[Images: pixel-visibility diagrams comparing a 4ms flash per frame with contiguous 4ms sample-and-hold frames]

So you see, from motion blur mathematics (for the fastest possible 0ms-GtG display), assuming framerate = Hz:
1. Motion blur always spans from the start of pixel visibility to the end of pixel visibility, per frame.
2. On impulsed displays, motion blur is controlled by the contiguous flash time (e.g. a 1/240sec contiguous flash = 1/240sec of blur).
3. On sample-and-hold displays, motion blur is controlled by the frametime (e.g. a 1/240sec frametime = 1/240sec of blur).

Now, do you understand yet, why we're big fans of using brute framerate to reduce display motion blur instead of BFI?

This is good when the refresh rate and framerate are native (original), as with games, VR, scrolling, panning, etc.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Chief Blur Buster » 27 May 2022, 20:14

The great news is that refresh rate parallelization is one easy trick to get higher Hz when the budget allows (e.g. military simulators or ride simulators).

For example, 16 different 60Hz projectors projecting onto the same screen, each flashing for 1/1000sec in round-robin fashion (strobed LCoS projectors), generate the equivalent of sample-and-hold 960fps 960Hz. You can even use 16 separate video files, one per output, on a genlock bus, sourced by splitting sped-up Phantom Flex 4K 960fps footage.
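
The round-robin arithmetic, written out as a toy sketch (numbers from the example above; nothing here is actual projector firmware):

```python
num_projectors = 16
projector_hz = 60
effective_hz = num_projectors * projector_hz        # 960Hz sample-and-hold equivalent
frame_period = 1 / effective_hz                      # ~1.04 ms between flashes overall

# Phase offset of each projector's 1/1000sec flash within its own 1/60sec refresh:
for p in range(num_projectors):
    offset_ms = p * frame_period * 1000
    print(f"projector {p:2d}: flash starts {offset_ms:5.2f} ms into each 1/60 sec cycle")
```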

Retina refresh rates can be achieved today in the lab at incredibly high cost, via multiple kinds of refresh-rate-combining techniques such as this. It is not yet commercialized, but refresh rate parallelism (1000fps+ 1000Hz+ sample and hold) is a good shortcut to being concurrently blurless + flickerless.

Subdivision of the display backplane also solves a lot of problems; see the theoretical 960Hz OLED. Jumbotrons already refresh at 600-1920Hz today via repeat-refresh of the same frame, and they can be modified with faster ASICs and FPGAs (per 32x32 module) to produce a high native refresh rate on MiniLED/MicroLED/Jumbotron-type displays; see this post about retina-refresh-rate LED arrays. There are many shortcuts to help jump more dramatically up the curve of diminishing returns, and it is all already being done in multiple labs worldwide.

The bottom line is that retina Hz is coming sooner than many expect (at least at the top end), now that many of us have realized we have to jump more dramatically up the curve of diminishing returns. Existing tech can do 8K 1000fps 1000Hz via source-side parallelization (e.g. 8 different 8K120 cameras capturing with a 1/1000sec shutter in round robin) and display-side parallelization via round-robin projector-stacked strobes as described. Many methods of refresh rate combining are possible.

Or, if you do it with DLP, you have to disable its temporal dithering, which interferes with refresh rate progress. For example, with 1920Hz 1-bit DLP mirror projectors, you can stack 24 different 1-bit monochrome DLP projectors (each handling 1 bit of the 24-bit color space) onto the same screen for 1920fps 1920Hz. ViewPixx has a true native 1440Hz DLP mode in 1-bit monochrome, but you can use any TI DLP chip + FPGAs, stacking 24 to 36 chips onto the same screen to get a zero-rainbow, zero-temporal-dither DLP that produces perfect native 24-bit to 36-bit color at the maximum mirror rate (960Hz, 1440Hz, 1920Hz, or even 2880Hz).
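
A toy sketch of that bit-plane split (illustrative only, using numpy; a real projector stack would do this in FPGA logic): each 1-bit projector displays one bit of the 24-bit RGB colour space, so summing the weighted planes reconstructs the original colour without temporal dithering.

```python
import numpy as np

rgb = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)   # toy 4x4 image

planes = []
for channel in range(3):          # R, G, B
    for bit in range(8):          # 8 bits per channel -> 24 one-bit planes
        planes.append(((rgb[:, :, channel] >> bit) & 1, channel, bit))

# Reconstruct: each projector contributes (bit value) * 2^bit to its channel.
recon = np.zeros_like(rgb, dtype=np.uint16)
for plane, channel, bit in planes:
    recon[:, :, channel] += plane.astype(np.uint16) << bit

assert np.array_equal(recon, rgb)  # 24 stacked 1-bit planes == original 24-bit image
```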

My favourite projector to do it on would be LCD or LCoS, as long as GtG100% could fit within the dark period between strobes. You can use mechanical spinning discs in front of the lenses of the LCD/LCoS projectors to make sure only one projector is illuminating at a time, with the strobe phase adjusted to minimize artifacts (similar to how the ViewSonic XG2431 Strobe Utility does it) -- or the strobe wheel slit can even act as a rolling shutter, in sync with LCD scanout, to stay away from the GtG fade zone (seen in high-speed videos).

We need to jump 120Hz->1000Hz (a 7.3ms persistence difference) to get roughly the same benefit as 60Hz->120Hz (an 8.3ms persistence difference), for example -- so we recently lit a fire under a lot of refresh rate parallelization approaches worldwide.
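
The diminishing-returns arithmetic, written out (assuming sample-and-hold persistence = 1/Hz at framerate = Hz):

```python
def persistence_ms(hz: float) -> float:
    return 1000 / hz

print(persistence_ms(60) - persistence_ms(120))     # ~8.3 ms saved going 60Hz -> 120Hz
print(persistence_ms(120) - persistence_ms(1000))   # ~7.3 ms saved going 120Hz -> 1000Hz
print(persistence_ms(1000) - persistence_ms(4000))  # only ~0.75 ms left going 1000Hz -> 4000Hz
```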

Few researchers realized until recently the need to jump more dramatically up the curve of diminishing returns, but fortunately technology solutions have already been found and are already being engineered this decade.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Chief Blur Buster » 27 May 2022, 20:27

Since this is all display engineering talk, I have moved this thread to Area 51.

This is relevant to inexperienced manufacturers (especially those who just rebadge existing technology with minor modifications). Modifications to projector firmware, for example, can make refresh rate parallelization easier across multiple kinds of very disparate projector technologies, once you know the technology considerations of each specific projector.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Colonel_Gerdauf
Posts: 17
Joined: 24 May 2022, 12:24

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Colonel_Gerdauf » 27 May 2022, 20:35

Chief Blur Buster wrote:
27 May 2022, 20:03
Now, do you understand yet, why we're big fans of using brute framerate to reduce display motion blur instead of BFI? This is good when the refresh rate and framerate are native (original), like for games and VR.
Putting aside technical differences of perspective, I do understand the value of going for raw refresh improvements.

But there remains an issue that is never going to get resolved on the software end: frame-locked content. This remains a sticking point that BFI fans shove into every discussion. What would be your solution here? Flickering via BFI is a no-go for me, so the alternative I see is interpolation, which, despite great strides made on TVs, remains the butt of jokes among movie buffs and PCMR types (and neither are the brightest bulbs of the bunch).

Another issue for which I do not see a viable resolution is how to deal with the incrementalism you describe. With everything going on in design, going to full 120, then 1000, then 4000 would require long stretches of time. People do not have the patience, or even the attention span, to stay engaged and keep pushing.

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Chief Blur Buster » 27 May 2022, 20:56

Colonel_Gerdauf wrote:
27 May 2022, 20:35
But there remains an issue that is never going to get resolved on the software end: frame-locked content. This remains a sticking point that BFI fans shove into every discussion. What would be your solution here?
There's really not much we can do, but brute Hz enables a lot of custom BFI algorithms to be outsourced to the software layer.

The great news is that brute refresh rate allows you to invent algorithms. Like a CRT electron beam simulator. A 960Hz display can do a software-based simulation of a CRT electron beam at 16 digital refresh cycles per CRT Hz (for 60Hz CRT simulation).

CRT electron beam simulators start to work usably well when you have at least 4-6 refresh cycles per CRT Hz, especially on 0ms-GtG displays like the upcoming 240Hz OLEDs about to come onto the market.
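
As a rough sketch of how such a simulator slices the work (a toy model, not the actual TestUFO/shader implementation), each high-Hz sub-refresh lights only the band of rows the simulated beam just scanned, with a simple decay tail behind it:

```python
def crt_subrefresh(frame_rows: int, sub_refreshes: int, sub_index: int, decay: float = 0.5):
    """Return per-row brightness (0..1) for one sub-refresh of one simulated CRT frame."""
    band = frame_rows // sub_refreshes                 # rows freshly scanned this sub-refresh
    brightness = [0.0] * frame_rows
    beam_end = (sub_index + 1) * band
    for row in range(beam_end):
        age = (beam_end - 1 - row) // band             # how many sub-refreshes ago this row was lit
        brightness[row] = decay ** age                 # fresh band = 1.0, older bands fade
    return brightness

# A 960Hz panel gives 16 sub-refreshes per simulated 60Hz CRT frame; toy numbers here:
print(crt_subrefresh(frame_rows=8, sub_refreshes=4, sub_index=1))
```
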

A sufficiently brute refresh rate can simulate any retro display (a plasma look, a DLP look, a CRT look, a digital BFI look, etc). If the display is already temporally and spatially retina, you have a superset capable of simulating past displays to your heart's content, purely via software algorithms.

Or invent unorthodox algorithms that blend sample-and-hold with impulsing -- whether an alpha-blended continuum of rolling scan and sample-and-hold, or an alpha-blend of multiple simulated display technologies (combining their advantages and disadvantages into compromises).

Invent your own point on that infernal flicker-vs-blur tradeoff (it's almost as frustrating as quantum uncertainty -- you cannot nail down both sides at once).

Colonel_Gerdauf wrote:
27 May 2022, 20:35
What would be your solution here? Flickering via BFI is a no-go for me, so the alternative I see is interpolation, which, despite great strides made on TVs, remains the butt of jokes among movie buffs and PCMR types.
Although I dislike interpolation for movies, interpolation is gradually becoming better and better thanks to things like artificial intelligence understanding the scenery -- basically, think of AI as an ultrafast Photoshop artist filling in missing detail (e.g. parallax issues) and de-blurring camera shutter motion blur to compensate for the increase in frame rate.

The best AI-based approaches are unfortunately non-realtime and can't be done in real time inside a TV -- they are computationally intensive. The ultimate would be, metaphorically, a theoretical filmmaker-optimized DALL-E 2 (or, er, a future GPT-4-class variant) "improving" each film frame as perfectly as it thinks possible (including referencing other frames to fill in parallax uncoverings better than guesswork), removing original blur, and deleting interpolation artifacts intelligently. But real-time approaches are getting better and better.

Now, when it comes to game material, it is less of a black box. You know the geometry (Z-buffer) and you know the high Hz of the controller, and you can use that data to make it less black-box and more perfect. Interpolation is not the only algorithm at play; there are also extrapolation and reprojection, plus AI-based approaches (like DLSS) that turn a low-resolution render into a lossless-looking 4K image. I cover this in Frame Rate Amplification Technology.

If you have enough data, such as high-Hz controllers (1000Hz head trackers, etc.), you can perceptually-flawlessly reproject scenery at a higher frame rate than the GPU render rate. This is the technique Oculus (er, Meta) virtual reality uses to convert 45fps to 90fps without artifacts (except for hand-tracking stutter). Frame rate amplification is a technology that exists today (at 2x-3x factors) and will improve massively.
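
A toy sketch of the reprojection idea (the general principle, not Oculus' ASW implementation): shift the last rendered frame by the tracker's yaw delta to synthesize extra displayed frames between GPU renders.

```python
import numpy as np

def reproject(frame: np.ndarray, yaw_delta_deg: float, px_per_degree: float) -> np.ndarray:
    """Shift the image horizontally by the head-yaw delta; edge filling is ignored here."""
    shift_px = int(round(yaw_delta_deg * px_per_degree))
    return np.roll(frame, -shift_px, axis=1)

rendered = np.zeros((90, 160), dtype=np.uint8)       # last 45fps render (toy resolution)
rendered[:, 80] = 255                                 # a vertical line in the scene
extra_frame = reproject(rendered, yaw_delta_deg=1.0, px_per_degree=10.0)
print(np.argmax(extra_frame[0]))                      # line now at column 70: shifted 10 px
```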

Retro-Friendly Purist Approach:
For 35mm purists, one can simulate a 24Hz double-strobe film projector (a 180-degree shutter spinning at 48 cycles per second, flashing each film frame twice) via a 96Hz refresh rate screen doing double-strobe BFI. You get the double-image artifact, but your motion blur is significantly reduced. This is a fair tradeoff for many 35mm purists who want a look identical to yesterday's 35mm projectors, though some people prefer more blur over a double-image effect. Everybody's preference is different.

Colonel_Gerdauf wrote:
27 May 2022, 20:35
Another issue for which I do not see a viable resolution is how to deal with the incrementalism you describe. With everything going on in design, going to full 120, then 1000, then 4000 would require long stretches of time. People do not have the patience, or even the attention span, to stay engaged and keep pushing.
I expect it to play out roughly like resolutions did -- VHS, then DVD, then 720p, then 1080p, then 4K, then 8K -- except a little more slowly this decade, e.g. one Hz doubling per decade. This decade will see the slow mainstreaming of 120Hz (it has already started on phones and tablets, as well as TVs). I still remember the 4K naysayers from 10 years ago.

The good news is that smaller Hz differences are easier to see on OLED, so you see the difference between 120Hz and 240Hz on OLED much more clearly than on LCD. This is because LCD GtG is no longer an error margin diminishing the differences between refresh rates. With OLED, MiniLED and MicroLED displays, it will be easier to see the benefits of the refresh rate race in smaller increments than 120Hz-vs-1000Hz. We just have to keep doubling mainstream Hz every now and then, preferably every 5 years rather than every 10 years (but every 10 years is more realistic).

Blur Busters is the bleeding edge, and we're in it for the long haul.

But we are trying to speed up the refresh rate race, through various tricks, at least at the high-end use cases as described earlier.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Better BFI Algorithms (Split Blanking / Timings / Progressive / etc)

Post by Chief Blur Buster » 27 May 2022, 21:12

Colonel_Gerdauf wrote:
27 May 2022, 20:35
Putting aside technical differences of perspective, I do understand the value of going for raw refresh improvements.
Are you sure you understand ALL the advantages? ;)

Sync technologies are simply a side effect of a finite refresh rate forcing various compromises -- visuals, stutter, latency, tearing, and other problems created by non-retina refresh rates through the aliasing effects of framerate versus refresh rate. Once you reach a retina refresh rate, where refresh-rounding error is no longer human-visible, you can play any frame rate natively, without VRR and without stutter.

Retina refresh rate (between ~1KHz and 20KHz+, depending on tech, resolution, FOV, motion speed, and use case) has the following advantages:
  • Behaves as per-pixel VRR
    ...(Perfect videophile 23.976p 24p 25p 50p 59.94p 60p even concurrent side-by-side windows, no pulldown judder)
  • Makes strobing obsolete
  • Makes VRR obsolete
  • Merges sync technology visuals (VSYNC ON = VSYNC OFF = VRR = same perceived visuals)
  • Merges sync technology latency (VSYNC ON = VSYNC OFF = VRR = same perceived latency)
  • Outsource all possible display & BFI algorithms to the software layer
    ...(CRT simulators, LCD simulators, plasma simulators, global BFI, rolling BFI, dream-your-algorithm, etc)
  • Reduces latency of low frame rates to nil (e.g. 2000Hz means 24fps delivered 1/2000sec per frame)
  • Reduces latency of lookahead-based* frame rate amplification approaches
    ...(e.g. 200fps->1000fps = 1/200sec lookahead latency at most)
  • All the above are display-independent behavior: Just supply brute Hz
Indirectly, it can simplify certain kinds of display engineering, since you no longer need to worry about strobing, VRR, or mode-switching for different video frame rates. It also helps push the refresh rate race along a little more geometrically than before, by removing certain overheads.

*Some frame rate amplification technologies (e.g. Oculus ASW) have no lookahead, simply relying on controller Hz to reproject geometry without a full 3D re-render. However, other, more black-box frame rate amplification technologies benefit from a lookahead step.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

