Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

theTDC
Posts: 25
Joined: 09 Mar 2021, 00:13

Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by theTDC » 09 Mar 2021, 00:33

I have a few questions here.

Also, for the record, I'm going to use 100 Hz, 200 Hz, and other refresh rates that are easy to do math with, because the exact details don't really matter.

First, is there any disadvantage to LightBoost? My understanding is that no matter the technology there will be pixel switching times. So, is this true for OLEDs as well?

Secondly, we do know that all LCD panels are going to have some switching times, so is there any real disadvantage to having a strobing backlight, at least to the extent that we can hide the pixel switching? By that I mean if we have a 100 Hz monitor, with a fairly slow 4ms switching time, is there any real argument for showing people that, over having something like a 5ms on/off strobing backlight?

Third, this site talks about the point where stuttering becomes blurring. So 15 Hz is obviously perceived as stuttering for everyone, while 500 Hz would be perceived as blurring for everyone. I'm sure that two things affect this, the speed of motion of the subject matter, and the age/genetics/etc of the individual viewing this. Those two things may make this question impossible to understand, but I'm wondering how low you can go in terms of refresh rate/persistence mismatch?

Because theoretically the strobe backlight persistence is completely independent of the refresh rate. We could have a 1ms backlight strobe for a 500 Hz monitor, or a 5 Hz monitor. Obviously the latter just flat out wouldn't work. But is that because 5 Hz is far below the stutter level?

To use real numbers, if we settle on a strobe backlight time of 1ms, what do we really get from a refresh rate of 500 Hz versus 100 Hz? I guess we could have more light for the same backlight level, and I mean by definition the picture would update more often. However, does that actually matter? If we had a 100 Hz monitor with just a 1ms persistent backlight, would that be a very good monitor experience, comparable to something like a 500 Hz, 1ms persistence monitor? Or would it just not work?

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is there any disadvantage to lightboost?

Post by Chief Blur Buster » 09 Mar 2021, 02:31

theTDC wrote:
09 Mar 2021, 00:33
First, is there any disadvantage to LightBoost?
Yes and no and depends.

1. LightBoost is a very old strobe backlight brand name. See the Motion Blur Reduction FAQ for more recent brand names of motion blur reduction. LightBoost has crappy colors as a compromise of its motion blur reduction, but newer strobe brands, such as PureXP and DyAc, can avoid that color degradation. And are you aware I now help manufacturers with strobe backlights -- Blur Busters Approved?

2. There are always pros/cons for strobe backlights; some have fewer cons than others. Some useful reading can be found here:
- Pixel Response FAQ: GtG Versus MPRT, the Two Pixel Response Benchmarks
- The Stroboscopic Effect of Finite Frame Rates
- Strobe Crosstalk FAQ
- Blur Busters Law: The Amazing Journey To Future 1000 Hz Displays
- Why is Refresh Rate Headroom Good For Strobing?

3. Motion blur is frame visibility time (MPRT persistence), and reducing motion blur is done by:
- Flashing the frame briefly (e.g. LightBoost, ULMB, etc.)
- Adding more frames (e.g. doing 240fps 240Hz, or doing 1000fps 1000Hz)
Regardless of how it is done, 2ms of frame visibility achieved either way (a 2ms flash, or flickerless 500fps) produces the same tracking-based motion blur, assuming GtG=0 (or GtG hidden).
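To make that arithmetic concrete, here is a tiny Python sketch of the relationship (my own illustration; the 960 pixels/second tracking speed is just an example value): blur width in pixels is simply frame visibility time multiplied by eye-tracking speed, so a 2ms flash and flickerless 500fps sample-and-hold land on the same number.

Code: Select all

# Minimal illustration (assumes GtG = 0): blur width = frame visibility time x eye-tracking speed.

def blur_px(frame_visibility_ms, speed_px_per_sec):
    """Perceived motion blur trail width, in pixels."""
    return speed_px_per_sec * (frame_visibility_ms / 1000.0)

speed = 960  # pixels/second, the classic TestUFO speed

# 2 ms strobe flash on a 100 Hz display:
print(blur_px(2.0, speed))           # ~1.9 px of blur

# Flickerless sample-and-hold 500 fps @ 500 Hz (frame visible 1/500 s = 2 ms):
print(blur_px(1000.0 / 500, speed))  # same ~1.9 px of blur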
theTDC wrote:
09 Mar 2021, 00:33
My understanding is that no matter the technology there will be pixel switching times.
You have to decouple two things in your thinking when it comes to LCDs (unlike self-emissive OLEDs, which have no backlight): panel switching time and backlight switching time. Sometimes it's possible to hide the pixel switching time completely in the dark periods of a strobe backlight. The problem is that real-world GtG doesn't always fit in the dark period between strobe flashes.

But some LCDs can perfectly hide LCD GtG between refresh cycles, like the Oculus Quest 2 VR LCD, which can reduce motion blur better than a CRT can, with a perfect zero-crosstalk rating even for top/center/bottom (it's one of the few LCDs able to do so). I own a Quest 2 VR headset, and it's superlative for a strobed LCD -- the best I've ever seen. It's very hard to hide real-world GtG pixel switching time in the backlight-OFF periods, but it's possible.
theTDC wrote:
09 Mar 2021, 00:33
Secondly, we do know that all LCD panels are going to have some switching times, so is there any real disadvantage to having a strobing backlight, at least to the extent that we can hide the pixel switching? By that I mean if we have a 100 Hz monitor, with a fairly slow 4ms switching time, is there any real argument for showing people that, over having something like a 5ms on/off strobing backlight?
This is a common problem; it's called strobe crosstalk. I've seen very bad crosstalk (especially on VA panels and old IPS panels). But TN panels and new "Fast IPS" panels can have crosstalk below 3% intensity, and sometimes all the way to perfect (0%) in the case of the Quest 2's virtual reality IPS LCD -- one of the world's best strobed LCDs.

Image

The faster the LCD GtG, the more complete it becomes before the backlight turns back on. I've seen LCDs with worse than 25% crosstalk, and I've seen LCDs with perfect 0% crosstalk. Most good LCDs are between 1%-3%.

Moral of the story: for good strobe backlights, stick to TN LCDs and "Fast IPS" LCDs (2020-and-newer 165Hz IPS), and use some extra refresh rate headroom, preferably at a 50% ratio (e.g. strobe a 240Hz LCD at 120Hz). I also believe the Oculus Quest 2 VR LCD is, in theory, a 240Hz "Fast IPS" LCD being heavily under-driven at 72Hz or 90Hz to achieve its perfect zero-crosstalk ability. It's easier to hide GtG completely from human eyes this way.

Right now, some strobe backlights can go as low as approximately 0.5ms MPRT (or less) at their most extreme settings, though it gets very dark. I can see the human-visible difference between 0.5ms MPRT and 1.0ms MPRT in the TestUFO Panning Map Test at 3000 pixels/second, because 1ms MPRT is still 3 pixels of motion blur, enough to obscure the tiny text in Google Maps to less-than-CRT clarity. Yes, on some of my strobed LCDs I can actually read the tiny 6-point street map labels. With NVIDIA ULMB at Pulse Width = 30, PureXP set to "Ultra", or BenQ DyAc persistence (pulse width) halved, you get the roughly 0.5ms MPRT you need to read the Google Maps street name labels scrolling past at 3000 pixels/second. (You also need to fullscreen the test, so you have more time to eye-track the street names. Even better 4K 0.5ms-MPRT strobed displays will arrive eventually, too.)

That's why new 2020 VR LCDs (0.3ms MPRT) can produce less motion blur than OLED VR (2ms MPRT). It comes down to the Talbot-Plateau law (laws of physics): you need to flash twice as bright to get halved persistence, and it's easier to outsource the lighting to an overkill water-cooled backlight than to get it from direct-emission OLED pixels. So you can cram more lumens into a briefer strobe flash, allowing lower MPRTs in the best LCDs than in the best OLEDs.
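As a rough back-of-envelope of that Talbot-Plateau tradeoff (the 400-nit peak is a made-up example, not a measurement of any particular panel): perceived brightness of a strobed display is approximately peak luminance times duty cycle, so halving the pulse width forces the backlight to flash roughly twice as bright to keep the same apparent brightness.

Code: Select all

# Rough Talbot-Plateau arithmetic: perceived brightness ~= peak luminance * duty cycle.
# Example numbers only; real panels vary.

def perceived_nits(peak_nits, pulse_ms, refresh_hz):
    duty_cycle = pulse_ms / (1000.0 / refresh_hz)
    return peak_nits * duty_cycle

peak = 400.0  # hypothetical peak backlight output in nits

print(perceived_nits(peak, 2.0, 120))  # 2 ms pulse @ 120 Hz -> ~96 nits
print(perceived_nits(peak, 1.0, 120))  # 1 ms pulse @ 120 Hz -> ~48 nits
# To keep ~96 nits at a 1 ms pulse, the backlight must flash roughly 2x brighter.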

The best blur-reduced LCD is far ahead of the best blur-reduced OLED, but you have to really cherry-pick the best of each. The LG CX OLED, with its long 50%:50% duty-cycle BFI, still produces 4ms MPRT (1/240sec), but between LCD strobe crosstalk and the OLED beauty (blacks and colors), it can look preferable even to a 1ms MPRT strobe backlight on a washed-out-colors LCD. These LCD disadvantages are eventually fixable via better local dimming and better LCD backlights (e.g. Nanosys Quantum Dot), though those are not yet common in desktop gaming monitors.

If you want to learn more about how strobe tuning affects the crosstalk, read ANIMATIONS: Strobe Tuning.
theTDC wrote:
09 Mar 2021, 00:33
Third, this site talks about the point where stuttering becomes blurring. So 15 Hz is obviously perceived as stuttering for everyone, while 500 Hz would be perceived as blurring for everyone. I'm sure that two things affect this, the speed of motion of the subject matter
Yes, speed affects things.

See Making Of: Why Are TestUFO Display Motion Tests 960 Pixels Per Second? as well as the Vicious Cycle Effect section of the 1000Hz Article

At 8000 pixels/sec, 1ms MPRT translates to 8 pixels of display motion blur.
theTDC wrote:
09 Mar 2021, 00:33
and the age/genetics/etc of the individual viewing this.
This is primarily a motion-nonblindness and flicker-fusion-threshold thing.
-- If you can SEE a music string visibly vibrating, then you can see the stutter of a display whose Hz is below that string's frequency.
-- If you can SEE a music string blur as it vibrates, then you can see the motion blur of a display whose Hz exceeds that string's frequency.

It's the same thing, it's equivalent: how sensitive you are to seeing something vibrate, or to noticing that something vibrates fast enough to blur. Whatever sensitivities you have watching real-world vibrations generally map to your sensitivities to a refresh rate on a display (assuming GtG=0, which preserves this equivalence).

Please view this YouTube video to understand how sample-and-hold is simply vibration of the finite framerate acting against analog-moving-eyes:

[embedded video]


Certainly genetics/sensitivity will vary the flicker fusion threshold (approximately 60-80Hz for a lot of things), where visible flicker (including edge-flicker of low-frequency sample-and-hold aka "stutter") blends into non-flicker (including blurry-edge of high-frequency sample-and-hold aka "persistence display motion blur").
theTDC wrote:
09 Mar 2021, 00:33
Those two things may make this question impossible to understand
This topic is not impossible to understand -- it's simple Motion Blur 101 to me. It's very elementary. If you ever sit in one of my classrooms (a special VIP contracting service clients can hire me for) and watch my 240Hz TestUFO powerpoints, you'll pretty much agree with me.

I turn all the complex motion blur calculus into E=mc^2 equivalent simplicity.

Display engineers learn a lot from my show-and-tell. People who watch me teach say lots of superlative things about how much better they now understand display motion blur physics.

Image
Note: Forum readers who are in the industry, can contact me via services.blurbusters.com for these kinds of services, which I don't widely advertise yet.

It's Easier To Teach With a True 240 Hz Display
It is so much easier to teach display motion blur physics with a 240 Hz display, because I can show what happens at the doublings 15 -> 30 -> 60 -> 120 -> 240. It becomes quite obvious that doubling the frame rate consistently halves the amplitude for all humans (regardless of whether it's stutter amplitude or motion blur amplitude), and it's easy to predict how much less motion blur a higher frame rate produces.

Also, some things are completely independent of human sensitivity. Things like the amplitude are immutable constants, like pi and e, regardless of your flicker fusion threshold (your individual sensitivity to how stutters blend into blur). "Amplitude" is the same for stutter and blur: for stutter, it's the jump distance; for blur, it's the blur thickness. Almost everybody (who isn't motion blind) can see the amplitude, regardless of whether that amplitude presents as stutter or as motion blur.

Same for a vibrating string -- how far back-and-forth the string vibrates. Regardless of whether the string visibly vibrates or just blurs, nearly everyone can SEE the amplitude -- it's just that one human may see the string visibly vibrating while another human sees the string blurring. This is EXACTLY the same case for displays: almost everyone (when it's pointed out carefully) can see the amplitude. So they still see refresh rate improvement benefits regardless -- when trained to pay attention (e.g. eye-tracking www.testufo.com and comparing the frame rates). Nobody has ever told me they couldn't tell apart 240fps vs 120fps vs 60fps UFOs once I showed them on a large 240Hz display -- unless they had a medical issue such as motion blindness (akinetopsia).

That's why Blur Busters recommends geometric upgrades to refresh rates, to punch through the curve of diminishing returns: 60 -> 120 -> 240 -> 480 -> 960Hz. Or 60 -> 144 -> 360Hz. Esports players can upgrade more incrementally, but the average user needs to upgrade geometrically to derive benefits in fast-motion activity.

Motion blur physics is very easily extrapolatable once I demo frame rate doublings and refresh rate doublings beyond typical flicker fusion thresholds (aka 60Hz). This stuff is very easy to teach once someone owns a 240Hz display, but very hard to teach if you only own a 60 Hz display.
theTDC wrote:
09 Mar 2021, 00:33
but I'm wondering how low you can go in terms of refresh rate/persistence mismatch?
You can see for yourself with TestUFO Strobing via Software-Based Black Frames on a 60Hz display:

Animation: TestUFO Black Frames 30fps
Animation: TestUFO Black Frames 20fps
Animation: TestUFO Black Frames 15fps
Animation: TestUFO Black Frames 12fps
Animation: TestUFO Black Frames 10fps

Here's the 1/6th framerate version (10fps at 60Hz).
- Notice how the bottom two UFOs are similar in motion blur? It's because their frame visibility time is identical.
- 10fps with black frames flickers a lot, but it doesn't stutter the way ordinary sample-and-hold 10fps does. See?

(Best viewed on a desktop screen or large iPad tablet)



Now, please pick your jaw up off the floor at how much I've educated you with the above. I make teaching display motion blur physics simple!
And that's only 1% of my classroom teaching, right there.

(This is one of the few animations I can easily teach at 60Hz without needing 240Hz)

Notice that the second-to-last UFO in all of the above matches the motion blur of the full frame rate of your display (assuming your display's GtG is not mediocre). You can easily see that motion blur is frame visibility time: the unique frames of the last two UFOs are visible for one refresh cycle each, regardless of their actual frame rate. The main side effect is flicker / stroboscopic effect.
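If you want to poke at software BFI yourself outside the browser, here's a toy Python/pygame sketch of the same idea (it assumes pygame is installed, and clock.tick() only approximates vsync, so treat it as a rough illustration rather than a calibrated test like TestUFO): it shows a new frame on 1 of every 6 refreshes and black frames otherwise, so each unique frame is only visible for roughly one refresh cycle.

Code: Select all

# Toy software black-frame insertion, loosely in the spirit of the TestUFO demo above.
# Assumes pygame is installed; clock.tick() only approximates vsync, which is good
# enough for a sketch (a real test needs proper vsync, as TestUFO uses).
import pygame

REFRESH_HZ = 60
VISIBLE_EVERY = 6          # new frame on 1 of every 6 refreshes -> 10 fps content
SPEED_PX_PER_SEC = 480

pygame.init()
screen = pygame.display.set_mode((800, 200))
clock = pygame.time.Clock()
x, refresh = 0.0, 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    screen.fill((0, 0, 0))                      # black frame by default
    if refresh % VISIBLE_EVERY == 0:
        x = (x + SPEED_PX_PER_SEC * VISIBLE_EVERY / REFRESH_HZ) % 800
        pygame.draw.rect(screen, (255, 255, 255), (int(x), 80, 40, 40))
    pygame.display.flip()

    refresh += 1
    clock.tick(REFRESH_HZ)

pygame.quit()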
theTDC wrote:
09 Mar 2021, 00:33
Because theoretically the strobe backlight persistence is completely independent of the refresh rate.
Correct.

...Although there are cons. You do get the stroboscopic stepping effect of a low frame rate, as seen in The Stroboscopic Effect of Finite Frame Rates. So there are still benefits to 1000fps 1000Hz to get 1ms persistence without strobing.
theTDC wrote:
09 Mar 2021, 00:33
We could have a 1ms backlight strobe for a 500 Hz monitor, or a 5 Hz monitor. Obviously the latter just flat out wouldn't work. But is that because 5 Hz is far below the stutter level?
There is no stutter. Look at the above TestUFO animations with black frames.
It's simply annoying flicker. Nobody likes low-frequency flicker.
theTDC wrote:
09 Mar 2021, 00:33
To use real numbers, if we settle on a strobe backlight time of 1ms, what do we really get from a refresh rate of 500 Hz versus 100 Hz?
Less stroboscopic effect, as seen in the mouse arrow effect from my 480 Hz Monitor Tests

Image

Another problem, though:

Remember we have three situations:
(A) Moving eyes tracking moving objects
(B) Moving eyes past stationary objects
(C) Stationary eyes while moving objects scroll past

Flicker/strobing only fixes (A).
Situations (B) and (C) reveal problems of a low refresh rate, REGARDLESS of whether strobing is fixing the motion blur.
theTDC wrote:
09 Mar 2021, 00:33
I guess we could have more light for the same backlight level, and I mean by definition the picture would update more often. However, does that actually matter?
Yes. See above. (B) and (C).
theTDC wrote:
09 Mar 2021, 00:33
If we had a 100 Hz monitor with just a 1ms persistent backlight, would that be a very good monitor experience, comparable to something like a 500 Hz, 1ms persistence monitor? Or would it just not work?
Both will have the same motion blur for (A)
However, you will have more stroboscopic effect (phantom arrays) for situation (B) and (C)

TL;DR: Strobing and Low Refresh Rates Are Still a Humankind Band-Aid For Display Motion Blur. (Albeit It's a GOOD Band-Aid!)

Related reading: CRT Nirvana Guide for Disappointed CRT-to-LCD Upgraders.
...If you're shopping for the best possible motion blur reduction options.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


theTDC
Posts: 25
Joined: 09 Mar 2021, 00:13

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by theTDC » 09 Mar 2021, 17:30

EDIT: Response got rejected as spam, so I'm breaking it up into multiple parts.

PART 1

Well, first of all, let me just say that in all my years on the internet, this is probably the single most informative response I have ever received. I saw this just before going to bed last night, and didn't get a chance to finish reading all the material until just a few minutes ago. I think I'll break up my responses into multiple posts, but for now I'll just focus on my original question.

If I'm understanding this correctly, the reason you can't just add a strobing backlight to pretty much 100% of monitors with very little engineering is twofold.

1) The physical mechanism that updates the pixels works on a line by line method, and traditionally this takes the entire refresh time to finish. It is impossible to simultaneously update all the pixels. If I'm understanding this correctly, it's impossible to even update 2 or more pixels at the exact same time. Maybe there's SIMD for monitors? Either way it doesn't fundamentally change this.
2) The GtG measurements are misleading to the point of outright lying. 10%/90% is just not good enough, and "average time" can be an order of magnitude better than "worst time". In fact, some LCDs are so bad that the worst-case transitions take even longer than a refresh cycle!

So what would happen if we just took a 100Hz monitor and gave it a strobing backlight of 0.1ms? I picked 0.1ms because it's close to instantaneous. Let's assume that the worst-case transition takes 10ms, and average transitions are ~2ms. Also, we're assuming a full 10ms to scan from top to bottom, giving each pixel its new colour. What we would see is the top line of pixels being 100% transitioned to the new frame. The bottom line of pixels would be 99.9% still the old frame, and everything in the middle would be progressively worse, depending on the specific colour transitions required. You have a good still image of this effect in one of the linked resources.
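Here's that thought experiment as a few lines of Python, using the numbers above plus a simple exponential GtG model (the 3ms time constant is made up, purely to put numbers on it):

Code: Select all

# Toy model of the scenario above: 100 Hz refresh (10 ms), 10 ms top-to-bottom scanout,
# a 0.1 ms strobe flash at the very end of the refresh cycle, and a simple exponential
# GtG model (tau is a made-up time constant, purely for illustration).
import math

REFRESH_MS = 10.0      # 100 Hz
SCANOUT_MS = 10.0      # scanout takes the whole refresh cycle
FLASH_AT_MS = 10.0     # strobe fires right at the end of the cycle
ROWS = 1080
TAU_MS = 3.0           # hypothetical LCD settling time constant

def gtg_completion(row):
    row_updated_at = (row / (ROWS - 1)) * SCANOUT_MS   # when this row got its new value
    settle_time = max(0.0, FLASH_AT_MS - row_updated_at)
    return 1.0 - math.exp(-settle_time / TAU_MS)       # fraction of the transition done

for row in (0, ROWS // 2, ROWS - 1):
    print(f"row {row:4d}: {gtg_completion(row) * 100:5.1f}% transitioned at flash time")
# Top row is ~96% done, the middle ~81%, the bottom row ~0% -> severe crosstalk at the bottom.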

theTDC
Posts: 25
Joined: 09 Mar 2021, 00:13

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by theTDC » 09 Mar 2021, 17:31

PART 2

So that's basically horrible, and to make a strobing backlight work we need a few things to be true.

1) The physical mechanism that updates the pixels needs to scan from top to bottom in much, much less time than one refresh cycle.
2) All the pixels need to transition, worst-case scenario, to some acceptable degree of accuracy such as 99%, in the time remaining. Since the bottom line of pixels gets its updated information last, this time is (refresh period) - (full-screen scan time), or whatever you would call it. So for our 100Hz display, even with a 0.1ms strobe flash, if it takes 9ms to update the pixels while the screen is unlit, we only get 1ms for the bottom pixels to transition.
3) The time that the backlight is on subtracts from the time we have to complete this process. So if our backlight is on for 0.001ms, we basically have the full refresh time, but if it's a more reasonable 1ms, we have just 9ms. If it's 5ms, we have just 5ms.
4) Due to the above, and also to lower persistence, there is quite a strong argument for very low strobe times, of around 1ms or even less. However, this linearly lowers the brightness of the image, possibly to unacceptable levels. Ultimately, for that reason and others, such as stroboscopic effects, it would be better to just have a 1000Hz display; a strobe backlight is great, but far from perfect.
5) Using realistic numbers, if we have the backlight on for 1ms we might need to improve our scanout speed to about 4ms, from 10ms. Then the worst-case pixel transitions need to finish in 5ms, down from 10ms. So in this scenario we need to improve raster speed by 2.5x, and true GtG speed by 2x, to avoid horrible results. Actual implementations may differ in how much raster speed is increased versus GtG speed, but that simply shifts which one gets improved more (see the timing sketch below).
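And here's the timing budget from point 5 as a few lines of Python, just to make the subtraction explicit (illustrative numbers only, not a claim about any real monitor):

Code: Select all

# Strobe timing budget for the numbers in point 5 above (illustrative only).

def gtg_budget_ms(refresh_hz, scanout_ms, pulse_ms):
    """Dark time left for worst-case GtG after the scanout and the strobe flash."""
    refresh_period_ms = 1000.0 / refresh_hz
    return refresh_period_ms - scanout_ms - pulse_ms

# 100 Hz panel, original 10 ms scanout, 1 ms flash: no dark time left at all.
print(gtg_budget_ms(100, 10.0, 1.0))   # -1.0 ms -> impossible, crosstalk everywhere

# Accelerate the scanout to 4 ms and keep the 1 ms flash:
print(gtg_budget_ms(100, 4.0, 1.0))    #  5.0 ms -> worst-case GtG must finish in ~5 ms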

Because changing pixels to 99% accuracy is actually quite hard, there are some different approaches to this. One that you talked about, multipass, works on the principle that it's easier to build a really fast scanout mechanism, to the point where we scan twice per refresh: once to overdrive the pixels most of the way there, and a second time to give them the correct colour and let them "settle" into it. This works because, for some reason, it takes the pixels much longer to get to the right colour if they're just given the right colour directly. I don't know why, but it's interesting.

For the record, I found "strobe crosstalk" to be a fairly confusing term for "the pixels still haven't completely changed colour yet", but I guess it works. Anyway, I really underestimated how terrible LCD monitors really are, and how hard it is to just add a strobing backlight. You also helped me understand why it can be very nice on some monitors not to run a strobing backlight at the full refresh rate, because the panel won't have acceptable GtG response in that time window, so you'll get horrible image quality.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by Chief Blur Buster » 09 Mar 2021, 19:39

theTDC wrote:
09 Mar 2021, 17:30
EDIT: Response got rejected as spam, so I'm breaking it up into multiple parts.
Apologies. The spam filter is unusually aggressive on a new user's 1st and 2nd posts, and gradually disengages; most spammers post once and abandon. Your next post will be your 4th, which means you should now be able to post links/images/etc without the spam filter throwing a fit -- unless you're redflagged somehow by posting from an iffy ISP in a redflagged location that's currently attacking with spammers/DDoS/etc (this forum is under perpetual attack). I sincerely apologize in advance if the spam guard dog ate your homework (pun intended).
theTDC wrote:
09 Mar 2021, 17:30
Well, first of all, let me just say that in all my years on the internet, this is probably the single most informative response I have ever received. I saw this just before going to bed last night, and didn't get a chance to finish reading all the material until just a few minutes ago. I think I'll break up my responses into multiple posts, but for now I'll just focus on my original question.
A big Blur Busters mission is definitely to myth-bust refresh rates and motion blur reduction. Being a hobby-turned-business, Blur Busters is the go-to on these topics, educating the Internet about all the descendant myths of "Humans Can't Tell 30fps Versus 60fps" ;)
theTDC wrote:
09 Mar 2021, 17:30
If I'm understanding this correctly, the reason you can't just add a strobing backlight to pretty much 100% of monitors with very little engineering is twofold.
Yes. Sometimes you can do an "okay" job with a DIY strobe backlight, but it illustrates a lot of problems:

[embedded video]

theTDC wrote:
09 Mar 2021, 17:30
1) The physical mechanism that updates the pixels works on a line by line method, and traditionally this takes the entire refresh time to finish. It is impossible to simultaneously update all the pixels. If I'm understanding this correctly, it's impossible to even update 2 or more pixels at the exact same time. Maybe there's SIMD for monitors? Either way it doesn't fundamentally change this.
The electronics in many monitors have been upgraded to essentially do multiple-pixel and even full-pixel-row refresh, but it's still a chunked refresh in a top-to-bottom sweep. The limiting factor is the number of wires in the LVDS cables at the panel edge connector, as well as how many channels are driven concurrently. Such creativity allowed us to exit the 1024x768 60Hz 50ms-GtG LCD dark ages.

However, you can speed up the scanout (e.g. 1/240sec sweep of a 120Hz refresh cycle), allowing more idle time between refresh cycles too. This visually improves as follows:

Image

This is WHY you want the highest Hz, and then use a lower-Hz strobe -- so you have more time for GtG completion between refresh cycles.
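Here's the arithmetic for that refresh-rate-headroom trick as a tiny Python sketch (example numbers only):

Code: Select all

# How much "dark time" an accelerated scanout leaves between refresh cycles
# (illustrative arithmetic for the 1/240 sec sweep of a 120 Hz cycle mentioned above).

def idle_time_ms(refresh_hz, scanout_hz_equivalent):
    refresh_period = 1000.0 / refresh_hz
    scanout_time = 1000.0 / scanout_hz_equivalent
    return refresh_period - scanout_time

print(idle_time_ms(120, 120))  # ~0.0 ms -> scanout fills the whole cycle, no room to hide GtG
print(idle_time_ms(120, 240))  # ~4.2 ms -> nearly half the cycle available to hide GtG + flash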
theTDC wrote:
09 Mar 2021, 17:30
2) The GtG measurements are misleading to the point of outright lying. 10%/90% is just not good enough, and "average time" can be an order of magnitude better than "worst time". In fact, some LCDs are so bad that the worst-case transitions take even longer than a refresh cycle!
It's not quite a lie, because it's a VESA standard born of signal limitations -- it's hard to measure a signal below 10% or above 90% of the transition on a noisy oscilloscope. However, it is misleading from a human-perception perspective -- and you are right that real-world GtG is a problem for strobing.
theTDC wrote:
09 Mar 2021, 17:30
So what would happen if we just took a 100Hz monitor and gave it a strobing backlight of 0.1ms? I picked 0.1ms because it's close to instantaneous. Let's assume that the worst-case transition takes 10ms, and average transitions are ~2ms. Also, we're assuming a full 10ms to scan from top to bottom, giving each pixel its new colour. What we would see is the top line of pixels being 100% transitioned to the new frame. The bottom line of pixels would be 99.9% still the old frame, and everything in the middle would be progressively worse, depending on the specific colour transitions required. You have a good still image of this effect in one of the linked resources.
We have a crosstalk calibration test at www.testufo.com/crosstalk with which we monitor strobe artifacts.



With a strobe flash timing offset relative to panel refresh, you can adjust the position of the worst crosstalk:

Image

Animation taken from this thread: viewtopic.php?t=4596

Now, a larger VBI (more blanking time between refresh cycles) means it's easier to hide the crosstalk bar between refresh cycles. This is the visual strobe-adjustment method enjoyed by BenQ users at www.blurbusters.com/strobe-utility
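For the curious, here's a toy Python model of why the strobe-phase adjustment moves the crosstalk band (the GtG and scanout numbers are made up for illustration; real panels and firmware differ): the visible band is simply the set of rows whose GtG hasn't finished when the flash fires, and delaying the flash pushes that band toward the bottom and eventually off the screen.

Code: Select all

# Where the crosstalk band lands for a given strobe-phase offset (toy model, made-up numbers).
# The band of visible crosstalk is made of the rows whose GtG hasn't finished when the flash fires.

REFRESH_MS = 1000.0 / 120   # 120 Hz strobed refresh
SCANOUT_MS = 1000.0 / 240   # panel sweeps top-to-bottom in ~4.2 ms (fast-scan panel)
GTG_MS = 3.0                # assume a worst-case GtG of 3 ms
ROWS = 1080

def crosstalk_rows(flash_at_ms):
    """Rows still transitioning when the strobe flash fires at flash_at_ms into the cycle."""
    bad = []
    for row in range(ROWS):
        row_updated_at = (row / (ROWS - 1)) * SCANOUT_MS
        if row_updated_at <= flash_at_ms < row_updated_at + GTG_MS:
            bad.append(row)
    return (bad[0], bad[-1]) if bad else None

print(crosstalk_rows(5.0))  # roughly rows 518-1079: lower half still transitioning -> visible band
print(crosstalk_rows(6.5))  # roughly rows 907-1079: band squeezed to the bottom edge
print(crosstalk_rows(7.5))  # None: every row has settled -> zero crosstalk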
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


theTDC
Posts: 25
Joined: 09 Mar 2021, 00:13

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by theTDC » 09 Mar 2021, 20:29

Stroboscopic effects importance vs Motion Blur Part 1

"Remember we have three situations:
(A) Moving eyes tracking moving objects
(B) Moving eyes past stationary objects
(C) Stationary eyes while moving objects scroll past"

If I'm understanding this correctly, using your resources as a guide, if we take a relatively common thing such as moving the camera around fairly quickly in an FPS, the results are very different depending on where we're looking. So if we see an enemy near the right edge of the screen and look at it while we quickly whip the camera to it, the blur we experience is almost completely persistence-based, not framerate-based, within reason. However, for something that has different motion across the screen, we get the opposite effect. Crosshairs in an FPS are a great example: since our eyes are now moving right-to-left as the enemy moves right-to-left in our vision, the crosshairs in our peripheral vision are now "juddering" their way across our vision until the enemy is in the center and the two of them match. Is that correct? Conversely, if we are focused on the crosshairs while turning the camera, it's the enemy that now "skips" its way over to us.

But if I'm understanding this correctly, we completely solve the blurring just through low persistence. So that entire class of problem can be eliminated, and is only as bad at 1000Hz as at 100Hz + 1ms backlight. I find the stroboscopic effects annoying, but not like the "My eyes hurt and I physically cannot look at the screen," problems of sample-hold blur. The examples you gave of peeking through a small crack, in a door or through grass or whatever, were fairly illuminating. With a lower framerate, there is a much higher chance of simply missing data from the scene. But do you have any data on how people subjectively view these problems? I suspect I'm in the majority in being relatively okay with the stroboscopic effects after ~100 Hz or so, but very much not okay with the motion blur for day-to-day things such as scrolling through text on a phone.

If persistence blur is a much bigger problem for 99% of the population, this could be very good, because I'm not very bullish on us ever getting the hardware capable of powering 10,000 Hz monitors, even if we eventually get the monitors themselves. Frankly Moore's law has been dead for a very long time. While it's possible we might get 10x the GPU power by the end of the decade, our single thread CPU perf has improved just 13x since 2001, according to passmark, and I doubt it will pull another 13x increase in the next 100 years. On top of that, when we start getting into 100 microseconds territory, we might have to completely re-architect our chips, since the memory latency time between the GPU and CPU is many multiples of our refresh rate. Heck, a single cache miss that takes ~100ns is an entire 0.1% of our frametime. Yikes.

I really don't think that any video game simulation is going to be able to be run at 10,000 Hz, at least if it's at the level of our modern games, no matter how far into the future we go. So if it ends up being that there is quite a worse experience with, say, 1,000 Hz monitors at 0.1ms persistence, versus outright 10,000 Hz monitors, I think that's just going to be too bad for gamers. And that's assuming they can even run some modern-type game at 1,000 Hz.

Personally I don't really care about games, let alone modern ones, but I do care about real time 3D rendering, for things like architectural preview, interactive storytelling, and many others. I think there's lots of potential for those types of things, since the CPU simulation side is so trivial, so getting well in excess of 1,000+ Hz is really not that difficult. Frankly it's outright trivial. Once we spawn an avatar in there, and start with some rudimentary collision detection, interactions (like opening a door), it should still be very easy to run in excess of 1,000 Hz. With ~2004 era graphics, I think such scenes/workloads are already limited by the monitors we have to display them.

Here's to hoping the monitors can get there! It would certainly be for the best if we could make that happen, and while I don't understand the hardware nearly as well as the CPU/GPU hardware, you seem pretty bullish on the monitors getting there. I would love nothing more than to get my hands on some 10,000 Hz display, at even something like 640x360 resolution, running a simulation that fast. What an experience that would be.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by Chief Blur Buster » 11 Mar 2021, 02:08

This reply is also targeted at other readers of this thread, as you've clearly already explored the TestUFOs, judging from what you're saying:
theTDC wrote:
09 Mar 2021, 20:29
But if I'm understanding this correctly, we completely solve the blurring just through low persistence. So that entire class of problem can be eliminated, and is only as bad at 1000Hz as at 100Hz + 1ms backlight. I find the stroboscopic effects annoying, but not like the "My eyes hurt and I physically cannot look at the screen," problems of sample-hold blur.
As long as:
1. Triple match of framerate=stroberate=refreshrate
....(prevent duplicate image effect during scenario A above)
2. The rate of all three is high enough to prevent flicker.

...It can definitely be the lesser evil, compared to motion blur, for most people.

This is how virtual reality became popular again; they finally found a way to solve display motion blur with lightweight low-persistence displays, combined with powerful GPUs to generate 3D graphics. Blur Busters had a minor hand in steering VR direction early on (Oculus Kickstarter).

That said, you've probably figured out:
-- Some people are very bothered by stroboscopic effects
-- Some people are very bothered by flicker effects (even to high frequencies)
-- Technology should still continue to progress.

Someday we'll fix stroboscopics *AND* motion blur simultaneously -- it just requires formerly-unobtainium frame rates and refresh rates, which gradually start to become a reality over the coming years -- possibly by the end of this decade for the first high-end kilohertz displays (e.g. esports/VR), even if not mainstream. ASUS has now roadmapped a 1000 Hz display by the end of this decade, so it's only a stone's throw away at the high end.
theTDC wrote:
09 Mar 2021, 20:29
The examples you gave of peeking through a small crack, in a door or through grass or whatever, were fairly illuminating.
It's also the basis of the TestUFO Persistence animation, which becomes sharper at higher refresh rates:



At 60Hz, it's very horizontally pixelated-looking. Even if you change the line separation to 16, 32, or 64 pixels, the underlying 60Hz refresh rate permanently limits the resolution you can get through this effect. Double the Hz and it becomes sharper.

It's also the same principle as persistence-of-vision toys, as well as the old 1920s mechanical televisions with Nipkow disks that convert a flickering light (or a flickering pixel) into actual images. It's like waving your head past a crack and scanning the scenery through a slit (the bathroom stall maneuver) -- this works best at infinite refresh rate; at a limited refresh rate, you're getting resolution loss.
theTDC wrote:
09 Mar 2021, 20:29
With a lower framerate, there is a much higher chance of simply missing data from the scene. But do you have any data on how people subjectively view these problems? I suspect I'm in the majority in being relatively okay with the stroboscopic effects after ~100 Hz or so, but very much not okay with the motion blur for day-to-day things such as scrolling through text on a phone.
The majority of people are fine with stroboscopic effects, but there are some of us who get eyestrain even with 240 Hz strobe. The unluckiest people are the ones who are motionblur-sensitive and stroboscopics-sensitive.

The best practice to minimize eyestrain during strobing is definitely framerate=refreshrate=stroberate, because this eliminates a lot of double-image effects and amplified-jittering effects. You also need a gaming mouse polling at a few times higher Hz than the monitor, which is why the new 8000 Hz mice are a noticeable benefit for 360 Hz gaming monitors -- I was able to see mouse microstuttering on my 360 Hz monitor.

Strobing also amplifies the visibility of microstutter -- including mouse microstutters -- so you might want to use ~2000 Hz to brute-force those harmonic/beat-frequency microstutters out (microstutter/jittery motion is often a function of Hz-vs-Hz or Hz-vs-fps beat frequencies). This is also part of why I don't like VSYNC OFF with low-Hz strobing: jitter-feel is a known disadvantage of strobing unsynchronized frame rates.
theTDC wrote:
09 Mar 2021, 20:29
If persistence blur is a much bigger problem for 99% of the population, this could be very good, because I'm not very bullish on us ever getting the hardware capable of powering 10,000 Hz monitors, even if we eventually get the monitors themselves.
The Vicious Cycle Effect affects how low or high a refresh rate needs to be before it becomes retina. You need more time to eye-track fast, high-resolution moving objects in order to notice their motion-resolution limitations. The smaller the display, the less time you have to notice those limitations, because fast-moving objects disappear off the edge of the screen more quickly.

Be warned: these are only example numbers/estimates, and they will vary greatly depending on a human's maximum eye-tracking speed:

- It's possible that smartphone displays become "retina refresh rate" at slightly below 1000 Hz for most people
- It's possible that desktop monitors become "retina refresh rate" at about ~2000 Hz for most people
- It's possible that 10,000 Hz is only needed as a retina refresh rate for full FOV displays like virtual reality headsets (or first row seating on a 16K-resolution IMAX/OMNIMAX screen)

The smaller the FOV and the lower the resolution, the lower the "retina" refresh rate beyond which no further benefits are derived.
theTDC wrote:
09 Mar 2021, 20:29
Frankly Moore's law has been dead for a very long time. While it's possible we might get 10x the GPU power by the end of the decade
We don't need 10x GPU power. We only need Frame Rate Amplification Technology. My Oculus Rift VR headset laglessly converts 45fps to 90fps using ASW 2.0, a depth-buffer-aware "smart 3D interpolation-like algorithm" that doesn't add latency because it is not a black-box interpolator. It just needs motion vectors from 1000Hz controllers to create the intermediate frames.

Remember, when you watch Netflix, the stream contains only about 1 true frame per second, and roughly 23 "predicted frames" in between (essentially "fake frames", if you want to use that lingo). It's not a bad wolf like TV interpolation, because it's not black-box prediction: the movie file knows what's ahead, and can accurately reconstruct the in-between frames using math formulas. Humans can't tell.

Frame Rate Amplification Technology: No Longer Need To Render EVERY Frame

Video compression uses the I-frame, B-frame and P-frame system of H.264. Even HEVC has something similar:

Image

What's magically happening is that we're reworking the GPU workflow to add perceptually lossless frames in between original GPU rendered frames.

By the end of the decade (or the next, at latest) we will have GPUs with dedicated frame-rate-amplification silicon that can laglessly convert 100fps to 1000fps without artifacts. Even with processing overhead, it will still be less laggy than the original 100fps, because access to the original 1000Hz+ movement data removes the ugly soap-opera-effect guesswork of grandpa's LCD television. As flawless as those Netflix "fake frames".

2:1 frame rate amplification has already arrived in most PC-based virtual reality headsets, such as Oculus' Asynchronous Space Warp version 2.0, which is frankly amazing for what it does. But 2:1 needs to become 5:1 and 10:1 to get 1000fps Unreal Engine 5 by the end of this decade.

If you are fascinated by this, make sure to read Frame Rate Amplification Tech -- more frame rate with fewer transistors simply by reworking the GPU workflow.
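To make the concept concrete without pretending to show real ASW/DLSS internals, here is a deliberately oversimplified Python sketch (all names and numbers are my own toy assumptions): the expensive simulate-and-render step runs at 100fps, and a cheap extrapolation step fills in the other 9 of every 10 output frames from known motion vectors rather than black-box guessing between two finished frames.

Code: Select all

# Very simplified sketch of the frame-rate-amplification idea (a toy, nothing like
# real ASW/DLSS internals): the engine fully simulates + renders at 100 fps, and a
# cheap extrapolation step produces the in-between frames from known motion vectors.

SIM_FPS = 100
OUTPUT_FPS = 1000
AMPLIFY = OUTPUT_FPS // SIM_FPS          # 10 output frames per simulated frame

def simulate(t):
    """Expensive step: full game logic + render. Returns object position and velocity."""
    return {"pos": (t * 300.0, 0.0), "vel": (300.0, 0.0)}   # 300 px/s rightward motion

def extrapolate(frame, dt):
    """Cheap step: shift the last real frame along its known motion vector."""
    px, py = frame["pos"]
    vx, vy = frame["vel"]
    return {"pos": (px + vx * dt, py + vy * dt), "vel": (vx, vy)}

output = []
for i in range(3):                        # three real frames -> 30 displayed frames
    real = simulate(i / SIM_FPS)
    output.append(real["pos"])
    for k in range(1, AMPLIFY):
        output.append(extrapolate(real, k / OUTPUT_FPS)["pos"])

print(len(output), output[:4])            # 30 frames; positions advance every 1 ms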
theTDC wrote:
09 Mar 2021, 20:29
our single thread CPU perf has improved just 13x since 2001, according to passmark, and I doubt it will pull another 13x increase in the next 100 years.
You're thinking old-fashioned. Frame rate amplification avoids this. We only need 100fps to get 1000fps. Today, my VR headset is already converting 45fps to 90fps almost flawlessly: no soap opera effect, no parallax artifacts, no super-lag feel. Version 2.0 uses the Z-buffer to eliminate the old parallax artifacts too -- technology that is already on the market! VR is ahead of desktop gaming in frame rate amplification, for those who have been living under a rock about these new methods of free frame rate.

In fact, I am cited on Page 2 of a research paper, Temporally Dense Raytracing by NVIDIA -- a ray tracing version of frame rate amplification.

We've only begun to scratch the surface of frame rate amplification. We stop thinking like Isaac Newton and start thinking like Albert Einstein -- GPU workflows are being reworked to say goodbye to "all frames must be original frames", as long as we can make the extra GPU frames as flawless as Blu-ray "fake frames" (Blu-ray also uses estimated frames in between the original I-frames), without adding input lag. Frames visible for 1/1000sec can be estimated/extrapolated/interpolated/reprojected/whatever, as long as they look perfect to human eyes. Imagine it as "retina interpolation", except it's not interpolation (more accurately, "extrapolation") because it's not a black box: the frame rate amplifier knows about the 1000 Hz+ controller data, so it's not like it doesn't know the position of the next frame. So it doesn't need to be laggy and ugly like grandpa's Sony Motionflow of yesteryear.
theTDC wrote:
09 Mar 2021, 20:29
On top of that, when we start getting into 100 microseconds territory, we might have to completely re-architect our chips, since the memory latency time between the GPU and CPU is many multiples of our refresh rate. Heck, a single cache miss that takes ~100ns is an entire 0.1% of our frametime. Yikes.
We do need to rearchitect, but we only need to add frame-rate-amplification silicon, plus the necessary APIs to feed it properly (movement data, controller data, physics data) so it can create the intermediate frames properly. We may still have, say, 2ms of processing latency to generate 1000fps, but 2ms is less lag than 100fps (10ms per frame). You see? So the lag goes down -- just not down to 100 microseconds -- as we pipeline those framerate-amplified frames concurrently over parallelized multicore frame-rate-amplification silicon.

Do you understand what I am getting at? We people are thinking outside the box these days :D
theTDC wrote:
09 Mar 2021, 20:29
I really don't think that any video game simulation is going to be able to be run at 10,000 Hz
We don't need 10,000 Hz for desktop monitors -- 10,000 Hz is only for 180-degree screens because of the Vicious Cycle Effect.

The bigger the screen's FOV, the higher the retina Hz; the smaller the FOV, the lower the retina Hz, because there's less time to eye-track objects before they fall off the edge of the screen. Resolution plays a role too: the higher the resolution, the easier the limitations are to see. It's only when you've got retina-resolution 180-degree-FOV displays that we need 10,000+ Hz to make motion indistinguishable from real life.

Besides, 10,000 Hz will probably only be possible via parallelized frame rate amplification (e.g. large supercomputers connected to a VR headset). A main GPU would generate the original frames, while daughter GPUs process the individual framerate-amplified frames, with a high-speed supercomputer bus accepting the motion vector data from controllers (mouse, head tracking, keyboard, game physics, etc.) to 3D-extrapolate each frame perceptually flawlessly. Imagine one main controller GPU generating 100fps data, and 9 additional GPU cards parallelizing the in-between frames in a shingled-render manner. You may still have the latency of the original 100fps frame rate that way, but it's already doable at the enterprise level with shingled rendering (+1ms offsets per parallelized GPU, running on a large custom-made motherboard designed for enterprise applications), though it requires custom programming today.

Tomorrow, it all filters down to dedicated amplification silicon on the same GPU (fewer transistors per frame than SLI), so doing 10:1 amplification will eventually only require two to three times as many transistors as the original frame rate -- there are tricks already happening. To reduce transistor count even further, some of this will involve neural networks and artificial intelligence. Eventually some of this will filter down to consumer GPUs, increasing frame rate amplification ratios to approximately 4:1 or 5:1 (up from 2:1), and then to 10:1.

At the retail level, within 10-20 years we should be able to achieve 1000fps frame rate amplification from 100fps. The way you replied to my post suggests that you didn't read the bottom half of Frame Rate Amplification Technology, so please read that first before replying to this post -- amazing stuff is already happening in laboratories.

The important thing is that monolithic frame rendering is going the way of the dinosaur in the next 1-2 decades, replaced by a multilayer parallelized frame rate amplification system.
theTDC wrote:
09 Mar 2021, 20:29
at least if it's at the level of our modern games, no matter how far into the future we go. So if it ends up being that there is quite a worse experience with, say, 1,000 Hz monitors at 0.1ms persistence, versus outright 10,000 Hz monitors, I think that's just going to be too bad for gamers. And that's assuming they can even run some modern-type game at 1,000 Hz.
Strobed 1000 Hz will probably still be useful for 180-degree VR indeed, as stroboscopic effects will be extremely minor and only during ultra-fast-motion situations.
theTDC wrote:
09 Mar 2021, 20:29
Personally I don't really care about games, let alone modern ones, but I do care about real time 3D rendering, for things like architectural preview, interactive storytelling, and many others. I think there's lots of potential for those types of things, since the CPU simulation side is so trivial, so getting well in excess of 1,000+ Hz is really not that difficult.
As long as there's a high-frequency motion vector API to have the CAD tell the GPU to "rotate this scene 0.125 degrees along the Z axis at these co-ordinates", the GPU can frame-rate-amplify it without needing an original re-render (redrawing all those polygons/triangles).

Beyond the 2030s we are not going to be fully rendering every single frame anymore at those frame rate / refresh rate stratospheres; it'll have more in common with a multi-tier hierarchy of macroblock video compression (except it'll be perceptually lossless, like a 36-bit-color 300+ megabit/sec HEVC digital cinema file). We can't scale anymore with classical monolithic polygonal re-rendering of every single frame, but we still need to do that at fairly high sample rates (100-200fps).

No Longer Need To Render EVERY Frame: Perceptually Lossless & Lagless "Predicted Frames" Are Now Possible in Real Time GPU-3D (metaphorical 3D-GPU equivalent of Netflix and Blu-Ray predicted P-Frames and B-Frames)

You see, 200fps and 5ms timescales also mean the intermediate frames can now be perceptually losslessly filled in very, very quickly with some algorithms we've already successfully come up with. Some methods require display-side GPUs (to eliminate the bandwidth explosion on the display cable), while other methods simply rearchitect the pipeline and use descendants of perceptually lossless display stream compression to get ultra refresh rates at ultra resolutions from computer to display.

It will take about ~10 years to become reality at 1000fps scales, but there's nothing inherently stopping further progress. It's kind of the framebuffer-workflow equivalent of going from single-core processors to multi-core processors -- videogames will still render at only 100-200fps(ish), but it'll be UE5-quality at 1000fps+ on the display because of the new frame rate amplification pipelines.
theTDC wrote:
09 Mar 2021, 20:29
Frankly it's outright trivial. Once we spawn an avatar in there, and start with some rudimentary collision detection, interactions (like opening a door), it should still be very easy to run in excess of 1,000 Hz. With ~2004 era graphics, I think such scenes/workloads are already limited by the monitors we have to display them.
TL;DR: Good News: There's an eventual engineering path to UE5+ graphics at 1000fps+, see above

(But yes, we'll still have to use strobing for now. You'll just be upgrading again in a decade.)
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


theTDC
Posts: 25
Joined: 09 Mar 2021, 00:13

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by theTDC » 12 Mar 2021, 17:13

Let me say once again, that these replies are excellent and very informative. I hadn't seen that particular TestUFO sequence with the thin vertical bars, but it illustrates the problem extremely well. Sometimes a picture, or short video, speaks 1000 words indeed. Now onto the fascinating FRAT discussion.

First, let's look at good old-fashioned interpolated-then-rendered frames -> Upsides/Downsides
I have also become mostly convinced about FRAT, with a few hangups. The first thing I'd like to mention is that, if priorities were different in the games industry, we could already have 1,000+ FPS real-time rendering. Most games already have unsynced framerate/logic rates. By that I mean that if the game logic runs at 100 Hz (usually 60 or 30 Hz, but these numbers are easier), there will be a graphics thread that runs at a potentially arbitrary rate, not even necessarily a multiple of the logic Hz. So if the game logic is running at 100 Hz, we might have 140 FPS rendering.

The way this is done is through interpolation. The CPU figures out where the characters are supposed to be, and what part of their animations they're supposed to be in, and then interpolates to that point. The animation part is quite hardware-intensive, scaling linearly with the number of vertices being animated in the scene, and is a serious limiter on how fast our game can run. The motion interpolation itself is not very intensive. Some games, such as (I believe) Halo 5, actually run the animations at 30 Hz while the scene is rendered at 60 Hz along with the physics/logic, lowering latency at the expense of looking really weird. And splitting the work up into multiple threads can only go so far, because there just aren't that many threads on many people's CPUs. Until you start getting ~8-core CPUs at a very minimum, you won't get an appreciable lowering of animation time, even assuming you can split the load well across all cores.

Rendering frames with old animations is really distracting, and not a very pleasant experience. However, if it were really important to game developers, they would drop the vertex count on the models drastically, re-designing the game/visuals if need be, and we could do this fairly easily. This basically solves 99% of the problem of low framerate. Yes, we can get some corner-case issues with the physics simulation not matching the framerate, leading to incorrect interpolation, and other things of that nature, but basically this already-existing technology would, if priorities were different, allow us to have our 500+ Hz games. (A bare-bones sketch of this loop follows below.)
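Here's the bare-bones version of that loop in Python, the usual fixed-timestep-plus-interpolation pattern (a sketch, not any particular engine's code; the 240 Hz sleep is just a stand-in for waiting on the display):

Code: Select all

# Bare-bones fixed-timestep game loop with render-side interpolation, the pattern
# described above: logic at 100 Hz, rendering as fast as the "display" allows.
import time

LOGIC_DT = 1.0 / 100.0      # 100 Hz simulation

def simulate(state, dt):
    # toy "physics": constant-velocity motion
    return {"x": state["x"] + state["vx"] * dt, "vx": state["vx"]}

def render(prev, curr, alpha):
    # draw at a position blended between the last two simulated states
    x = prev["x"] * (1.0 - alpha) + curr["x"] * alpha
    print(f"render at x = {x:7.3f}")

prev = curr = {"x": 0.0, "vx": 300.0}
accumulator = 0.0
last = time.perf_counter()

for _ in range(480):                        # stand-in for "while running"
    now = time.perf_counter()
    accumulator += now - last
    last = now

    while accumulator >= LOGIC_DT:          # run logic in fixed steps
        prev, curr = curr, simulate(curr, LOGIC_DT)
        accumulator -= LOGIC_DT

    alpha = accumulator / LOGIC_DT          # how far we are between logic ticks
    render(prev, curr, alpha)               # render rate limited only by the display
    time.sleep(1.0 / 240.0)                 # stand-in for waiting on a 240 Hz display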

Even without this, at high enough Hz this problem is drastically lessened. After all, just go play Half-Life 2 nowadays on some modern hardware. It ran at a constantly locked 300 FPS on my GTX 1060 + Core i3-4100 setup, and was an excellent experience. So I take back my original point made a few posts above about needing 10,000 Hz simulations, the simulation can run at drastically lower Hz than the rendering, and this is 99% as good as running the whole program at 1,000+Hz. I was simply mistaken.

As an aside, running Half Life 2 on my 1080p 120Hz lightboost AOC monitor was the first time where I could stand vsync on. As someone extremely sensitive to input lag and motion blur, 120 Hz seemed to cross that bare minimum point for me, where the improved visual fidelity was worth the extra lag.

I guess what I'm trying to say is that the good old fashioned "just render more frames" is a completely valid solution. Compared to a pixel shader interpolation solution, it has these tradeoffs:

1) Lowered maximum "fidelity" in the form of fewer vertexes for both CPU animations/GPU rendering. Also lower texture resolutions, and most probably older rendering algorithms. Ultra high resolutions (4,320p, 2,160p) would also almost certainly be sacrificed.

Personally, I really don't care at all about these tradeoffs, so this solution is fine for me. I'm old enough to remember N64 games, which ran at such low resolutions, with such low-polygon models and low-resolution textures, that you legitimately often could not visually understand what you were looking at. At that point, more fidelity was the transition from "I don't know what I'm looking at" to "I do know what I'm looking at." That happened back in about ~1998 on the PC side, and 2001 on the console side. Nowadays the tradeoff is, quite frankly, trivial "this reflection is more shiny" nonsense. I'm not trying to shame anyone who cares about this, I'm just saying that I really couldn't care less.

2) Possible physics/input -> interpolation mismatch.

To take the extreme, if we had our simulation running at just 1 Hz, but the interpolation running infinitely fast, we could have a situation where what should happen is a tennis ball hits the ground a half second after our last sim, then half a second later we are back where we started. Our renderer would progressively interpolate the tennis ball through the ground. Then the next frame it would "snap" back up to the bounce position.

A similar error would be if the tennis ball was at the apex of a bounce, and had 0 velocity. Interpolated frames would simply keep it where it is. Honestly, these are not really very serious issues with a reasonable simulation Hz, such as 100 Hz, but they do need to be kept in mind.

Having said that, our pixel shader interpolator has its own issues. In the same example, our pixel shader would have just kept the tennis ball in the exact same position the whole time, despite this also being incorrect. if we really thought this class of errors was better, we could just run the previous solution a frame delayed and work that way, so it's sort of a moot point.

It is worth mentioning, both solutions are better than no interpolation, and illustrate the fundamental problems with extremely low physics simulation. These should be acceptable issues with realistic physics simulation Hz.

2.1) In fact, with our tennis ball example above, the closest solution we can get is only through CPU interpolation + full re-render. To get truly accurate solutions, we need to simulate one frame ahead, and keep track of all velocity changes/rotation changes, and the timestamps for these changes. Only then can we get truly accurate interpolation between the two. A pixel shader solution simply cannot possibly accurately simulate something as simple as a tennis ball hitting a wall, then abruptly having an enormous velocity/spin change. It's simply not possible to do this just by comparing frames, there is not enough information.

Now, technically we're still not all the way there, because we're assuming static velocity until a collision. The tennis ball is accelerating every infinitely small moment in time, but this solution is a whole lot closer, and solves 99.9% of the problem with a sufficiently high-Hz base simulation and not-extremely-high acceleration. A tennis ball under Earth's gravity will be simulated extremely well here.
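Here's the tennis ball example as a few lines of Python with toy numbers, comparing naive interpolation between two simulated states against interpolation that also carries the recorded collision timestamp:

Code: Select all

# The tennis-ball example in code (toy numbers): naive interpolation between two
# simulated states misses the bounce entirely, while interpolation that also carries
# the collision timestamp reconstructs it. Purely illustrative.

SIM_DT = 0.1               # 10 Hz simulation, exaggerated to make the error obvious
BOUNCE_AT = 0.05           # collision recorded by the simulation, halfway between ticks
Y0, V_DOWN, V_UP = 2.0, -40.0, 40.0
Y1 = 2.0                   # ball is back at y = 2 by the next simulation tick

def naive_lerp(t):
    alpha = t / SIM_DT
    return Y0 * (1 - alpha) + Y1 * alpha          # stays at 2.0 the whole time

def event_aware(t):
    if t < BOUNCE_AT:
        return Y0 + V_DOWN * t                    # falling
    return V_UP * (t - BOUNCE_AT)                 # rebounding from y = 0

for t in (0.0, 0.025, 0.05, 0.075, 0.1):
    print(f"t={t:5.3f}s  naive={naive_lerp(t):4.1f}  event-aware={event_aware(t):4.1f}")
# naive: 2.0 at every sample (no bounce visible); event-aware: 2.0 -> 1.0 -> 0.0 -> 1.0 -> 2.0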

3) Upside: Perfect rendering without artifacts.

While our positions/rotations for our entities can be incorrect, we will never incorrectly render the frame itself. We don't have to worry about cases where x polygon is perfectly behind y polygon one frame, and is discarded from the z-buffer data. We don't have to worry about multiple polygons rotating in such a way that they screw up our PS interpolator. Unless there is drastic update to rotation/movement within that 10ms window, whatever frame is rendered will be rendered perfectly.

4) Upside: Zero latency.

While there is effective latency whenever our predicted motion (which assumes things continue along the same path) turns out to be incorrect, each frame can be displayed immediately. We don't have to render two frames and then interpolate between them. While that would only be 10ms of extra latency for a 100 FPS experience, it is worth mentioning. Our "perfect" solution has one frame of latency, but is a 99.9% accurate interpolation solution.

5) Upside: Perfect response to sharp camera movement and other artifact-prone cases.

Since we aren't interpolating the frames, only the movement, we never have issues where the interpolation is obviously garbage, like we would with a pixel shader interpolator. Imagine an in-game cutscene where the camera suddenly snaps to a new position. There's absolutely no way to properly interpolate this in our pixel shader. It's just flat out going to give you garbage results, no matter how much Z-buffer information you retain.

I guess ultimately, if we compare the two interpolation solutions, it can accurately be summed up as a tradeoff:

1) Drastically lower fidelity per frame, in the case of the CPU interpolation + re-render technique already used in almost every game.
2) BUT, drastically lower potential "garbage" results.
3) And potentially (much?) better overall results, due to necessary imperfections in the PS solution. We would need to see side by sides to truly compare, and it would probably depend quite a bit on the scene/camera movement/etcetera.

As I mentioned earlier, I really don't care at all about the lower technical fidelity in the year 2021, because I think we've long put the diminishing marginal returns in our rearview mirror. On the other hand, I hate low framerates with a passion, for all the reasons possible. Either way, I'm convinced now that we already have the hardware to get ultra high framerates. What we need now are the monitors, and possibly some slight hardware re-architecting, as you mentioned, to overcome the Hz limitations of sending frames to the monitor.

I have an extremely negative opinion of the video game industry, and in fact most of the hardware industry as well. We could easily have gotten this 10+ years ago, but the rendering/art tradeoffs required for ultra high framerates wouldn't look as good in screenshots, so they wouldn't sell as many games.

That is, I believe, far more important to change than any specific engineering issues.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by Chief Blur Buster » 13 Mar 2021, 16:49

theTDC wrote:
12 Mar 2021, 17:13
That is, I believe, far more important to change than any specific engineering issues.
Before I reply to your post, which seems to stick to the old-fashioned concept of interpolation, I would like to disambiguate the word "interpolation". Remember, "interpolation" is NOT the only method of frame rate amplification.

You should include the entire universe of frame rate amplification technologies, both spatial-based (e.g. DLSS) and temporal-based (e.g. ASW 2.0), neither of which meets the dictionary definition of classical interpolation.

There are several techniques, all different, that do very different things from each other (a minimal sketch contrasting the first two follows the list):

- Interpolation
- Extrapolation (doesn't use interpolation)
- Reprojection (doesn't use interpolation)
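As a minimal sketch of the key practical difference (the function names and numbers are my own, purely illustrative): interpolation cannot output anything until the next real frame already exists, while extrapolation works from the last known state alone.

def interpolate(prev_pos, next_pos, t):
    # Requires next_pos -> must buffer/wait for frame N+1 (adds lookahead lag).
    return prev_pos + (next_pos - prev_pos) * t

def extrapolate(prev_pos, velocity, dt):
    # Only needs the last known state -> can be displayed immediately.
    return prev_pos + velocity * dt

print(interpolate(0.0, 10.0, 0.5))    # 5.0, but only once frame N+1 exists
print(extrapolate(10.0, 10.0, 0.05))  # 10.5, available right away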

Also, having been to many conventions, I have already seen a few implementations of artificial intelligence in interpolation and extrapolation technologies. Some of these AIs are already doing things loosely like "...that looks like a brick texture, let me real-time-photoshop a sharper brick texture in its place..." and some of this is already built into NVIDIA DLSS Version 2.0.

The Old-Fashioned Interpolation Black Box
Also, interpolation used to be a black box in the past: an interpolation device in a television (e.g. Sony MotionFlow) doesn't know the original motion vectors, doesn't know the physics formulas, doesn't know the depth buffer, and doesn't know the controller data. Interpolation, by dictionary definition "the insertion of something of a different nature into something else", is therefore often guesswork. You get artifacts, parallax-reveal defects, unfixed motion blur for video (e.g. motion gets smoother but the original camera/source-based motion blur is unfixed) creating unnatural motion known as the "soap opera effect", sudden frame rate changes when the interpolator is unable to figure out the motion (e.g. random motion versus panning motion), etc. Interpolation requires lookbehind and lookahead frames to guess the in-between frames. Which means it has to buffer frames, and output lagged intermediate frames.

Non-Black-Box Methods of Frame Rate Amplification
Extrapolation/reprojection only requires lookbehind, plus smart knowledge of the current state. This is not called "interpolation" because it doesn't need to know the next frame. The engine can be fed telemetry data (controller data, position data, physics data, movement data) in realtime at a sample rate higher than the frame rate, e.g. at 1000 samples per second. It simply modifies the previous frame to match the new data, with shortcuts that avoid redrawing the whole scene. Most movements in 1/200sec (at 200fps) are very tiny, so reprojection can be done artifactlessly -- if you've ever seen an Oculus Rift with ASW 2.0, then you'll realize frame rate amplification is very promising, despite what you say. For example, you only need geometry/parallelogram/rectangle stretching/rotating/zooming for most graphics of most intermediate frames.

Lookahead Lag is not necessary for frame rate amplification
So the extrapolator doesn't need to add the input lag required by interpolation algorithms. For example, if you're mouseturning 2 pixels in 1/1000sec, the reprojector simply has to scroll the whole screen two pixels sideways. If a little bit of parallax reveal occurs, it simply fills the revealed area with learned data from a history of previous frames, using the reprojector's knowledge of the Z-buffer to know what parallax got revealed. AI is sometimes used to automatically "photoshop" the reveal data, as is already done today in DLSS 2.0 as well as in some of the world's best interpolators (example: 15fps converted to 60fps) -- except no lookahead frame would be needed for extrapolation/reprojection techniques. Now, the requirements of frame rate amplification at 200fps->1000fps are in theory less AI-compute intensive (because of smaller movement shifts per frame) than the requirements of frame rate amplification at 15fps->60fps. Also, yes, physics is compute-intensive, but the majority of physics is less intensive than a full GPU re-render. Besides, you can just offload it to a second GPU that can handle more complex physics -- e.g. a hybrid "SLI" where one card does GPU renders and the other card does frame rate amplification (including more exact physics calculations). NVIDIA is likely already putting frame rate amplification silicon increasingly on the same GPU for the purpose of increasing frame rate.
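Here's a heavily simplified sketch of that mouseturn reprojection idea (my own toy code using NumPy, not ASW/DLSS internals): shift the previous frame by the small per-millisecond camera movement, and fill the thin strip of newly revealed pixels from a history buffer instead of re-rendering the scene.

import numpy as np

def reproject_yaw(prev_frame, history_frame, shift_px):
    # Shift the whole previous frame sideways by the tiny camera yaw movement,
    # then borrow the newly revealed strip from older frames (or AI inpainting).
    h, w, _ = prev_frame.shape
    out = np.empty_like(prev_frame)
    out[:, shift_px:] = prev_frame[:, :w - shift_px]   # scrolled image
    out[:, :shift_px] = history_frame[:, :shift_px]    # parallax-reveal fill
    return out

prev = np.zeros((1080, 1920, 3), dtype=np.uint8)
hist = np.full((1080, 1920, 3), 128, dtype=np.uint8)
frame = reproject_yaw(prev, hist, shift_px=2)   # a 2-pixel turn in 1/1000sec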

Artificial Intelligence: Like smart real-time photoshopping to make intermediate frames look perfect to humans
Another example: these AI-based frame rate amplification technologies would know the laws of physics and expect that a ball needs to bounce fully (based on recognizing that it is a ball object). Sure, there will be complexities if you dump a truck full of baseballs/basketballs and need to run a physics emulator on all moving objects, but it's mostly algorithmic refinement at this stage. Stuff that happens for only 1/100sec is less important than stuff that happens for 1/5sec. As long as physics is emulated sufficiently accurately, things that happen for a brief flash will usually not be noticed by a human. So you just need a sufficiently high sample rate (e.g. 100fps or 200fps), then you can frame-rate-amplify beyond that without objectionable physics artifacts, since you're not going to easily see whether a 100mph baseball bounces at its target or 5 inches from its target -- the speed and suddenness would be too fast. So we only need to frame rate amplify sufficiently, and emulate physics accurately enough to a sufficiently fine granularity. For critical animations, the algorithms/AI can figure out the rest.

Physics errors are far less visible in individual 1/1000sec frames than in 1/50sec frames
Play a high-speed 1000fps video one frame at a time -- most adjacent frames (1/1000sec apart) contain only tiny movements and very slow physics between frames, compared to low-frame-rate video. The finer the frame granularity, the easier it is to frame-rate-amplify a pair of frames without artifacts, and a lot of rendering-shortcut opportunities become available without showing objectionable artifacts (e.g. less visible than other changes, like switching between 4X AA and 16X AA, or between 4X and 16X anisotropic filtering). The total compute power is definitely higher than the original 100fps, but less than rendering 1000fps one frame at a time -- unless you're doing true supercomputer-simulator physics, which is not what we need here. This is generation for real-time motion, not generation for slow motion, and we don't need to show bounce flaws that are visible for only, say, 1/1000sec. Imagine throwing a superball at a wall at 100mph: few will see the direct bounce itself -- just a blur (because it more resembles the fixed-gaze, moving-object circumstance; the ball is too fast to eye-track) -- and eye tracking does not change direction instantaneously in 1/1000sec.
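Some quick back-of-envelope numbers (the example values are mine) showing how much smaller adjacent-frame movement gets at higher frame rates:

# The same on-screen motion moves far fewer pixels between adjacent frames
# at 1000fps than at 50fps, so per-frame shortcuts are far less visible.
speed_px_per_sec = 2000          # a fast pan across a 1920-wide screen
for fps in (50, 100, 1000):
    px_per_frame = speed_px_per_sec / fps
    print(fps, "fps ->", px_per_frame, "px between adjacent frames")
# 50 fps -> 40 px, 100 fps -> 20 px, 1000 fps -> 2 px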

We just need physics errors to be below the human noticeability threshold
The old fighter jet pilot tests found that the momentary-flash identification limit was about 1/255sec (give or take): an object was flashed briefly for ~1/255sec (without brightness compensation), and anything briefer could not be identified. So an imperfect ball bounce within a 1/1000sec frame may not be visible. Obviously, variables matter (e.g. brighter flashes, like 10,000 nit flashes, can be visible at briefer durations), but the test was essentially a blank blue sky, then an airplane appearing in the sky for ~1/250sec, then a blank blue sky again, and you have to try to identify the airplane (F15? F22? MiG-28? Etc.) that was visible for that brief period. The fighter pilots couldn't identify anything from a briefer flash.

Obviously, some websites try to incorrectly spin the old "fighter pilot test" as "humans can't see above 255fps" -- but that's a distortion. That test by the Air Force was just a momentary-visibility test (which applies here, e.g. momentary visibility of physics flaws). Other things are more noticeable, like stroboscopic effects or motion blur effects, which require multiple consecutive frames of usually consistent motion -- unlike a single frame of physics errors, such as a ball incompletely bouncing.

Especially since eye tracking can't change direction that fast (the physical momentum of a rotating eyeball needs to decelerate and reaccelerate in the other direction, etc...). We just need frame rates to be high enough above momentary human identification thresholds, like 250fps, then frame rate amplify beyond that even if physics is imperfect/incomplete.
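Quick arithmetic on those thresholds (the ~1/255sec figure is the one quoted above; the frame rates are just example values of mine):

identification_threshold_s = 1 / 255      # ~3.9 ms momentary-flash limit quoted above
for fps in (50, 500, 1000):
    frame_duration_s = 1 / fps
    visible = frame_duration_s >= identification_threshold_s
    print(fps, "fps:", round(frame_duration_s * 1000, 2), "ms per frame,",
          "above threshold" if visible else "below threshold")
# A single flawed 1/1000sec frame sits well under the ~1/255sec limit.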

Now, that being said, this threshold can easily be lowered to 100fps or 200fps for most average laypeople, which is much easier -- today's top-of-the-line GPUs are easily able to do 100fps in a lot of games now, for example, and it's not a longshot that these frame rate territories (100-200fps) become the baseline point of future frame rate amplification technologies (including non-interpolation-based ones).

This isn't 30fps "interpolated/extrapolated/reprojected" to 60fps. Think conceptually: 200fps turned into 1000fps -- any brief-visibility artifacts can be optimized out with a variety of shortcuts and new physics rendering workflows. You may still need to emulate simpler physics at least down to the frame rate amplification granularity (e.g. a ball bouncing off a moving vehicle), but the whole scene doesn't need to be re-rendered at these frame rate stratospheres for the purposes of frame rate photorealism to a human.

So we don't need to emulate physics to a finer granularity than the human can detect; anything that moves faster than the eye can follow can instead be simulated approximately (or even as a generated 1/1000sec GPU motion blur). If something is noticed, it will be optimized out as needed over time (preferably by the GPU vendor, an API vendor like Metal, or an engine vendor such as Unreal Engine, so game developers don't have to worry about every single nuance). More shortcuts can be safely done without human perception with 1/1000sec (1000fps) frames than with 1/50sec (50fps) frames; you just need to make things accurate to what human eyes are capable of noticing.

You may notice flaws if you record the frame rate amplification and replay it in slow motion, but in real time it just ends up looking photorealistic (e.g. UE5) at extreme frame rates (e.g. 1000fps+). Frame rate amplification can safely use more rendering shortcuts for finer-granularity frames (1/1000sec) than for coarser-granularity frames (1/100sec): as long as continuous motion remains perfectly artifact-free (e.g. consistent motion vectors), the non-consistent motion (e.g. bounces, explosions, a dump truck dumping dirt, etc.) doesn't need to be emulated at a fine granularity. Also, various AI shortcuts become possible (e.g. "...AI: I've seen this ball before, and I know balls bounce, so let me save some physics processing power by giving you an analog motion-vector bounce path that I think the ball will follow..."), depending on the cost:benefit of AI silicon -- the speed of classical computations versus the speed of AI shortcuts. This could occur at the 1/100sec granularity (AI estimation of predicted physics paths until the next full-calculation physics update), while amplifying 100fps to 1000fps.

Again, this is not 2020s stuff, but more 2030s stuff when frame rate amplification silicon is able to do 5:1 ratio or 10:1 ratio of frame rate amplification.

Don't Discount Role of Artificial Intelligence in Frame Rate Amplification
We're now in an era where AGI (Artificial General Intelligence as smart as humans) is expected to arrive by approximately ~2040 (50% of AI scientists now estimate that timeline for AGI). You just need to read all the GPT-3 news and achievements to see that the writing is on the wall. Simpler intelligences are already built into NVIDIA GPUs, as neural-network-based processing is a germane part of both NVIDIA DLSS 2.0 and Oculus ASW 2.0.

AI is increasingly used in more gaming applications. The Oculus Quest 2 VR headset uses AI to track the whole room and to stitch its passthrough mode (4 cameras into one seamless 3D 360 video), which allows you to see the coffee table before you stub your toe -- it's got a neat Holodeck-style mode where you can safely walk around in VR until you get too close to the edges of the room or to your furniture, at which point the headset suddenly becomes transparent to the room (its 4 room-tracking cameras turned visible to your eyes). Its AI also does some pretty neat hand tracking -- controllers are optional in some games, and you can manipulate them using just your bare hands, with the cameras and the AI tracking your individual finger movements and rendering virtual hands in VR. You can contort your hands in VR the same way you contort your hands in real life -- even a middle finger, a rock-concert salute of horns, and most sign-language symbols map accurately from your real-life hands to your VR hands, all without a controller -- simply cameras + AI doing the processing at extremely low latency in real time (I can wave my hands pretty quickly and it doesn't lag like the old Microsoft Kinect -- it's 100x more accurate, on a built-in in-headset GPU).

Anyway, the point is that AI is increasingly used as part of gaming technologies, from NVIDIA DLSS 2.0 to hand tracking to future artifactless frame rate amplification. AI will be increasingly able to do "...this frame looks like it has artifacts, let me fix the artifacts in real time..." type of stuff (AI interpolators are increasingly able to do this). Some new high-end TVs I saw at CES 2020 had really kick-ass artificial intelligence interpolation that was light years beyond what I've seen in the past, achieving 120fps or 240fps HFR out of 30-60fps without any visible physics issues that I could tell.

We Look Ahead To 2030s Trends For Frame Rate Increases
Now, that being said, this is still the 2020s, and it will take another ten years to get 100fps -> 1000fps frame rate amplification at approximately UE5 levels without needing many more transistors per chip than today's RTX 3090. There's an incredible, bottomless bucket of additional 21st-century software optimizations waiting for today's gaming+AI graduates, if you major in the right stuff -- there are a lot of new algorithmic discoveries being made every day.

Falkentyne
Posts: 2793
Joined: 26 Mar 2014, 07:23

Re: Is There Any Disadvantage To LightBoost / Motion Blur Reduction?

Post by Falkentyne » 19 Mar 2021, 19:11

Chief has pretty much said everything that needs to be said and I can't add much without saying something I know Chief will find wrong.

Lightboost was designed for 3D glasses, which is why the colors are so bad. There is an intentional(?) color shift done for this. This was hardly the first 3D glasses technology. 3D Vision was first used on the Samsung 2233RZ, which was the first 120hz gaming-grade LCD on the market, and there were some other models also, but there was tons of sample-and-hold motion blur, since you were basically getting 60hz to each eye. But "strobed" 3D glasses tech goes far back -- you have to go as far back as the Wicked 3D glasses designed for Voodoo2 cards (and the CRTs at the time were phosphor strobed anyway). And then you had stuff like "Dactyl Nightmare" and other "3D" arcade games before the internet.

Benq blur reduction, in its initial form, was, from what I remember, a reverse engineering of Lightboost -- the same electronics were used, but without the terrible color and gamma shift. That's why the Vertical Total 1497-1502 range was significant, as this is what Lightboost used internally for accelerated scanout. Also, the monitor scaler reported a strange resolution in this mode -- I believe 1400x1050, which you wouldn't expect from this panel. Note that if you reduced the vertical total to 1481, you would get an "out of range" error on the OSD, although the picture would still appear. What's interesting is that the monitor scaler reported the resolution as 1920x1440, because the VT range was within the range of 2560x1440 (the monitor firmware actually HAS this resolution embedded in it, which is why it accepts VTs this high). But going up to VT 1497 causes the scaler resolution to be reported within "proper limits", rather than the previous behavior of increasing in "steps" as the Vertical Total went up.
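For anyone wondering why a large Vertical Total matters for strobing, here's a quick illustrative calculation (the ~1500 VT figure is from above; the 120Hz refresh and 1080 active-line numbers are my own assumptions): the larger total compresses the active scanout and leaves a longer blanking pause for pixels to settle before the strobe flash.

refresh_hz      = 120
active_lines    = 1080
vertical_total  = 1502          # the Lightboost-style VT mentioned above

frame_time_ms   = 1000 / refresh_hz                       # ~8.33 ms per refresh
scanout_ms      = frame_time_ms * active_lines / vertical_total
blanking_ms     = frame_time_ms - scanout_ms

print(round(scanout_ms, 2), "ms scanout,", round(blanking_ms, 2), "ms blanking")
# ~5.99 ms scanout, ~2.34 ms of blanking headroom for GtG settling + strobe flash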

Thanks to Chief Blur Buster, Benq made downloadable firmwares (which were quite difficult to flash) for their original Z series monitors, and added adjustable strobe phase and strobe duty, as well as a single strobe override for low refresh rates (60hz-85hz). Warning: this did NOT work at 50hz and could run the risk of damaging the monitor -- it wouldn't strobe at all but the backlight would still be driven at 1.8x voltage; usually it would just power cycle from protection. It was interesting back then to see how persistence affected blur reduction. You can check out the sticky in the Benq Strobing section for most of my findings, but Masterotaku found out a lot of stuff first.

Then Benq decided to join the dark side and remove the Single Strobe override when they switched scalers from Mstar to Realtek, which meant double strobing at 60hz (120hz timings). To this day I STILL do not know if this was an oversight or intentional, as the original XL2730Z didn't even strobe right at 100hz--it used the 120hz timings, causing an unusable monitor.

I'll never forgive them for that.

That being said, their current flagship, the Benq XL2746S, is probably the BRIGHTEST strobing TN monitor on the planet right now. The strobing power on this monitor puts the original Z series monitors to shame, faster than if The Rock's Rock Bottom met the Undertaker's Tombstone. And it can strobe at 100hz, 120hz and 144hz, via custom vertical total values, with absolutely NO crosstalk from the out-of-phase timing band, although the ghosting issue is still a challenge. At least there are 255 levels of overdrive, and the three persistent default levels (which depend on the refresh rate) have rather conservative OD levels, unlike Benq's past monitors, which turned into an inverse ghosting mess (there were ways to mitigate this on some models). There are still some leftover firmware bugs from Benq's legacy code, like high VT ranges (2250, etc.) simply giving a "no signal" error at 60hz-74hz, then suddenly working at 75hz, and 155-179hz having problems accepting a VT over 1400 without a "no signal detected" error, and then 180hz suddenly working perfectly with VT 1500. Silliness like that, as if there's some code doing checks at certain timings.

I think the only thing better quality and brightness wise is the Viewsonic XG270, which wouldn't have been possible without Chief Blur Buster. But we're waiting for the next iteration of electronics, which should bring 60hz full range single strobe back.
