Mechanism by Which BFI smoothes out 24FPS

John Hess
Posts: 5
Joined: 05 Apr 2023, 23:50

Mechanism by Which BFI smoothes out 24FPS

Post by John Hess » 06 Apr 2023, 00:19

I'm having a bit of difficulty wrapping my mind around this and I've been going through this site up and down. I feel like I can intuit why but I need a bit of help. Tell me if this makes sense...

The reason why 24fps video stutters harder on displays with extremely low GtG and long MPRT (like OLED) is that the long MPRT induces MORE blur (compared to a monitor with longer GtG), which, given the low frame rate, presents as stutter (thinking in terms of the stutter-to-blur continuum).

By reducing the MPRT with BFI, we are essentially reducing the blur, and thus reducing the stutter that the display adds.

BFI won't eliminate the inherent judder of 24fps (that's just its character), nor the content's own motion blur, but it will lessen the stutter (low-frequency motion blur) induced by MPRT.
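
For intuition, here's the standard persistence arithmetic as a minimal Python sketch (the speed and persistence numbers are illustrative, not measurements of any particular panel): perceived blur width scales with tracking speed times MPRT, so BFI shrinks the smear even though the frame rate stays 24fps.

Code:

# Eye-tracked motion: perceived smear width ~ tracking speed x persistence (MPRT).
# Illustrative numbers only.

def blur_width_px(speed_px_per_sec, mprt_ms):
    """Perceived smear width in pixels while tracking a moving object."""
    return speed_px_per_sec * (mprt_ms / 1000.0)

speed = 960.0                            # pixels per second of panning motion

# Sample-and-hold at 24fps: each frame persists ~41.7 ms
print(blur_width_px(speed, 1000 / 24))   # ~40 px of smear

# Same 24fps with BFI cutting persistence to ~10 ms
print(blur_width_px(speed, 10.0))        # ~9.6 px of smear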

In that sense, is BFI almost a "faux" slow GtG? Instead of slowly ramping to the new pixel color, it jumps to black first before going to the new pixel color? Obviously I'm abusing the concept, but is there a connection?

John Hess
Posts: 5
Joined: 05 Apr 2023, 23:50

Re: Mechanism by Which BFI smoothes out 24FPS

Post by John Hess » 07 Apr 2023, 11:22

I've been mulling over this a lot. I built a stress test of moving zebra stripes at 24fps and put it inside a 144fps stream with three full-screen black frames inserted per source frame (a 72 Hz triple-bladed shutter). I played it on my 144 Hz monitor and found that it did not smooth out the motion. Perfect 1:1 BFI will smooth motion, but, as predicted on this site, I ended up seeing triple edges when tracking the movement, and no discernible difference in stutter with a fixed gaze.
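
For anyone who wants to reproduce the timing, this is the refresh cadence I mean (a hypothetical sketch, not my actual test code; refresh_schedule is a made-up helper):

Code:

REFRESH_HZ = 144
SOURCE_FPS = 24
REPEATS = REFRESH_HZ // SOURCE_FPS        # 6 refreshes per source frame

def refresh_schedule(frame_index):
    """Yield (time_ms, visible) for each refresh of one 24fps source frame."""
    t0 = frame_index * 1000 / SOURCE_FPS
    for r in range(REPEATS):
        visible = (r % 2 == 0)            # visible, black, visible, black, ...
        yield t0 + r * 1000 / REFRESH_HZ, visible

# Each frame is flashed 3 times ~13.9 ms apart (the 72 Hz triple-bladed
# shutter), which is exactly why tracking eyes see triple edges.
for t, visible in refresh_schedule(0):
    print(f"{t:6.2f} ms  {'FRAME' if visible else 'black'}")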

I added rendered motion blur to the zebra pattern; the stutter softened a bit, but the same triple-image artifacts showed up in both the BFI and non-BFI versions.

So it would seem that BFI by itself DOES NOT smooth out 24fps.

But I couldn't get over the fact that on my Sony A80J, 24fps just looked better when I engaged Clearness under the Motion tab, Clearness being described by the TV as adding black frames.

So I got out my camera to shoot slow motion of the screens. The max I can do is 180fps (5.5 ms per frame, and it can shoot at 1/2000 shutter, so a 0.5 ms exposure), so a decent start. I built a 24fps flicker test that flashed between two distinctly different levels of gray each frame, along with a counter and some spinning clock graphics. Threw it on the 144 Hz monitor.

As expected, the frames switched sharply from one to the next. The GtG on the 144 Hz monitor is quite low.

Moved that video to my 60 Hz LCD flat-panel monitor (probably 15 years old). There I noticed two things. First, you could see the frames blend from one to the next (high GtG). But I also noticed a scanning artifact: not sharp, but a soft top-to-bottom band. This seems to make sense given a strobing fluorescent backlight.

Then I moved to the Sony A80J and tested the four different Clearness settings. With Clearness all the way off, I still noticed this top-to-bottom band; it was still a rolling scan. At the first level, the strobing became a little more pronounced. Level two and max Clearness looked like they introduced 3:2 pulldown and an even darker scan.

I'll have to do more formal tests.

Has there been any explanation of why a rolling scan/flicker would reduce the appearance of stutter on low-frame-rate material? Is it sort of like a long GtG?

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Mechanism by Which BFI smoothes out 24FPS

Post by Chief Blur Buster » 18 Apr 2023, 19:49

Hola,

We had a long Twitter DM conversation, which I think needs to be crossposted here.
I will ask for permission.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Mechanism by Which BFI smoothes out 24FPS

Post by thatoneguy » 19 Apr 2023, 13:10

I've said it before and I'll say it again.
A solution to the 24fps problem would probably be multiplying each frame in the source video file itself (with the trade-off of a bloated file size) before playing it back.
Most video games (especially old ones) have low animation frame rates, yet they have zero judder or double-image effect. 24fps video content should be able to mimic that effect if it follows the same logic.

John Hess
Posts: 5
Joined: 05 Apr 2023, 23:50

Re: Mechanism by Which BFI smoothes out 24FPS

Post by John Hess » 27 Apr 2023, 18:56

Chief Blur Buster wrote:
18 Apr 2023, 19:49
Hola,

We had a long Twitter DM conversation, which I think needs to be crossposted here.
I will ask for permission.
Please do post!

Since our conversation, I was able to test the 144 Hz monitor by getting it down to 48 Hz. Using my phone to record the transition from frame to frame, I noticed that the image scanned much more slowly in 48 Hz mode than in 144 Hz mode.

This is starting to lead me to think that it's the speed at which the frame changes that affects how smooth the motion is perceived to be.

In some of his articles the Chief writes about how longer GtG can mask stutter; the slow scan of the frame sort of acts like that.

In my mental analogy it's like the difference between a square wave and a sine wave. The square wave has a sharper edge, which is a higher-frequency edge. In photo manipulation we have the high-pass filter, which sharpens the image by isolating high-frequency changes of color from low-frequency ones; high frequencies indicate the edges of objects in the image. I'm imagining this works in the temporal sense as well: high-frequency changes between frames present as stutter, while low-frequency changes (like a long GtG or a long scan time) present with less stutter.
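
To put rough numbers on that intuition, here's a toy numpy experiment (my own sketch; the sample rate, ramp length, and 240 Hz cutoff are arbitrary choices): the same 24fps gray-level alternation with an instant transition versus a slow, GtG-like ramp.

Code:

import numpy as np

FS = 12000                             # samples per second of simulated time
PERIOD = FS // 24                      # one 24fps frame = 500 samples (41.7 ms)

def luminance_signal(ramp_samples):
    """Repeating two-level 24fps signal; each transition takes ramp_samples."""
    frame = np.ones(PERIOD)
    frame[:ramp_samples] = np.linspace(0.0, 1.0, ramp_samples)
    return np.tile(np.concatenate([frame, 1.0 - frame]), 12)[:FS]

for name, ramp in [("instant step (fast GtG)", 1), ("~8 ms ramp (slow GtG)", 96)]:
    spectrum = np.abs(np.fft.rfft(luminance_signal(ramp)))
    hf_share = spectrum[240:].sum() / spectrum.sum()   # share above 240 Hz
    print(f"{name}: {hf_share:.1%} of spectral magnitude above 240 Hz")

The sharp transition carries far more high-frequency content, which matches the square-wave-versus-sine-wave picture.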

Another factor I want to consider is the overall brightness of the screen. I'm sensing that lower-brightness screens (such as movie theater projections) will stutter less than high-brightness screens. Brightness is a component of the related phenomenon of flicker fusion.

So another way to mitigate stutter is to watch at lower brightness in a darker room.

John Hess
Posts: 5
Joined: 05 Apr 2023, 23:50

Re: Mechanism by Which BFI smoothes out 24FPS

Post by John Hess » 27 Apr 2023, 19:01

thatoneguy wrote:
19 Apr 2023, 13:10
I've said it before and I'll say it again.
A solution to the 24fps problem would probably be multiplying each frame in the source video file itself (with the trade-off of a bloated file size) before playing it back.
Most video games (especially old ones) have low animation frame rates, yet they have zero judder or double-image effect. 24fps video content should be able to mimic that effect if it follows the same logic.
I don't see why this would need to be done in the source video file; it could easily be done on the software or hardware side, by just holding the frame for multiple refreshes. As I understand it, that's how things work now.
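
A minimal sketch of that hold-the-frame mapping (hypothetical player logic, assuming a refresh rate that's an exact multiple of the frame rate):

Code:

REFRESH_HZ = 120
SOURCE_FPS = 24

def frame_for_refresh(refresh_index):
    """Which 24fps source frame to present on a given 120 Hz refresh."""
    return refresh_index * SOURCE_FPS // REFRESH_HZ

# First 10 refreshes show frames 0,0,0,0,0,1,1,1,1,1 -- each held 5 refreshes.
print([frame_for_refresh(i) for i in range(10)])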

But I'm now onto the idea that it's how the frames transition from one to the other that matters. Near-instantaneous transitions exacerbate stutter.

To my original question about the mechanism of BFI: I've discovered that certain BFI modes on my Sony turn its 120 Hz panel into a 60 Hz display (of the two, only one has flicker I find really objectionable). But in doing so, the image scans from top to bottom more slowly than at the higher refresh rate.

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Mechanism by Which BFI smoothes out 24FPS

Post by Chief Blur Buster » 27 Apr 2023, 20:18

Before I post the history, I have to correct some misunderstandings:
John Hess wrote:
27 Apr 2023, 18:56
Since our conversation, I was able to test the 144 Hz monitor by getting it down to 48 Hz. Using my phone to record the transition from frame to frame, I noticed that the image scanned much more slowly in 48 Hz mode than in 144 Hz mode.

This is starting to lead me to think that it's the speed at which the frame changes that affects how smooth the motion is perceived to be.
A slow scanout at low Hz is common for panels that sync to the signal's scanout velocity.

Scanout velocity has no effect on stutter.

This is evidenced by the fact that you can make a 48Hz mode that scans in 1/48sec, and a different custom 48Hz mode that scans in 1/144sec -- and there is no difference in stutter.
John Hess wrote:
27 Apr 2023, 18:56
In some of his articles the Chief writes about how longer GtG can mask stutter; the slow scan of the frame sort of acts like that.
Correct for GtG... but you are misinterpreting my writings as applying to scanout.
Slow scanout is not the same thing as slow GtG.

- You can have a slow scanout with fast GtG.
- And you can have a fast scanout with slow GtG.

Only the GtG component affects the motion blur.

It's possible to have the following:
1. A slow scanout with fast GtG is simply a very sharp-edged wipe downwards.
2. A fast scanout with slow GtG is simply a blurry-edged wipe downwards.
Only the GtG differences (and persistence) affect display motion blur.
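
As a toy model of why these are independent axes (illustrative numbers; row_brightness is just a sketch, not how any panel driver is implemented): scanout sets WHEN a row's transition starts, GtG sets HOW LONG the transition takes.

Code:

def row_brightness(row, n_rows, t_ms, scanout_ms, gtg_ms):
    """Brightness (0..1) of one row, t_ms after the refresh starts.
    Scanout delays the row's start time; GtG sets its ramp duration."""
    start = (row / n_rows) * scanout_ms
    progress = (t_ms - start) / gtg_ms
    return min(1.0, max(0.0, progress))

# Middle row of 1080, sampled 4 ms into the refresh:
print(row_brightness(540, 1080, 4.0, scanout_ms=6.9, gtg_ms=0.1))  # 1.0: sharp-edged wipe
print(row_brightness(540, 1080, 4.0, scanout_ms=1.0, gtg_ms=8.0))  # ~0.44: blurry-edged wipe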


Let's simplify things by using the same refresh rate and same scanout sweep.

Same 60 Hz, and same 1/60sec scanout velocity

Slow GtG on IPS LCD (same Hz, same scanout velocity)

[embedded video]


Fast GtG on OLED (same Hz, same scanout velocity)

[embedded video]

John Hess wrote:
27 Apr 2023, 18:56
In my mental analogy it's like the difference between a square wave and a sine wave. The square wave has a sharper edge, which is a higher-frequency edge.
GtG can work that way, but scanout does not work that way.

It is only a coincidental correlation that you are making by accident -- it is a common misunderstanding in the display community, one that I have had to correct (I do get paid to teach classrooms about display physics -- https://services.blurbusters.com ).

In summary, we did extensive testing and confirmed that scanout velocity differences (at the same GtG speed) have no effect on motion blur. They only affect motion geometry (e.g. the scanskew effect, www.testufo.com/scanskew ...)
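
The geometry part is simple arithmetic (a sketch with illustrative numbers): a top-to-bottom scanout draws the bottom of a moving object later than the top, so a vertical edge leans by speed times scanout time.

Code:

def scanskew_px(speed_px_per_sec, scanout_ms):
    """Horizontal lean, in pixels, of a full-height vertical edge in motion."""
    return speed_px_per_sec * scanout_ms / 1000.0

print(scanskew_px(960, 1000 / 60))    # 1/60 sec scanout: 16 px of lean
print(scanskew_px(960, 1000 / 144))   # 1/144 sec scanout: ~6.7 px of lean
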
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


John Hess
Posts: 5
Joined: 05 Apr 2023, 23:50

Re: Mechanism by Which BFI smoothes out 24FPS

Post by John Hess » 27 Apr 2023, 21:49

I really do appreciate all the time Chief has spent discussing this with me.

Still a lot to mull over, but I think my own whackadoo theories were making it unnecessarily and erroneously complicated.

Futuretech
Posts: 35
Joined: 11 Oct 2020, 23:52

Re: Mechanism by Which BFI smoothes out 24FPS

Post by Futuretech » 05 May 2023, 00:12

Perhaps I shouldn't post this, since I'm speaking mostly from theory. I can understand it intuitively, but phrasing it gives me trouble. Plus, it was answered here a few years ago.

The main answer was that the cost of harnessing this technology would make it prohibitively expensive. On top of that, I'm sure Mark Rejhon can explain it better: a display receives a serialized datastream that is scanned out line by line, and both classic and modern displays have simply been sped up so much that the sheer brute-force speed of the serial datastream delivers the image just-in-time, at ever-increasing rates.

I'm sure that part largely answers everything below. But I'd like to lay out my ideas anyway, because I'm surprised other scanning methods haven't been delved into much at a higher level, or even at a company or research level. We want more data and more speed, and the ever-faster datastream-to-scanline, pixel-by-pixel painting method remains the go-to. Potentially we will make it so fast that it just doesn't matter whether it's a columnscan or a globalscan/totalscan.

Sure, sweeping a column left to right, or flashing an entire image at once, is neat. But if the simplest and oldest approach is nearly as fast while shaving off cost and complexity at every level from BOM to consumer, then the obvious choice wins. The reality is that this has been the principal method of delivering data from the computer (CPU/GPU) to the monitor, and in fact only in the last decade did the LCD revolution return to nearing, and even reaching, CRT levels, despite what some who still treasure their CRTs say.

So, all in all, I'm going to present my theorems as simply theories. I think these technologies should be explored in depth, and it would make for an interesting examination of monitor/display technology.

I've never really seen much talk about it, so I'll reiterate it again:
How about a column-scan, or a global-scan/total-scan, of the image?
We've known since the analog days (and still today, with digital displays keeping analog-style scan principles in their firmware/software) that a monitor scans from top-left to the right, traveling downward to the bottom-right; that covers most monitors out there. Some oddball or scientific ones differ, intentionally or by manufacturing difference.

When studying this phenomenon we get the skewing mentioned above. So John Hess is looking for a mechanism to smooth out 24fps (41.7 ms per frame) content.

The particular issue mentioned above is skewing in both directions of motion, left to right and right to left.

But there are other angles too; I'm a little curious about motion blur, since CRTs have far fewer issues with it. Unfortunately, as the LCD revolution went by, the CRT never received any ultra-fast phosphors, such as a 2,000 Hz-class (0.5 ms, or 500 microsecond) type, which would produce microsecond-scale blur rather than the millisecond-scale blur of medium-persistence phosphors (2-5 ms).

A CRT electron gun scans across, exciting phosphor dots. Simplifying greatly (and setting aside measurement conventions such as the 10%-90% window), the pixel lights up and then, over a period of time, decays and returns to black. In the best case, higher-end models had medium-persistence phosphors decaying in as little as 2 ms or as much as 5 ms. Again, differences over what counts as an appropriate response-time measurement make the numbers fuzzy.

So there are various decay times, for various reasons, including age and CRT type: an older, lower-end model might have slow-persistence phosphors taking 5 ms or more, maybe even double-digit milliseconds.

But John Hess wants to know why low-frame-rate media (often, though not necessarily, older media) is rendered so poorly on better displays.

LCD, OLED, MicroLED, and future devices all use the sample-and-hold method, on top of their various response-time differences; current higher-end displays also add blur-reduction methods such as black frame insertion. In fact, John Hess tried lower frame rates against higher refresh rates to see how that would help.

A CRT is fast, and the argument is that LCD/OLED/MicroLED and future displays are faster at some things; current displays fire electronically rather than using an electron gun like a CRT. Yet, ignoring small variables, both LCDs and CRTs display 24fps content quite well, and only in recent times have some people even bothered to notice the problems; perhaps in the past they were relegated to expensive, high-end, non-consumer displays. Up-and-coming displays (current high-end LCDs, current OLEDs, and incoming MicroLED) show some differences, or potentially show more of them.

If LCDs are slower but improving, OLED is much faster but still has a touch of issues, and MicroLED is hopefully coming out with a bang, then a question arises.

The question being: if CRTs reasonably display information faster than most LCDs, why then does John Hess have a problem?

Is it that the content was made for CRTs, and it just so happens we are pushing our displays to levels that 24fps content can't handle?

For example, sample-and-hold reduces some issues just as it increases others. If we are using larger resolutions, higher frame rates, and higher refresh rates (i.e. more speed and more data), plus technologies that inhibit motion blur/persistence, are we now seeing 24fps content's motion at a truer level?

Another question might be: do monitors' post-processing effects hide or expose these issues as well?

Obviously a still image is the best-looking image, and as motion approaches still-image levels of clarity and persistence, one might wonder what microsecond- or nanosecond-scale blur would do to older content, now that displays are on the threshold of sub-millisecond persistence.

Again, CRTs did it just fine. Is it the resolutions and refresh rates in combination with low frame rates, or something else?

Low-frame-rate content has always had issues: playing a game at 29.97/30fps is not the same as 59.94/60fps, and refresh-rate technology keeps reducing the scan/draw time further, e.g. 30fps content on a 120 Hz panel scans out in 8.3 ms, half of 16.7 ms. One might wonder what that does to low-frame-rate content, as John Hess describes.

Suppose the image is progressively drawn faster than the frame rate itself would dictate; in other words, 1:1 matching no longer ties a frame's scanout to its full refresh period. The low-rate frame is then drawn at a faster rate: a 24fps frame (41.7 ms) might be scanned out at 120 Hz speed (8.3 ms), i.e. 41.7 - 8.3 = 33.4 ms less latency than the 24fps frame period itself.

If anything, it seems like displaying the content faster, regardless of pixel transition times (OLED, for example), would in essence IMPROVE display output. And since VRR merely paints the image at the fast scanout rate rather than an outright slower speed subject to more anomalies, doesn't it seem strange that the image still shows issues?

So it seems that even improving the ability to paint the image leaves issues behind, per John Hess's testing and observations above: skewing, and motion that still lacks smoothness.

So why not a columnscan?

An image forms a matrix, e.g. 960 x 540, or 518,400 pixels. But my question is: ignoring costs, why must the matrix be painted in that fashion, i.e. from top-left to top-right, then down one line, back to the beginning for a second horizontal pass left to right?

It obviously produces geometric skew (literal skewing) and other image anomalies. And it seems the more we resolve the speed issues of our displays, faster properties and all, the more new issues we discover.

Refresh is either 1:1 with the frame rate, or 1:X at some multiple.

Some have suggested playing older content at a multiple: 24fps/48Hz (1:2), 24fps/72Hz (1:3), 24fps/96Hz (1:4), and 24fps/120Hz (1:5), which is one particular reason 120 Hz became a de facto standard on solid-state displays; so 1:2, 1:3, 1:4, 1:5, and so on.

There's also the modern idea mentioned on Blur Busters a few years ago: framerate-less technology, whereby the old 24fps is converted into tens, hundreds, or thousands more frames, in essence extrapolating data to create further sets of frames.

So in this case we want to paint the image with fewer geometric issues (skews and the like), as well as fewer issues of any other kind.

So why can't the scan be a columnscan?

As I said at the beginning, let's ignore cost.
Rowscan (the conventional scanline order):

Line 1: pixels 1, 2, 3, ... 960 (960 total)
Line 2: pixels 1, 2, 3, ... 960 (1,920 total)
Line 3: pixels 1, 2, 3, ... 960 (2,880 total)
Line 4: pixels 1, 2, 3, ... 960 (3,840 total)
Line 5: pixels 1, 2, 3, ... 960 (4,800 total)
...
Line 540: pixels 1, 2, 3, ... 960 (518,400 total for one 960 x 540 frame, at 24fps)

All of this happens within 41.7 ms per frame, 24 times per second. With a sped-up scanout, say 390 Hz (2.6 ms), the frame finishes 39.1 ms sooner. But even if the frame is drawn in 2.6 ms, it still has to wait for the next one: one second divides into 24 frame slots, so the display twiddles its thumbs, idling until the next image arrives.

In essence: start at 0 -> frame 1 at 41.7 -> 2 at 83.4 -> 3 at 125.1 -> 4 at 166.8 ... 12 at 500.4 ... 22 at 916.6 -> 23 at 958.3 -> 24 at 1000 ms.

Even this 16.25x speed-up still has problems. Or, in John Hess's case, with 24 dividing evenly into 144: 24fps at 144 Hz is a 1:6 multiple, a 6.9 ms scan time finishing 34.7 ms sooner, yet he noticed a lack of smoothness and potentially even the skew anomaly, even at six times the speed of 24.
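
The back-of-the-envelope for that paragraph, as a quick Python sketch (illustrative numbers only; 390 Hz is a hypothetical scanout speed):

Code:

SOURCE_FPS = 24
SCANOUT_HZ = 390                       # hypothetical sped-up scanout

frame_slot_ms = 1000 / SOURCE_FPS      # ~41.7 ms per 24fps frame slot
scanout_ms = 1000 / SCANOUT_HZ         # ~2.6 ms to actually paint the frame
idle_ms = frame_slot_ms - scanout_ms   # ~39.1 ms of sample-and-hold idling

print(f"slot {frame_slot_ms:.1f} ms, scanout {scanout_ms:.1f} ms, idle {idle_ms:.1f} ms")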

To return to terminology: call it scanline painting/drawing of a datastream of information. The computer spits or beams the data out either through a monitor buffer (i.e. a laggier display) or, on some displays, as a near-real-time, almost instant process with little to no latency in the method (i.e. a bufferless display).

Another way I've theorized is a columnscan.

Instead of painting 960 pixels per line, left to right, repeated 540 lines downward beginning with the first pixel, the method goes column by column, left to right.
Columnscan:

Column 1: pixels 1, 2, 3, ... 540 (540 total)
Column 2: pixels 1, 2, 3, ... 540 (1,080 total)
Column 3: pixels 1, 2, 3, ... 540 (1,620 total)
Column 4: pixels 1, 2, 3, ... 540 (2,160 total)
Column 5: pixels 1, 2, 3, ... 540 (2,700 total)
etc., the full column sweeping left to right.
With certain programming, an LED matrix kit can light an entire column at once and sweep it, not just left to right but also top to bottom, producing a column movement. If an LED matrix can do that, surely at some point someone must have had the idea of using it on a monitor/television.
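
In code terms, the only difference between the two addressing orders is which loop is outermost (a hypothetical sketch of the ordering for a 960 x 540 matrix; real panels fix this in driver hardware):

Code:

from itertools import islice

W, H = 960, 540

def rowscan():                        # conventional: line by line, top to bottom
    for y in range(H):
        for x in range(W):
            yield x, y

def columnscan():                     # proposed: column by column, left to right
    for x in range(W):
        for y in range(H):
            yield x, y

print(list(islice(rowscan(), 3)))     # [(0, 0), (1, 0), (2, 0)] along the top line
print(list(islice(columnscan(), 3)))  # [(0, 0), (0, 1), (0, 2)] down the left column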

It's similar to how progressive scan has roots in movie projectors, where the film moves up or down through the light aperture. The projector mechanically advances the film, passing it up or down depending on the mechanism, and displays the image so quickly that it seems seamless, although occasionally, through error or deliberate effect, you can see that sweep.

Later, CRTs did interlacing and progressive scan, but via scanlines: pixel/subpixel by pixel/subpixel, left to right, then down a line. Rather than forming a column-shaped fan of electron beams and sweeping it left to right (or top to bottom, or bottom to top), we developed technology that scans pixel per pixel, as a serial datastream.

In essence, a full vertical column of 540 pixels is lit at once and swept from its beginning at the LEFT to the RIGHT: columns 1, 2, 3, 4, 5, and so on, up to 960.

Another, even more expensive method would probably require either memory buffering or a change in the very fundamentals of the technology.
Globalscan/totalscan:

Column 1: 1, 2, 3, 4, 5, ... 540
Column 2: 1, 2, 3, 4, 5, ... 540
Column 3: 1, 2, 3, 4, 5, ... 540
...
Column 960: 1, 2, 3, 4, 5, ... 540

(all columns at once)

In the simplest terms, the entire matrix, i.e. the whole 960-column x 540-line image, is painted/scanned all at once: a globalscan/totalscan.
Again, an LED matrix can be lit up entirely at once, a whole pre-configured pixel/subpixel arrangement producing a complete image; if an LED matrix can do that, surely at some point someone must have had the idea of using it on a monitor/television.

The entire image is painted on the screen all at once. We can call it a screenscan, flashscan, popscan, or globalscan/totalscan, although "globalscan" has in recent times mostly referred to the global-strobe black frame insertion we currently use to mimic the qualities of a CRT, fading to black for motion-persistence reasons, rather than a rolling scan, bar scan, or scanline.

But what if, ignoring cost, we had every pixel on the screen painted all at once?

So in essence a framescan or globalscan/totalscan: the entire matrix of the image, all 960 x 540 = 518,400 pixels, scanned onto the screen simultaneously.

If the image is painted that way on a fixed display, the only anomalies might be a quasi-3D on/off quality that could bother some people, flicker, eye fatigue, or other eye issues. Maybe a waveform effect, some sort of quasi-3D effect, or flicker around the periphery on a screen large enough to engage peripheral vision; some people do have thresholds for detecting flicker that way.

Does a scanline progressive scan mitigate flicker effects, such as a seizure-inducing display? Or could the globalscan/totalscan/flashscan (the whole image on, then off, and so on) be okay for seizure-prone people?

Still, I think my theorycrafting is just that, theory. Obviously Mark Rejhon can explain the costs and technology limitations better; it's not that it can't work that way, it's that it's expensive.

For example, the response I received was that it would require circuitry to every pixel, and furthermore to every subpixel, to paint them all at once, let alone a datastream from the computer into the display fast enough to feed it. Who knows how laggy the electronics inside the display would be; even the buffer or memory pool is a factor. We would probably need a lot of research into an instant, bufferless process for the columnscan, framescan, or globalscan/totalscan.

A 2160-pixel columnscan swept across 3840 columns would probably carry a very steep bandwidth cost, though a 540-pixel column swept across 960 is much easier to handle. I'd personally like to know if any researchers out there have ever built a columnscan display and fired it off as a sweep, i.e. a full column moving to the right. Same for a globalscan/flashscan/totalscan.

Transmitting an entire image at once, in totality, is a lot of bandwidth. Again, as a curiosity, I'm surprised small-scale, low-resolution versions of such displays haven't been tested; LED matrices can do it, look at the music industry with its LED-embedded controller grids. Obviously a specific technological difference would need to be created and programmed. The serialized nature of our datastream seems to be the major factor, as opposed to a banked/parallel transfer or an outright all-at-once discharge.

These are all just theories, but I'm fairly sure they're reasonable; they have a basis in logic and in the potential of the technology.

But they would require a total shift in understanding and development.

I'm reminded of the recent improvements to TVs. In the prior decade, 10 bpc Deep Color (30-bit, 1024 levels per channel) was seen as high-end; meanwhile, in the 8 bpc (256 levels) department, many lower-end panels were dropped to 6 bpc + 2-bit FRC, producing a literal pseudo-8-bit (64 levels with 2-bit temporal dithering). Some consumers were oblivious to it, of course, so someone might think they have a nice screen when it's really considered low-range garbage.

These days some TVs are hitting 12 bpc (4096 levels). I'm surprised companies didn't even try middle-ground approaches like 9 bpc (512 levels per channel).

It's the same with the ternary numeral system: balanced ternary uses -1 (negative), +1 (positive), and 0 (ground/off). We use binary, but some have argued ternary has deep potential. I'm reminded of Thomas Fowler and the points made about what might have been had ternary become the standard rather than binary.

That's the best I can offer; like you, I possess whackadoo theories. But don't think of yours as "unnecessarily and erroneously complicated."

You put more thought and effort into display technology than most people; most don't research further than the price tag. Plus, you're here at Blur Busters, which has become a grassroots hub for display research. If you're here, you're probably doing more than most.

bumbeen
Posts: 86
Joined: 25 Apr 2023, 14:35

Re: Mechanism by Which BFI smoothes out 24FPS

Post by bumbeen » 17 May 2023, 15:53

CRTs have built-in anti-aliasing and built-in BFI. Sony has a strong history with cameras and displays and should know how best to handle 24fps content, which is going to be viewed on a TV more often than on a 144 Hz monitor that may not care either way. AFAIK digital projectors double- or triple-flash 24fps content, and filmmakers avoid creating scenes that ask the viewer to track a moving object; usually your eyes saccade around a movie scene, which doesn't add blur the way tracking does. That's all I can add :)
