Source file vs Refresh-based frame duplication: Video Games vs Movies

Advanced display talk: display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers & advanced display articles on Blur Busters. The masters on Blur Busters.
Post Reply
thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by thatoneguy » 06 Jul 2022, 07:03

So this is a thought that has been bothering me for a while.
Traditionally, video games -- from the Atari days, through NES-era Mario and Sonic, up to modern games like Cuphead -- have run at a minimum of 60fps, and despite their animations being animated at a much lower rate than 60fps (for example, the "Sonic about to fall from a ledge" animation in Sonic 1 is "animated" with only 3 drawings) and those animations repeating, you never see a double/multiple image effect in them.

With movies, on the other hand, the source file is fixed at 24fps, so when playing them on an impulse-based display you get the double-image effect.
So what if, instead of doing the typical pulldown techniques that TVs do, we pre-emptively duplicated the frames in the source file itself (granted, that would lead to bloated filesizes, but that wouldn't be as big of a problem for theatres at least) and played it back at a higher refresh rate?
Could that eliminate the double-image effect just like in video games, while maintaining smooth motion quality and the 24fps look with no SOE (soap opera effect), or am I mistaken?

Basically, my line of thinking is: "If video games with low-framerate animations don't have the double-image effect, then why should movies?"

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by Chief Blur Buster » 06 Jul 2022, 15:17

IF

- On a scientifically perfect sample-and-hold display
- As long as there's no weak link (e.g. compression artifacts)

THEN

Source-based and refresh-based frame duplication is identical to human eyes.

BUT

Real-world situations (e.g. compression artifacts, strobed VRR displays, and other subtle nuances) can create differences.

Double-image effects caused by pulldown judder can also be successfully reproduced in games that intentionally create pulldown judder. You have to duplicate the cadence in order to get an apples-vs-apples situation. A 24fps video game at 60Hz, generating 3:2 pulldown, also exhibits a pulldown-derived, judder-generated double-image effect.

It's easier to create a double-image effect on impulsed displays, but judder/vibration mechanics can also create human-perceived double-image effects. It does not matter whether the pulldown judder is source-based or display-based -- as long as it's the same at the photons end.

A game configured in a precise way -- e.g. an RTSS non-scanline framerate cap (RTSS is microsecond-accurate) at 24fps, combined with VSYNC ON on a 60Hz display -- will automatically, naturally fall into a 3:2 pulldown judder like 24fps film. If you crank GPU motion blur to maximum, it will have the same motion blur as a movie filmed with a 360-degree shutter (1/24sec camera exposure).

(The alternative is disabling GPU motion blur and finding 24fps material that was filmed with an ultrafast camera shutter, e.g. 1/1000sec, to eliminate camera-shutter-related motion blur. Either way, the source-based blur of GPU blurring needs to be identical to the source-based blur of the camera when comparing video source to game source.)

Now you've reproduced apples-vs-apples experimental variables: frame duplication at exactly the same cadence.

Now scrolling/strafing/turning can produce the same judder-generated double image effect as a 24fps film panning scene from video.

Obviously, the blur needs to be the same, the frame rate needs to be the same, and the pulldown needs to be the same -- but you can reproduce it in a game simulating the same settings as the film.
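The cadence described above can be modeled in a few lines. This is a minimal illustrative sketch (the function name and the idealized zero-jitter VSYNC ON timing are my own assumptions, not Blur Busters code):

```python
# Minimal model of pulldown cadence (assumption: ideal VSYNC ON timing with
# zero jitter): each refresh cycle shows the latest completed source frame.

def pulldown_cadence(source_fps: int, refresh_hz: int, refreshes: int) -> list:
    """Source-frame index displayed on each of the first `refreshes` cycles."""
    return [(r * source_fps) // refresh_hz for r in range(refreshes)]

# 24 fps on 60 Hz: frame 0 held for 3 refreshes, frame 1 for 2, frame 2 for 3...
print(pulldown_cadence(24, 60, 10))  # [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]
```

The alternating runs of 3 and 2 repeats are exactly the 3:2 pulldown cadence that 24fps film falls into on a 60Hz display.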
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by thatoneguy » 07 Jul 2022, 07:05

Your post is a bit confusing.
What I gather from your post is that if you have:
1. A perfect sample-and-hold display
2. No compression artifacts
then you should be able to get rid of the double-image effect while simultaneously lowering MPRT? Is that correct?

If so, then MicroLED displays are the closest, due to their extremely fast nanosecond pixel response times (unless laser diode displays, which are even faster, become a thing in the future), and most theatres don't have to deal with compression artifacts because they use uncompressed files.

So in theory, an uncompressed movie file with frames pre-emptively duplicated from 24fps to, say, 960fps, displayed on a (theoretical) native 960Hz direct-emission LED cinema screen, should yield low MPRT (1ms or so) while still retaining the 24fps look, without interpolation artifacts.
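The source-side duplication being proposed is easy to model. A hypothetical sketch (frames are stood in for by opaque labels; a real tool would rewrite the actual video stream, which this deliberately does not attempt):

```python
# Sketch of pre-duplicating frames in the source file itself (assumption:
# frames modeled as opaque objects, not real video data).

def duplicate_frames(frames: list, factor: int) -> list:
    """Repeat each source frame `factor` times, e.g. 24 fps x 40 = 960 fps."""
    return [f for f in frames for _ in range(factor)]

unique_24fps = ["A", "B", "C"]  # stand-ins for three unique 24 fps frames
stream_960fps = duplicate_frames(unique_24fps, 40)
print(len(stream_960fps))  # 120 entries: 3 unique frames x 40 repeats each
```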

As I understand it, the reason for the double-image effect is temporal differences caused by the display having to show one frame multiple times to match the refresh rate -- hence why even CRTs, with their excellent response times, still produce DFE. But if the duplicated frames are in the source file themselves, then that should negate the temporal differences.

Let me know if I'm wrong, or if what I'm thinking has been tested before.

EDIT: As for video games, what I was referring to was the fact that a huge number of games, even though they run at 60fps, have a ton of sub-60fps animations, and I've never noticed the DFE in them. So movies/video material should logically behave the same if their frames were duplicated beforehand in the source file (which would be similar to a game engine displaying a frame multiple times), as opposed to being fixed at 24fps and letting the display duplicate the frames itself.
I brought up video games simply as a comparison to movies.

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by Chief Blur Buster » 07 Jul 2022, 12:40

thatoneguy wrote:
07 Jul 2022, 07:05
As I understand it, the reason for the double-image effect is temporal differences caused by the display having to show one frame multiple times to match the refresh rate -- hence why even CRTs, with their excellent response times, still produce DFE. But if the duplicated frames are in the source file themselves, then that should negate the temporal differences.

Let me know if I'm wrong, or if what I'm thinking has been tested before.
That can be one of the causes of double-image effects, but only if the display flickers (has temporals).
But that's not what we're talking about here.

Specifically, here, double-image effects can be perceived via the stutter amplitude (the outermost range of the stutter), much like how a vibrating guitar string sometimes looks like a double string. That effect is what I thought we were talking about here -- and it has absolutely nothing to do with repeat refresh cycles.

There are probably multiple lines of thought & multiple tangents involved here; let's not go wild-goose-chasing up the wrong tree in the forest, shall we...

So, let's round this out:

(A) If the display has ANY temporals (flicker, impulsing, phosphor, strobing), then double image effects are caused by repeat strobes. See Duplicate Images on Impulsed Displays

(B) If the display has ZERO temporals in the flicker department, double image effects are a human perception effect from the stutter vibration much akin to the double-string effect of a vibrating harp / piano / guitar string.

These effects may happen independently or concurrently. (A) is definitely the more dominant effect of the two, but (A) is not applicable whenever a display has zero subrefresh light modulation (aka flicker), such as sample-and-hold LCDs.

There are even multiple threads that dive deeper into the multiple causes of double-image effects that aren't from traditional double-impulsing causes (e.g. CRT 30fps at 60Hz):
- Repeated images on sample and hold displays at low frame rates
- Sample & Hold Motion Blur 60FPS @60Hz vs. @120Hz (Read both Page 1 and 2)

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by thatoneguy » 08 Jul 2022, 04:41

So I take it that what I proposed has not been tested yet then?
If that's the case then -- since I've never noticed DFE on low-framerate animations in 60fps games -- I am confident that duplicating/multiplying the frames beforehand in the source file itself, and then playing it back at a higher refresh rate (matching the framerate, of course; so 24fps multiplied by 3 would be 72fps@72Hz), would potentially get rid of DFE while lowering persistence blur, similar to how it works in video games.

I hope what I'm thinking is being researched somewhere, because it could solve the low-framerate problem associated with 24fps film/video. If it works how I think it should, then converting 24fps to 960fps beforehand would get rid of the blur and potentially the DFE, which would yield smooth motion like a strobed 24fps@24Hz CRT, without the flicker. So this way you would get to have your cake and eat it too.

Of course, this method wouldn't work with frame-locked video games, since they're real-time applications, but for passive applications such as video I can see it working (although with the caveat of bloated filesizes).

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by Chief Blur Buster » 10 Jul 2022, 22:12

thatoneguy wrote:
08 Jul 2022, 04:41
So I take it that what I proposed has not been tested yet then?
In general? Yes. People notice it quite often. Read the threads I pointed to.

For this specific display of this specific file? No.
I don't have access to your specific display with your specific video file to determine the exact causes of DFE (double frame effect, I presume).
thatoneguy wrote:
08 Jul 2022, 04:41
If that's the case then -- since I've never noticed DFE on low-framerate animations in 60fps games -- I am confident that duplicating/multiplying the frames beforehand in the source file itself, and then playing it back at a higher refresh rate (matching the framerate, of course; so 24fps multiplied by 3 would be 72fps@72Hz), would potentially get rid of DFE while lowering persistence blur, similar to how it works in video games.
Duplicate refresh cycles never reduce persistence on a sample-and-hold display (one that has no impulsing-like temporals). Repeating x3 duplicates does not reduce persistence on flickerless displays.

For impulsed displays, if you repeat x3 you can reduce persistence, but you get a triple-image effect. Some people like the triple-image effect (72Hz CRT) in exchange for reduced display motion blur.

The only way to have a single-image effect at higher Hz is framerate=Hz, which means interpolation/extrapolation/reprojection (frame rate amplification) techniques.

This is a hard brickwall of the laws of physics.
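That brickwall can be expressed as back-of-envelope arithmetic. A sketch under a stated assumption (that sample-and-hold MPRT approximately tracks the hold time of each unique frame; the function name is mine, for illustration only):

```python
# Assumption: on an ideal flickerless sample-and-hold display, motion blur
# (MPRT) is approximately the time each UNIQUE frame stays on screen.

def mprt_ms(unique_fps: float) -> float:
    """Approximate persistence blur in milliseconds for a given unique rate."""
    return 1000.0 / unique_fps

# 24 fps duplicated x40 onto 960 Hz still has a 24 Hz unique-frame rate,
# so its hold time (and thus its blur) is unchanged:
print(round(mprt_ms(24), 1))   # 41.7 ms, with or without duplication
# Only raising the UNIQUE frame rate (interpolation/reprojection) helps:
print(round(mprt_ms(960), 2))  # 1.04 ms
```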
thatoneguy wrote:
08 Jul 2022, 04:41
I hope what I'm thinking is being researched somewhere, because it could solve the low-framerate problem associated with 24fps film/video. If it works how I think it should, then converting 24fps to 960fps beforehand would get rid of the blur and potentially the DFE, which would yield smooth motion like a strobed 24fps@24Hz CRT, without the flicker. So this way you would get to have your cake and eat it too.
Yes, 960fps would require interpolation/extrapolation/reprojection. Everything would become video-like smooth, far beyond 24 frames per second. It would be as blur-free as 24fps@24Hz, but you'd also lose the stroboscopic effect and the stop-motion effect, so the motion would feel different.

However, one major problem is the soap opera effect, caused by camera motion blur that's longer than the frametime. That's positively ugly: 24fps movies with 1/24sec motion blur, converted to 960fps without removing the camera-based motion blur, create an ugly soap opera effect. We need AI (artificial intelligence / neural networks) to erase the camera-based motion blur -- it's very compute intensive.

It's already heavily researched -- there are only so many ways to do it, in a pick-your-poison way: simulating a retro CRT, or a retro 35mm projector (double strobe), each with different kinds of tradeoffs.
thatoneguy wrote:
08 Jul 2022, 04:41
Of course, this method wouldn't work with frame-locked video games, since they're real-time applications, but for passive applications such as video I can see it working (although with the caveat of bloated filesizes).
Actually, I already have an article about converting 100fps into 1,000fps in realtime:
Frame Rate Amplification Technologies

Oculus Rift already does it now (in a way better than interpolation) at roughly a 2:1 ratio, but I am looking forward to frame rate amplification technologies of larger ratios (5:1 to 10:1 ratios)

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by thatoneguy » 20 Nov 2023, 03:00

THREAD REVIVAL


So I just remembered this thread.
Looking back at this, I think I was misunderstood when I originally proposed it, since I didn't explain myself very well.
Of course, multiplying the same frames would not improve motion fluidity (only a higher framerate does that), but it could improve motion clarity (since you are converting a lower framerate to a higher one to match the refresh rate).

I will attempt to explain myself better one more time:
Take, for example, traditional animation. Many animated films (shorts or features) are animated on twos (12 unique frames per second), or even as low as on threes (8 frames per second).
A good example would be the classic Pink Panther theatrical shorts:
[Attachment: blake-edwards-pink-panther-title.jpg]
These were animated on twos, which meant 12 unique drawings per second; each of those drawings was then shot twice on film to meet the required 24fps. If you watch this on a 24fps@24Hz CRT, you should get a buttery smooth picture with no double-image effects whatsoever, even though the animation was made at 12fps.
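The on-twos cadence can be written down directly. An illustrative sketch (the function is hypothetical; it uses integer math so the drawing index advances at the animation rate, not the playback rate):

```python
# Which unique drawing is shown on each playback frame, for animation drawn
# at a lower rate than playback (assumption: drawings advance uniformly).

def drawing_index(frame: int, anim_fps: int, playback_fps: int) -> int:
    """Index of the unique drawing shown on a given playback frame."""
    return frame * anim_fps // playback_fps

# Animated on twos: 12 drawings/sec shot onto 24 fps film -> each shot twice.
print([drawing_index(f, 12, 24) for f in range(6)])    # [0, 0, 1, 1, 2, 2]
# The same drawings played back at 120 fps -> each held for 10 frames.
print([drawing_index(f, 12, 120) for f in range(12)])  # ten 0s, then two 1s
```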

Another example: Cuphead, a game from 2017 which imitates 1930s cartoons.
[Attachment: cuphead.jpeg]
All the animations in this game were done at 24fps (with a few animations here and there being 12fps or lower, or 40+ fps).
To my knowledge, this game does the same thing: the game engine repeats the same frames of animation, and when you play it at 60fps, 120fps or 240fps it looks extremely clear -- clearer than any actual 24fps animation video/film I have ever seen in my life.
For example, look at the flag here; it only moves on frames 3, 6, 8 and 11:
[Attachment: yxpzszV.gif]

What I was proposing is that we use the same technique that was used in animation (to save on costs), but instead of doing 12fps -> 24fps, we instead do 24fps -> (insert framerate here, e.g. 72fps, 96fps, 120fps, 144fps, 240fps, 480fps, 960fps, etc.).
Doing this SHOULD end up with smooth motion like Cuphead (or a 24fps@24Hz CRT), and I see no reason why it wouldn't work for live-action 24fps video material, such as 24fps movies, as well.

I would like to see a test by somebody who is tech-savvy enough to do it. If only to get a better understanding.
I think this could be tested by shooting a short 24fps panning video (preferably raw video), converting the 24fps file by manually multiplying each frame on a computer up to your desired target framerate in the source file, and then playing it back.
If, after all that, the panning looks clearer versus the regular 24fps file, then my theory will be proven correct.
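On an idealized display, this test has a predictable outcome, because pre-duplicating frames in the file and letting the display repeat refreshes emit the identical per-refresh frame sequence. A minimal model (illustrative only; the function names are my own, and real pipelines add timing jitter and compression that this ignores):

```python
# Compare the two pipelines on an idealized display (assumption: no jitter,
# no compression): the same frame per refresh means the same photons.

def source_duplicated(frames: list, factor: int) -> list:
    """Duplication baked into the source file."""
    return [f for f in frames for _ in range(factor)]

def display_repeated(frames: list, source_fps: int, refresh_hz: int) -> list:
    """The display repeating each source frame across refresh cycles."""
    refreshes = len(frames) * refresh_hz // source_fps
    return [frames[(r * source_fps) // refresh_hz] for r in range(refreshes)]

src = ["A", "B", "C"]  # stand-ins for three unique 24 fps frames
print(source_duplicated(src, 5) == display_repeated(src, 24, 120))  # True
```

In this model the two sequences match exactly, which is the same point made earlier in the thread: on a perfect sample-and-hold display with no weak links, source-based and refresh-based duplication are identical at the photons end.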

With this, I will rest my case in this topic.

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Source file vs Refresh-based frame duplication: Video Games vs Movies

Post by Chief Blur Buster » 20 Nov 2023, 17:41

I think something (roughly) like this is already in the works by a few people I know on Discord!

Now... a lot of things have improved since this thread was created, including AI-based interpolation!

For absolutely stunning, near-flawless interpolation on 2D games (like RTS-style), I recommend using an Elecard capture card + the RIFE 4.6 NCNN interpolation engine + an RTX 4000 series card. It can do some of the world's best NN-AI-based interpolation to 1080p/120 in real time, using a compute-heavy interpolation/extrapolation algorithm that requires the latest RTX cards.

It's a convoluted setup, but it makes retro games look like true native 120fps, with only rare artifacts.

The NN-AI "learns" the backgrounds of games while you play them, and uses them for background-reveal or parallax-infill (sometimes pixel-perfectly, sometimes not; it depends on the game you try). So it's a more flawless interpolation for retrogaming, platformers, and RTS games. For an interpolation "black box in the middle" type of setup that is not integrated into the engine, it is the most native-looking realtime interpolation today's compute can get you so far.

There's input lag, but it's not too shabby -- in theory only +1 frame (excluding capture card overhead)!

Long term, I want to see somebody implement this in a Windows Virtual Display Driver (I have a $2000 source code bounty), so that you can omit the capture card (Elecard).

For more information about artificial-intelligence (NN) based interpolation, which is the state of the art, see https://www.svp-team.com/wiki/RIFE_AI_interpolation -- you can tell it's pretty much supercomputing-league interpolation, but its quality is light years beyond any interpolation algorithm I've ever seen. Recommended if you /must/ use interpolation in retrogaming.

It's nigh native-looking on most retrogaming content! Sadly, it's gotta be piped into an Elecard capture card (lossless HDMI video input). I hope this changes so we can do it natively as a filter plugin (e.g. ReShade/SweetFX style, though it would be better to use a virtualized graphics driver to allow it to work on any content).

Post Reply