madVR Smooth Motion at 120Hz to reduce motion blur on OLED?

Posts: 1
Joined: 24 Aug 2019, 10:02

madVR Smooth Motion at 120Hz to reduce motion blur on OLED?

Post by gorman » 24 Aug 2019, 10:30

Hi! I'm wondering whether anybody with a 120Hz-capable display could test this for me. The use case I have in mind is an LG OLED television (once HDMI 2.1 GPUs become available), but I think it could be tested on any display capable of 120Hz, with no other blur-reduction functionality (LightBoost, etc.) active.

From what I've read, I've been thinking that having Smooth Motion on the "always" setting, while driving the screen at 120Hz and playing back 24fps content, could bring motion blur down to the 8.3ms value. Frames would not change *much* from one refresh to the next, but they would change, meaning every frame would be displayed for just 8.3ms.
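The arithmetic I'm assuming can be sketched in Python (just the raw refresh timing behind my question, nothing madVR-specific):

```python
# Back-of-the-envelope timing for 24fps content on a 120Hz display.
refresh_hz = 120
content_fps = 24

refresh_period_ms = 1000 / refresh_hz            # one refresh cycle lasts ~8.33 ms
refreshes_per_frame = refresh_hz // content_fps  # each movie frame spans 5 refreshes

# If blending makes every refresh cycle show a (slightly) different image,
# each unique image persists for only one refresh cycle: ~8.3 ms.
print(round(refresh_period_ms, 2), refreshes_per_frame)
```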

For people unfamiliar with madVR, the Smooth Motion function does not generate extra frames as "intelligent frame creation" algorithms do on TVs or, on PCs, as SVP does. It blends one frame into the other, so it definitely does not create the dreaded (for me) soap opera effect.

Is my reasoning sound? Completely wrong? Somewhere in between? :)
Thanks for reading.

Chief Blur Buster
Site Admin
Posts: 9005
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: madVR Smooth Motion at 120Hz to reduce motion blur on OLED?

Post by Chief Blur Buster » 03 Sep 2019, 23:06

Correct, madVR isn't the same algorithm as Smooth Video Project.

I have not looked at it closely at this time, but it currently appears to be more of a frame-smoothing or frame-blending algorithm. It does not reduce motion blur; it just de-stutters / de-judders.

Such smoothing is often done by either:
(A) retiming/reclocking the frames (detecting pulldown and then re-framepacing the frames for smoother playback)
(B) and/or blending the frames.

The latter can dejudder at non-divisible refresh rates.

Like converting 3:2 pulldown into 2.5:2.5 pulldown via a frame-blending algorithm. It could even be as simple as a 50%:50% alphablend of 2 frames during specific refresh cycles. This creates pulldown consistency, which definitely reduces stutter but does not reduce motion blur. It may also be increasing the frame rate to match the refresh rate, but only to create a framerate=refreshrate situation that allows single-refresh-cycle frame-blending opportunities: repeating frames where needed, and blending frames to prevent pulldown judder.

That's also how older judderless video format converters worked. NTSC->PAL 60/50 and PAL->NTSC 50/60 algorithms basically cascaded a blend to de-judder, back in those pre-motion-compensated format converter days. For 60/50 conversion of NTSC recordings for PAL viewers:
The 1st PAL refresh cycle was 5/6th blend of deinterlaced NTSC field 1 and 1/6th blend of deinterlaced NTSC field 2.
The 2nd PAL refresh cycle was 4/6th blend of deinterlaced NTSC field 2 and 2/6th blend of deinterlaced NTSC field 3.
The 3rd PAL refresh cycle was 3/6th blend of deinterlaced NTSC field 3 and 3/6th blend of deinterlaced NTSC field 4.
The 4th PAL refresh cycle was 2/6th blend of deinterlaced NTSC field 4 and 4/6th blend of deinterlaced NTSC field 5.
The 5th PAL refresh cycle was 1/6th blend of deinterlaced NTSC field 5 and 5/6th blend of deinterlaced NTSC field 6.

That achieved a 5:6 ratio without judder. Basically, the 1990s high-end video format converters could achieve mostly stutterless/judderless NTSC->PAL conversion at a 5:6 ratio via this kind of blending.
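The blend schedule above can be generated with a small Python routine (an illustrative reconstruction of the weights, not any actual converter's implementation):

```python
# Reconstruct the 5:6 NTSC->PAL blend weights described above: each of the
# 5 PAL refresh cycles blends two adjacent deinterlaced NTSC fields, with
# the weight sliding by 1/6 per cycle.
def ntsc_to_pal_blend_weights():
    """Return (field_a, weight_a, field_b, weight_b) for each PAL cycle."""
    schedule = []
    for pal_cycle in range(1, 6):                  # PAL cycles 1..5
        weight_b = pal_cycle / 6                   # weight of the later field
        schedule.append((pal_cycle, (6 - pal_cycle) / 6,
                         pal_cycle + 1, weight_b))
    return schedule

for field_a, w_a, field_b, w_b in ntsc_to_pal_blend_weights():
    print(f"field {field_a} x {w_a:.3f} + field {field_b} x {w_b:.3f}")
```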

They were essentially de-interlacing (line doubling), blending the framerate, scaling the resolution, and then re-interlacing: basically a progressive-scan internal framebuffer between two interlaced video formats, to assist the scaling and blending. They weren't interpolating, but they were removing judder through this slightly Rube Goldberg workflow. It worked well, created vastly superior video format conversion, and was possible to do for live broadcasts (e.g. the Olympics in both NTSC and PAL countries). It was an engineering achievement!

Occasionally there was motion shimmering and slight motion blurring, but it looked a lot better than the stuttering/juddering of frame repeats or frame drops found in low-end or pre-1990s electronic format converters. You can also use motion-compensated deinterlacing and interpolate the motion vectors to get 60 clearly separate-looking frames per second from a 50 frames per second PAL source, but that wasn't done before the computing power for real-time motion-vector detection existed (i.e. pre-2000).

Now, to de-judder 3:2 pulldown while staying at 60Hz, you would use a 2.5:2.5 pulldown approximation via blending:
60Hz Refresh Cycle #1 is movie frame #1
60Hz Refresh Cycle #2 is movie frame #1
60Hz Refresh Cycle #3 is a 50%:50% alphablend of movie frame #1 and movie frame #2
60Hz Refresh Cycle #4 is movie frame #2
60Hz Refresh Cycle #5 is movie frame #2
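A minimal sketch of that schedule in Python, assuming a simple per-pixel 50%/50% average for the blended refresh cycle (frames represented as flat pixel lists purely for illustration):

```python
# Five 60 Hz refresh cycles covering two 24 fps movie frames, as listed above:
# two repeats of frame A, one 50/50 blend, two repeats of frame B.
def dejuddered_60hz_schedule(frame_a, frame_b):
    """Return the five refresh-cycle images for a 2.5:2.5 pulldown pair."""
    blend = [(pa + pb) / 2 for pa, pb in zip(frame_a, frame_b)]  # 50%:50% alphablend
    return [frame_a, frame_a, blend, frame_b, frame_b]

cycles = dejuddered_60hz_schedule([0, 0, 0], [100, 200, 60])
print(cycles[2])   # the blended middle refresh: [50.0, 100.0, 30.0]
```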

On a sample-and-hold display, especially those with slower pixel response (common on LCD HDTVs), the display motion blur often hides most of the blending effect. So it often looks natural, as if you had switched to a 24Hz, 48Hz or 72Hz refresh rate, even though you're still at 60Hz, except now playing 24fps without judder. You may notice the blending in some material, but usually not; it just looks like the original frame rate.

That doesn't increase frame rate though, just eliminates pulldown judder.
Just destutters/dejudders, does not reduce motion blur.

To reduce motion blur without BFI or strobing, you need a form of Frame Rate Amplification Technology (interpolation, extrapolation, reprojection, etc) that creates additional frames. Read that article -- it's a wonderful read for videogaming use cases.

This is not always preferred if you want to watch 24fps movies (I agree with the new Filmmaker Mode initiative by TV makers), but some forms of frame rate amplification technology can be preferable when watching full-framerate material (e.g. sports). And virtually lagless frame rate amplification is already used in some video game situations (e.g. Oculus Rift's Asynchronous Space Warp algorithm). In those use cases where maximum frame rate is preferred, and if it can be done artifactlessly and laglessly (so well that the generated frames look like real frames), then it is welcome. It needs to be the right tool for the right job, where extra frame rate is welcome.
Head of Blur Busters | Follow @BlurBusters on Twitter

