Framerateless Video Files (Infinite framerate)
Posted: 07 Jan 2019, 02:52
Crossposted from an ArsTechnica comments section reply:
(Note: This is not an article I've yet written for the Blur Busters Area 51: Display Science, Research & Engineering section of website, but I wanted to make sure this was on Blur Busters Forums -- I might formalize this later into an official article)
______________
8K video
All that beautiful sharpness wasted in motion-blurry motion.
We're hitting retina for HDR dynamic range (high-nit displays with excellent blacks), we're hitting retina for resolution (8K), and we're hitting the edges of practical screen sizes. Now, one little-turned stone is frame rates and refresh rates.
At least once we hit 8K, resolution is effectively retina for televisions, and this may finally enable a (very slow) retina refresh rate race over the next twenty years, with true (non-faked) refresh rates doubling every 5-10 years until refresh rates are fully retina for all possible artifacts (including stroboscopic effects and motion blur). Essentially, the Blur Busters version of Moore's Law.
The refresh rate race has started, with the emergence of 120fps HFR and the existence of the 240Hz eSports market, and this will continue for decades, albeit in a much slower fashion than for resolution.
Theoretically, true 240Hz can become cheap to add once panels are capable of it, much like the ever-shrinking 4K premium over 1080p today. Consider how surprisingly fast 8K displays are arriving on the consumer market, already at prices lower than the first 1080p and 4K displays.
<FUTURIST MODE>
Something that will have to happen by the 2030s is the video standardization of Ultra HFR: 240fps, 480fps, and 1000fps real-time video (not slow motion) for ultra-high-Hz displays. I also discussed this in the reduser.net thread, where some successful bleeding-edge tests were done. The common approach for the small group of UltraHFR pioneers is to grab a slow-motion camera, record at a high frame rate, then post-process the slow-motion footage back up to real-time speed for ultra-high-frame-rate playback. The boom of 240fps-capable video cameras has made UltraHFR experimentation easier.
If far-future advanced video compression codecs such as H.268 or H.269 become retina-resolution 3D geometry compressors, or the de facto equivalent of timecoded photons/raytracing, it becomes possible for video files to go framerateless -- you can play the video at whatever framerate you desire. Whether you're playing on a 60Hz smartphone, a 120Hz VR headset, a 480Hz eSports display, or a 1000Hz Holodeck, you'd get native framerate from the same video file -- no traditional black-box interpolation or fake frames. And if you hated the ultrasmooth effect, you could choose a framerate away from the director's preferred framerate (e.g. play at 48fps instead of 120fps); even non-divisor framerates are perfectly smooth when playing a framerateless format. The amount of motion blur (shutter exposure length) could also be easily edited in post-process.
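To make the idea concrete, here is a toy sketch (everything here is hypothetical, no such codec exists yet): if a "clip" is a continuous function of time rather than a stack of frames, the playback side can sample it at any target framerate, and shutter exposure (motion blur) becomes just a post-process averaging window:

```python
def position(t):
    """Hypothetical framerateless 'clip': object position as a
    continuous function of time in seconds, not a frame sequence.
    Here: an object panning at 100 px/sec."""
    return 100.0 * t

def render(clip, fps, duration, shutter=0.0, subsamples=8):
    """Sample a continuous clip at ANY target framerate.
    shutter = exposure length in seconds; blur is simulated by
    averaging sub-samples across the open-shutter window."""
    frames = []
    for i in range(round(duration * fps)):
        t = i / fps
        if shutter <= 0:
            frames.append(clip(t))  # zero-blur instantaneous sample
        else:
            # Average positions over the exposure window = motion blur
            samples = [clip(t + shutter * k / (subsamples - 1))
                       for k in range(subsamples)]
            frames.append(sum(samples) / len(samples))
    return frames

# The SAME source decoded at wildly different framerates, both "native":
f24 = render(position, fps=24, duration=1.0, shutter=1/48)     # film look
f1000 = render(position, fps=1000, duration=1.0, shutter=0.0)  # zero-blur UltraHFR
```

A real codec would be sampling compressed geometry or light-field data rather than a Python function, but the playback-side principle -- framerate chosen at decode time, shutter chosen at decode time -- is the same.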
Even the argument of light/resolution loss from high-framerate (high-speed) cameras could be compensated by AI video processing (imagine internally reprocessing the same framerateless file at different speeds -- 24fps, 48fps, 100fps, 200fps, 1000fps -- and merging the "have your cake and eat it too" results) to produce a very bright, very sharp retina 1000fps video for the human eye's consumption. The processing power for very sharp, very bright 8K 1000Hz does not exist yet, but could exist later, and become cheap someday.
There is a continual whac-a-mole on a curve of diminishing returns, especially as retina resolution ever more blatantly reveals refresh rate and frame rate limitations, kicking off the slow (reluctant) frame rate / refresh rate race that has now started -- with the emergence of 120fps HFR video file formats and the playing of eSports on true-240Hz monitors. Sports videographers who want to film with a 1/1000sec shutter for zero blur in sports video must constantly decide whether to increase motion blur (to fix stroboscopics) or keep the fast shutter (and worsen stroboscopic/stepping artifacts) -- and that's all before you even deal with display motion blur. Synchronize source persistence (1/1000sec shutter) with destination persistence (1/1000sec refresh cycles) while avoiding black periods (requiring 1000 true individual frames in 1000 true refresh cycles per second), and then: no phosphor needed, no flicker needed, true blurless low-persistence sample-and-hold. Real life doesn't flicker, and this looks like real life. From what we've seen in UltraHFR experiments, it is very surreal to have zero blur and zero stroboscopics -- CRT motion clarity that is flickerfree and strobefree. So UltraHFR has definite human-visible benefits, at least to the many who'd like something better than flicker (owie) or something better than interpolation (faked images). Even if, in theory, it begins with just hinting data that helps the interpolator become more real than fake, to reduce data requirements, the humble video format ultimately has a long-term incentive to go framerateless by the end of this century.
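The persistence arithmetic behind the paragraph above can be sketched in a couple of lines. This is the rough rule of thumb (a simplification that ignores pixel response and other factors): for eye-tracked motion on a sample-and-hold display, perceived blur width is approximately panning speed multiplied by persistence (how long each frame stays visible):

```python
def blur_px(speed_px_per_sec, persistence_sec):
    """Rough rule of thumb: perceived eye-tracking motion blur width
    (in pixels) ~= panning speed * persistence (frame visibility time).
    On flickerless sample-and-hold, persistence = 1 / refresh rate."""
    return speed_px_per_sec * persistence_sec

# A 1000 px/sec pan:
blur_60hz = blur_px(1000, 1/60)      # 60Hz sample-and-hold: ~16.7 px of blur
blur_1000hz = blur_px(1000, 1/1000)  # 1000Hz sample-and-hold: 1 px of blur
```

This is why matching a 1/1000sec camera shutter with 1/1000sec display persistence (via 1000 real frames per second, no strobing) yields CRT-class motion clarity without flicker: both the source and destination blur terms shrink to roughly a pixel.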
Some theoretical discussion of the path away from framerate-based macroblock-style codecs toward framerateless video files was covered in that reduser.net thread. There would likely be metadata for the director's preferred framerate, so you would retain originality.
But with framerateless video files, the display (or the user, or the video editor, or whomever) chooses the framerate to play at. It can bypass the guesswork of interpolation and use the (for all practical purposes) infinite framerate resolution of framerateless video file formats. It essentially becomes the vector-format equivalent for framerate -- one that doesn't granularize into any specific framerate. This can produce far better quality framerate-increase algorithms for a blur-reduced experience, and it would work very well with low-persistence sample-and-hold (flickerless blur reduction), such as future 1000Hz displays that can achieve CRT clarity with sample-and-hold. In other words, real life doesn't impulse/flicker, so why should a display have to do that to eliminate motion blur?
A metaphor in today's technology is game animation files, which can use vectors to represent the motions of characters -- some of them have no such thing as framerate granularity, representing motion with vectors, arcs, and other framerateless encodings. Some types of game animation files have no frame rate at all. Eventually, this technology gets merged with a future H.269 codec that may theoretically compress video as retina-quality 3D world geometry -- or timecoded photons -- or some other workflow not yet invented that later proves the most practical way to pull off framerateless video formats. And thus -- voila -- a framerateless video file format that can be played back at any desired framerate, high or low.
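The game-animation metaphor above is easy to demonstrate. A minimal sketch (names and data are illustrative, and real engines use splines rather than linear interpolation): a keyframed track stores no framerate at all, only timestamped values, and the playback side evaluates it at whatever sampling rate it wants:

```python
def lerp_track(keyframes, t):
    """Evaluate a keyframed animation track at ANY time t (seconds).
    keyframes: sorted list of (time, value) pairs. No frame rate is
    stored in the track; the playback side picks its own sampling rate."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]   # clamp before first key
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]  # clamp after last key
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)      # normalized position in segment
            return v0 + a * (v1 - v0)     # linear interpolation

# A character moves from x=0 to x=100 over one second, then holds:
track = [(0.0, 0.0), (1.0, 100.0), (2.0, 100.0)]

# Sample the same track at 24fps or 240fps -- both are equally "native",
# and a non-divisor rate like 47fps would be just as smooth:
at_24 = [lerp_track(track, i / 24) for i in range(48)]
at_240 = [lerp_track(track, i / 240) for i in range(480)]
```

A framerateless video codec would be doing this at vastly larger scale, over compressed scene geometry instead of a handful of keys, but the decode-time choice of framerate works the same way.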
Given how slowly video sources improve, it is probably a 20-to-40-year progression before Ultra HFR technology becomes mainstream. But framerate hardcoding in a video file may go the way of the dodo by the end of the 21st century, shifting the framerate choice to the viewing end.
A bit of a futurist sidetrack, but relevant to the resolution race -- and the dynamic-range race -- which leads to other unturned stones: the slow transition into the (true) refresh rate race / frame rate race.
</FUTURIST MODE>