jl4069 wrote: Mark, I also had an idea I wanted to ask you about, if I may. Would it be possible to use a very fast monitor, say 900Hz (when one exists), literally run content through such a monitor and copy the motion output to create a master of sorts, and use it on a regular 120Hz screen with suitable processing ability? Or use machine learning techniques to learn many aspects of motion from a 900Hz reference and apply them to a slower-rate screen? Thanks, j
It's a good question, especially since OLED is going to unavoidably hurtle upwards in refresh rates throughout this decade (pressure from multiple vendors).
First, your question opens a Pandora's box that needs a little disambiguation.
You're discussing a temporal (Hz) equivalent of the well-known spatial science:
Just like footage from 4K cameras downconverted to 1080p usually looks better than footage from 1080p-native cameras, it is possible that future UltraHFR masters will produce superior lower-framerate material.
First, there's the caveat that Ultra HFR is lower quality today (see the Ultra HFR FAQ), because higher frame rates traditionally require faster shutter speeds, more light per frame, and lower resolution per frame due to limited processing speed. But that is, metaphorically, classic Newtonian-style thinking. The weak links of Ultra HFR described in the Ultra HFR FAQ need to be solved concurrently.
However, these are actually solvable problems:
1. Processing speed is solvable by various kinds of parallelism (e.g. 8 video processing chips, each handling every 8th frame; as separate chips, or in the 2030s as "multicore compression processors", or as concurrent shaders on a powerful GPU, etc.). See the sketch after this list.
2. Faster shutter speeds are solvable by various tricks (AI-based motion-compensated noise filtering, and/or digitally stacking adjacent frames to lengthen exposure times beyond the frametime for dark sequences). And you can stack frames to create native lower frame rates with longer, lower-noise exposures. Astronomy already stacks frames. Video will do the same (with AI-based motion compensation) to create noise-free video frames by using adjacent frames as context (even near scene-change splices).
3. Lower resolutions are slowly becoming solved. We're finally getting 8K video in smartphones, and as processing improves, that headroom can be redirected to extra frames. Expansions in processing power will flow to better temporals once we've milked spatials for maximum humankind benefit.
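To make item 1 a bit more concrete, here is a minimal sketch (my own illustration, not a shipping design) of round-robin frame parallelism: each of 8 workers handles every 8th frame of a simulated 1000fps second. The encode step is just a placeholder checksum standing in for a hypothetical real per-frame codec, so the example stays self-contained and runnable.

```python
# Minimal sketch of round-robin frame parallelism (item 1 above).
from concurrent.futures import ProcessPoolExecutor
import numpy as np

NUM_WORKERS = 8      # e.g. 8 compression chips, cores, or shader groups
FRAME_COUNT = 1000   # one second of a simulated 1000fps capture

def make_frame(i):
    rng = np.random.default_rng(i)
    return rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # tiny stand-in frame

def encode_stripe(stripe):
    # Placeholder "encoder": a real system would run a video codec per frame here.
    return [(index, int(frame.sum())) for index, frame in stripe]

if __name__ == "__main__":
    frames = [(i, make_frame(i)) for i in range(FRAME_COUNT)]
    # Worker k gets frames k, k+8, k+16, ... (every 8th frame).
    stripes = [frames[k::NUM_WORKERS] for k in range(NUM_WORKERS)]
    with ProcessPoolExecutor(max_workers=NUM_WORKERS) as pool:
        results = [item for stripe in pool.map(encode_stripe, stripes) for item in stripe]
    print(f"Encoded {len(results)} frames across {NUM_WORKERS} parallel workers")
```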
We can envision a world where 1000fps masters can output flawless motion-blurry 23.976fps, 24fps, 50fps, 59.94fps, and 60fps prints from the very same UltraHFR master. A sufficiently high framerate resembles analog frameratelessness, making a perfect framerate-independent master. One master, any framerate print, even non-integer-divisible!
When frametimes are extremely fine-grained (under 1ms), it is possible to forgo exact multiples, because judder falls below human-detectable thresholds and you have many ultrabrief samples before/after (massive context for ultra-accurate AI). You can then simply use modern AI to produce a compensated "cut" between two adjacent 1/1000sec frames, by estimating what frame would have been captured about 0.4321ms after the previous frame but 0.5789ms before the next frame.
Imagine, then, that cinematographer-optimized versions of future realtime AI art engines (like DALL-E 5 or whatever) can use many previous frames and many subsequent frames to almost perfectly create intermediate frames at the non-divisible point, if necessary. An AI-based partial-splice engine simply uses the whole video file as its own "training material" and creates near-perfect guesses of frames in between frames, including perfect parallax compensation (e.g. panning past picket fences and the like won't have those infamous Motionflow-style artifacts). And because this is blended into less than 5% of the stacked 23.976fps output, we can eliminate the need for integer divisibility, creating perfect 23.976fps prints from 1000fps masters or framerateless masters. One simplified, crude way for a human brain to imagine this: the 23.976fps output would essentially internally convert 1000fps into, say, 23976fps (with the help of AI), and then stack 1000 of those frames per 23.976fps output frame, to generate perfect 23.976fps output just like a native 23.976p camera.
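To put rough numbers on that "less than 5%" claim, here is a small sketch (mine, not from any existing tool) that simply computes which 1/1000sec source frames fall into each 23.976fps output frame and what weight each gets. The fractional boundary weight is where the AI splice described above would operate; here it is replaced by a plain linear weight just to show the proportions.

```python
from fractions import Fraction

SOURCE_FPS = 1000
OUTPUT_FPS = Fraction(24000, 1001)            # exactly 23.976... fps
SPAN = Fraction(SOURCE_FPS) / OUTPUT_FPS       # ~41.708 source frames per output frame

def stack_weights(n):
    """[(source_frame_index, weight)] for output frame n of the 23.976fps print."""
    start, end = n * SPAN, (n + 1) * SPAN
    weights = []
    f = int(start)
    while Fraction(f) < end:
        overlap = min(Fraction(f + 1), end) - max(Fraction(f), start)
        weights.append((f, float(overlap)))
        f += 1
    return weights

for n in range(3):
    w = stack_weights(n)
    partial = [weight for _, weight in w if weight < 0.999]
    print(f"output frame {n}: {len(w)} source frames stacked, "
          f"{len(partial)} fractional boundary frame(s), "
          f"{sum(partial) / float(SPAN):.1%} of the stack is AI-spliced")
```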
The native Ultra HFR (1000fps+) capture must use a permanent 360-degree shutter -- an unadjustable shutter -- with shutter speed and frame rate adjusted in post.
Then the same grainy, noisy 1000fps+ RAW master can output (see the sketch after this list):
- Noisefree (Any fps) at (Any shutter speed) from the same UltraHFR master!
- Noisefree 60fps television material at 1/60sec (360-degree shutter emulation)
- Noisefree 60fps television material at 1/1000sec (low blur sports)
- Noisefree 50fps PAL television material at 1/1000sec (low blur sports)
- Noisefree 23.976fps at 1/100sec exposure per frame
- Noisy 60fps at 1/60sec
- Your favourite Hollywood Filmmaker setting
- Your favourite UltraHFR setting
- Or any dream framerate at any dream exposure, whatever you want
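Here is a minimal numeric sketch (my own simplification, not production code) of how the shutter speed gets decoupled from the output frame rate when rendering those prints from a 360-degree 1000fps master: the emulated exposure is just the number of 1/1000sec source frames averaged per output frame, rounded to a whole frame for simplicity (the fractional-frame refinement is the AI splice discussed above).

```python
import numpy as np

SOURCE_FPS = 1000

def render_print(master, output_fps, shutter_seconds):
    """Average 1/1000sec source frames from a 1000fps master into an output print.

    master: array of shape (num_source_frames, H, W), one frame per 1/1000sec.
    """
    frames_per_exposure = max(1, round(shutter_seconds * SOURCE_FPS))
    out = []
    t = 0.0
    while True:
        start = int(round(t * SOURCE_FPS))
        end = start + frames_per_exposure
        if end > len(master):
            break
        out.append(master[start:end].mean(axis=0))  # stacking = emulated motion blur
        t += 1.0 / output_fps
    return np.stack(out)

# Dummy one-second 1000fps master (random noise stands in for real footage).
master = np.random.default_rng(0).random((1000, 32, 32))

tv_full_blur = render_print(master, output_fps=60, shutter_seconds=1 / 60)       # ~17 frames stacked
sports_low_blur = render_print(master, output_fps=60, shutter_seconds=1 / 1000)  # 1 frame, crisp
print(tv_full_blur.shape, sports_low_blur.shape)
```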
So the "native" purists are thinking Newtonian instead of Einsteinian, when all you really want is a file format that outputs as perfectly as a native-framerate native-shutter native-ISO workflow. And the allure of a framerateless video file has humongous advantages for tomorrow's multi-Hz consumption.
You can have your HOLLYWOOD FILMMAKER MODE cake and eat the ULTRAHFR cake too. Both are output from the same framerateless master.
A single UltraHFR master can now produce all of the above, without needing integer-divisible frame rates. You could even adjust ISO in post, shutter in post, framerate in post, etc., from an essentially framerateless RAW master (without needing a timecoded photon camera)! Eventually we will need a framerateless compression algorithm for future framerateless masters, but for now, 1000fps+ / 1000Hz+ will be a de facto workaround for a framerateless video format.
(Creating accurate framerateless masters will mandatorily require 360-degree shutters, so you need to brute-force the frame rate to allow fast shutter speeds from a 360-degree master; there is a need to record continuously for maximum spatial and temporal video quality. 1000fps at 360 degrees is still a sports-fast 1/1000sec, after all! So you use that as the master for all motion-blurrier shutter speeds.)
This is achieved by a simultaneous combination of: AI-based frame stacking + AI-based infill painting algorithms + AI-based flawlessly motion-compensated noise correction (using adjacent frames as context) + AI for the non-integer-divisible overlapping frames. It's amazing what AI can do; it just needs to be combined with Ultra HFR technologies to make a de facto framerateless master video that doesn't require a framerateless video file format yet (since such file formats have not yet been invented, except inside game engines like UE5).
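For orientation only, here is how those four pieces might slot together as a workflow skeleton. Every ai_* function below is a trivial stand-in (pass-through or linear blend) marking where a real AI stage would go; it is not an implementation of any of them.

```python
import numpy as np

def ai_motion_compensated_denoise(frames):
    # Stand-in: a real stage would align adjacent frames and filter noise.
    return frames

def ai_infill(frame):
    # Stand-in: a real stage would repair revealed/occluded (parallax) regions.
    return frame

def ai_subframe_splice(frame_a, frame_b, fraction):
    # Stand-in: a real stage would estimate the scene "fraction" of the way
    # between two adjacent 1/1000sec frames; linear blending is only a placeholder.
    return (1.0 - fraction) * frame_a + fraction * frame_b

def stack(frames, weights):
    # Frame stacking: emulate a longer exposure from many 1/1000sec samples.
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.asarray(frames, dtype=float), axes=1)

# De facto framerateless render of one output frame from a dummy 1000fps second:
master = np.random.default_rng(0).random((1000, 32, 32))
clean = ai_motion_compensated_denoise(master)
frame = ai_infill(stack(clean[0:42], [1.0] * 41 + [0.7]))  # ~41.7-frame 23.976fps stack
print(frame.shape)
```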
How To Say Goodbye to the Integer-Divisibility Requirement (Hello, human-vision-perfect 23.976fps from 1000fps masters):
Early work exists on things like AI-based interpolators, but here you're doing things in a very different workflow: recording at an ultra high sample rate to a framerateless video file, and pushing the processing to post. You'd have the standard filters, but other filters would be available too. You've also seen some of the AI-based slow-motion generators; imagine that improved 100x and put into a realtime workflow (at least in early simulations of framerateless masters), or used only at the non-integer subframe splices, i.e. properly splitting adjacent frame pairs at the non-integer-divisible locations. For 1000fps, the AI-prediction error margin would create less than 0.1% deviation from a native camera, which is below a 1/1000 color shade difference.
Today, AI is very flawed at some things, but look at the recent leap on certain line items: near-perfectly parallax-compensated interpolations with error below the human-visible noise floor. At 1000fps, you have a lot of temporal samples. Besides, each 23.976fps output frame stacked from a 1000fps master spans roughly 41.7 source frames, and only the one or two fractional boundary frames need any AI-based splicing at the non-integer intervals; all the other frames are untouched source. So even if the AI splice creates a 1% error, it is diluted by the stack of non-AI-touched frames, pushing the error margin of the final frame below 0.1%. Basically, only the boundary frames' small share of the stack passes any AI error margin through to the final frame.
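Spelled out as back-of-envelope math (my simplification, assuming the stack is a plain weighted average):

```latex
% Error dilution of the AI splice within one stacked 23.976fps output frame:
N_{\mathrm{stack}} = \frac{1000}{23.976} \approx 41.7, \qquad
w_{\mathrm{boundary}} \lesssim 1.7 \ \text{(the one or two fractional frames)}

\epsilon_{\mathrm{output}} \;\approx\; \frac{w_{\mathrm{boundary}}}{N_{\mathrm{stack}}}\,\epsilon_{\mathrm{AI}}
\;\approx\; \frac{1.7}{41.7} \times 1\% \;\approx\; 0.04\% \;<\; 0.1\%
```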
So now you can output human-perception-flawless 23.976fps from an exactly-1000fps Ultra HFR master! Goodbye, integer divisibility requirement! Hello, "any framerate from the same master"!
It won't be a native frame rate, but it is fundamentally the temporal equivalent of the world's best AI-based scaling algorithms, except the science is applied to the temporal dimension instead of the spatial dimension, to achieve the universal framerateless "master" (or "main", if preferred) file.
In fact, it is possible for AI-based algorithms to also be trained to recognize the camera's noise characteristics and automatically compensate for them (e.g. pixels that have worse S/N margins, pixels that have hotter/colder floors, aka hot pixels, how the noise "flows", etc.), optionally (at the video editor level) preserving camera noise or reducing it as much as you want (down to the camera's native noise floor for long exposures).
In simple terms, the magic sauce allows a 1/1000sec shutter to be as low-noise as a 1/24sec shutter.
This is thanks to AI-based noise correction via temporal frame-stacking, with AI-based compensation for motion, so even fast-motion 1000fps footage is correctly noise-filtered to create bright-image, low-blur 24fps 1/1000sec motion. The science is imperfect today, but there is context from thousands of adjacent frames. Properly applied, 1 second at 1000fps is a lot of material for an AI neural network, to the point where it contains enough information for the best skunkworks AI-based motion-compensated frame-stacking algorithm to successfully create 1/1000sec-exposure frames as bright as 1/24sec-exposure frames, including excellent parallax behavior (e.g. things revealed behind objects). The brute sample rate solves a lot of problems with AI artifacts when the science is applied properly. AI can already create 3D worlds (AI-assisted photogrammetry) from a few photographs, more and more realistically in a shocking geometric improvement, and this science improves a lot when you've got 1000 photographs of the same scene over one second (1000fps = literally 1000 photographs per second) and you are using all 1000 photographs to estimate something that only happens for 0.5 milliseconds. AI error margin goes WAY down. Boom, problem solved (though not with commercially available AI engines -- this is next-generation [bleep] in laboratories).
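As a rough sanity check on the "1/1000sec as bright and clean as 1/24sec" claim, here is the idealized shot-noise arithmetic (my simplification; real sensors add read noise, which makes stacking slightly worse than one long exposure):

```latex
% One 1/1000 s frame collects P/1000 photons per pixel (P = photons per second):
\mathrm{SNR}_{1/1000} \propto \sqrt{P/1000}

% Motion-compensated stacking of N = 1000/24 \approx 42 aligned frames collects
% the same light as a single 1/24 s exposure, so in the shot-noise limit:
\mathrm{SNR}_{\mathrm{stacked}} \propto \sqrt{N \cdot P/1000} = \sqrt{P/24} \approx \mathrm{SNR}_{1/24}
```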
You see, AI processing a firehose of 1000 photographs per second would concurrently analyze multiple in-memory intermediate workflows (stacked for lumen resolution, unstacked for motion resolution) and combine the two for concurrently bright images at excellent fast-shutter motion resolution. This may not be quite realtime yet, but you can still do it in post production (at any time interval you want) if you only had a framerateless video file format that could be recorded in real time; the AI magic sauce happens in post production and does not have to be fully realtime. AI could also intentionally raytrace a few rays into the scenery to "double check" its own work and make adjustments as necessary, in a loopback fashion, until the output is "human perfect" (e.g. <0.1% deviation from the theoretical native workflow), with AI smart enough to self-recognize "ugh, it doesn't look correct". Newer AI self-refactors the imagery until it's human-perfect without parallax artifacts, like a perfect Photoshopper... (DALL-E and MidJourney don't even do this yet, but when you combine them with some temporal AIs, amazing magic happens). This isn't AI creating frames from scratch. This is AI simply fixing things, like a noise filter or scaler: just the bare minimum AI necessary to make a framerateless master video file format output any framerate while keeping that output framerate looking native.
This is "2030s Cinematography" talk mind you... But aspiring Ultra HFR enthusiasts need to be reminded we need dramatic geometric jumps in refresh rates for bigger-population majorly human-visible benefits (60 -> 240 -> 1000 -> 4000 fps=Hz), because some can't tell 60 vs 120 like Grandmas can't tell VHS vs DVD. But even my grandma can tell 120Hz vs 1000Hz in forced-motion-resolution-readability tests like www.testufo.com/map (or other tests that force identification of things inside motion blurs), it's a much more dramatic difference up the diminishing curve of returns, like VHS-vs-8K.
Creating a framerateless master will become more important in the next 10-20 years, when vendors are able to successfully glue together all the necessary technologies to create a proper workflow that produces output (e.g. 24fps) that is no worse than today's workflows. It's already possible now, but the AI skills haven't reached the camera makers, the video codec engineers, and so on. The individual components are already available, but few people in those specific circles yet know how to temporally glue all of this together to create the framerateless master video file. However, Blur Busters textbook reading such as blurbusters.com/area51 is training many of tomorrow's video researchers -- e.g. the kids of the people who work at orgs like the Moving Picture Experts Group (MPEG) or VESA, etc. By the time the 2030s roll around, combine AI skills, video standards skills, and camera-manufacturing skills, and then watch out.
Very few (<1%) are talking about the concept of framerateless masters, but it's like the 1980s Japan MUSE HD researchers talking about future digital 4K. We're roughly at that stage when it comes to the science of framerateless master video files -- i.e. a video file that uses analog-style coded motion (no discrete frame rate).
HDR masters can still output SDR prints
8K masters can still output 1080p prints
1000fps+ (or framerateless) masters can still output proper HOLLYWOOD FILMMAKER MODE motion-blurry 24p prints
Etc.
But mark my words: by the decade of 2030-2039, H.268 or H.269 might consider going framerateless once the technologies, researchers, and skills finally make the (de facto) "framerateless master video file" possible, able to flawlessly output all framerates at all ISOs at all shutter speeds you want in post-production.
In other words, you can output:
- Any ISO in post production (matching quality of the ISO-native workflow)
- Any framerate in post production (matching quality of the framerate-native workflow)
- Any shutter speed in post production (matching quality of the shutter-speed-native workflow)
- Any amount of motion blur you want (or lack thereof) in post production (matching quality of native workflow)
- Any amount of "original" camera noise (up to camera's best) you want in post production (matching quality of native workflow)
- No integer divisibility needed between the original and output frame rates
Yep. It's possible!
Video production can make all the camera-setting artistic decisions in post-production instead, since the file has sufficient information.
Understandably, temporal science is harder for people to understand than spatial science, given the number of researchers still going "huh?" at 240fps vs 1000fps (despite it being human-visible to >90% of the population at GtG=0ms). So there's some assumption/bias inertia that prevents some researchers from properly inventing a framerateless master video file, even though a few Einsteins have figured out it's now technologically possible, to the point where enough subsections of the workflow have been tested to confirm the theoretical framerateless video workflow.
Fractions of the workflow have already been demonstrated by various AI and neural-network researchers. It might be almost a human generation before the video guys "get" it and weave together an industry standard for such a universal master temporal format. Hopefully sooner, who knows?
This is all future stuff undergoing line-item research on the various tiny subsets of this workflow -- fascinating for researchers involved in the temporal domain.