Yep. Alas, it illustrates that we're going to be stuck with strobing for a long time. Strobing is a band-aid for humankind, because we can't fully fix display motion blur at today's two-digit or three-digit refresh rates.
Motion blur is lovely sometimes (Hollywood Filmmaker 24fps Mode is great), but other times motion blur is absolutely, astoundingly evil (display motion blur causes massive nausea and headaches in virtual reality headsets, as discovered in early non-strobed VR headsets). So the type of application a display is used for can increase or decrease the importance of display motion blur...
I hate going off topic in these threads, but I love to help other researchers learn more about the refresh rate race, even if they are researching other things (e.g. camera shutters or optical illusions), because understanding the refresh rate race helps a researcher become smarter about unanticipated effects that are often overlooked... So without further ado:
The easiest way to understand display motion blur is Blur Busters Law: 1ms of pixel visibility time translates to 1 pixel of motion blur per 1000 pixels/sec. Blur Busters Law is simply an easy interpretation of MPRT(100%). Blur Busters Law is essentially our "E=mc^2"-style simplification of the MPRT formula, which is complex only because of the 10%->90% cutoff. But if you assume GtG=0 and MPRT measured 0%->100%, then motion blur is suddenly elementary-school mathematics, like a camera shutter.
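To make the arithmetic concrete, here is a minimal sketch of Blur Busters Law as stated above (the function name is mine, a hypothetical helper; the math assumes GtG=0 and MPRT measured 0%->100%):

```python
def motion_blur_px(visibility_ms: float, speed_px_per_sec: float) -> float:
    """Blur Busters Law: blur (pixels) = pixel visibility time (ms)
    x motion speed (px/sec) / 1000, assuming GtG=0, MPRT(100%)."""
    return visibility_ms * speed_px_per_sec / 1000.0

# 1 ms of persistence at 1000 px/sec -> 1 pixel of motion blur
print(motion_blur_px(1.0, 1000.0))            # 1.0

# A 60Hz sample-and-hold display (~16.7 ms persistence) at 960 px/sec:
print(motion_blur_px(1000.0 / 60.0, 960.0))   # 16.0 pixels of blur
```

The same function works for a camera shutter: substitute shutter time for visibility time and the blur trail length comes out identical.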
240fps 240Hz 0ms GtG (for blur-free source material that is panning in motion) has exactly the same motion blur as a 1/240sec SLR camera shutter photographing panning scenery (like lens-following a sports athlete), for the same physical angular motion speeds relative to human vision. In other words, humans can tell apart 1/240sec and 1/1000sec sports photographs more easily than 1/240-vs-1/480 or 1/480-vs-1/1000; the bigger blur differentials are easier for average laypeople to see. (It also applies to mundane things such as text scrolling in web browsers -- the raison d'etre of 120Hz iPads, and you can bet Apple is internally testing 240Hz iPads with DisplayWeek's newly announced 240Hz OLEDs...)
So for some researchers, reframing the refresh rate race debate as blur equivalence to a camera shutter is helpful, especially for those in trades experienced with camera blur behaviors. It helps them drop a specific Hz limit or Hz assumption, because Hz limitations create aggregate effects like blur (the sample-and-hold effect...). It also highlights the need to test blur differentials, and to understand the "geometric requirement" of the refresh rate race. Today's Hz incrementalism hides a lot of this, and often leads to mistaken assumptions.
A low-resolution photograph (e.g. an 8mm or 16mm camera frame) makes motion blur differentials harder to see, because the stationary image already has low resolution, so better motion resolution doesn't help as much. So 70mm film (metaphorical equivalent of panning 8K scenery) makes shutter-speed differentials easier to see than 8mm film (metaphorical equivalent of VHS scenery). E.g. a 1/240sec shutter vs a 1/1000sec shutter (in camera parlance), versus 240fps-at-240Hz vs 1000fps-at-1000Hz (in refresh rate race parlance), have identical persistence blur: the sample-and-hold blur mathematics maps exactly to camera shutter blur mathematics when GtG=0, jitter=0, framerate=Hz. The blur mathematics suddenly becomes identical when those variables are set! (Not all researchers realize this.)
It is simply Blur Busters Law (our simplification to make display motion blur super easy to understand).
...Therefore for framerate=Hz, GtG=0, jitter=0;
Nonstrobed: Blur is exactly the same as frametime
Strobed: Blur is exactly the same as squarewave pulse width duration
...Therefore;
We know Oculus Quest 2 has a typical strobe pulse width of 0.3ms, at least for early models at the default setting (let's ignore squarewave ripple on an oscilloscope for practicality's sake);
...Therefore;
To match 0.3ms blur, we need 0.3ms frametime;
...Therefore;
1000 milliseconds per second, divided by 0.3ms ~= 3333
...Therefore;
We need 3333 unique frame snapshots per second along the motion vector, to match the motion clarity of 0.3ms strobing.
(Note: Error margin applies, it could be a 0.32ms or a 0.28ms pulse width; newer headsets may be different. The math still ends up creating a multiple-thousands refresh rate requirement -- still demonstrates the unobtainium without strobing)
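The derivation above fits in a few lines of arithmetic (the helper name is mine, a hypothetical illustration; it assumes framerate=Hz, GtG=0, jitter=0 per the premises above):

```python
def required_hz_for_pulse(pulse_width_ms: float) -> float:
    """Sample-and-hold refresh rate (at framerate=Hz, GtG=0) whose
    frametime matches a given strobe pulse width's motion blur."""
    return 1000.0 / pulse_width_ms  # 1000 ms per second / pulse width

# Quest 2's ~0.3 ms strobe pulse:
print(round(required_hz_for_pulse(0.3)))  # 3333

# Error margin: the conclusion survives pulse-width uncertainty
print(round(required_hz_for_pulse(0.28)))  # 3571
print(round(required_hz_for_pulse(0.32)))  # 3125
```

Either way the answer lands in the multiple-thousands, which is the point: strobe-free blur matching is unobtainium at today's refresh rates.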
For VRR, which is ideally permanently framerate=Hz within the VRR range, you can see motion blur vary up/down as framerate ramps up/down. It's a great demo of Blur Busters Law at www.testufo.com/vrr ... Low framerates vibrate, while high framerates blend into motion blur. Another great example of the "stutter=blur" continuum is www.testufo.com/eyetracking#speed=-1 as not all researchers realize stutter and MPRT blur are exactly the same thing; it's just a matter of the flicker fusion threshold (and any GtG-fuzzying of that threshold line, since slower-GtG displays will have a lower flicker fusion threshold due to the softer flicker waveform, and fast-GtG displays will have a higher flicker fusion threshold due to the more squarewave stutter-edge-flicker waveform). That's why 60fps is often smooth on LCD but still stuttery on OLED displays, a "behavior" that not all researchers realize is simply the flicker fusion threshold shifted by the different flicker waveforms (sinewaved by slow GtG). It can vary from 40-50fps (for LCD) to 70-80fps (for OLED) -- on opposite sides of the century-old refresh rate standard (aka 60Hz, dating back to the dawn of TVs).
Erratic stutter/judder (e.g. 3:2 pulldown et al) on top of regular, perfectly-paced stutter is simply an extension of sample and hold. Picture an erratic-height stairstep rather than a perfect stairstep: the biggest-height stairstep becomes the new motion blur weak link, i.e. stutter is a very simple mathematical worsening of perfectly framepaced, regular sample and hold. At low frequencies it's extra shakiness (jank), and at high frequencies it's just extra blur (blurrier than perfect framepacing), such that stuttery 360fps can look blurrier than perfectly framepaced 144fps.
I already teach training classes (I'm for hire; I fly over both the Atlantic and the Pacific, mainly to display-related companies) to demonstrate various elements of the refresh rate race. I do a bunch of TestUFO PowerPoints on different displays, run a high-speed camera in real time in the classroom, and show off results on multiple displays.
You have a 360Hz display.
They're INCREDIBLY educational to show off to other researchers.
In the vein of the saying "give a fish, and they're fed for a day; teach them to fish, and they're fed for life", I like giving researchers custom tests to show off to other researchers, because there is an almost infinite number of useful TestUFO tests when you consider all the possible useful combinations & permutations of the TestUFO parameters....
So here are some of my favourite show-off, custom-tweaked TestUFO tests for 360Hz displays.
Bookmark them for showing off to other researchers, or at least for self-education.
- TestUFO Stutter=Blur Continuum
Watch the 2nd UFO for at least 30 seconds. Observe how stutter blends into blur, and how blur blends back into stutter, purely as a function of framerate (relative to the flicker fusion threshold). Like a slowly vibrating music string (shaking) versus a fast vibrating music string (blurry). This is literally fundamental "Sample And Hold 101"; researchers are surprised that they do not fully grasp sample and hold until they finally understand stutter=blur (regardless of regular sample-and-hold "stutter" or erratic GPU/engine/pulldown "stutter"). For some people, this specific TestUFO demonstration is the metaphorical "Unifying Theory" eureka moment for the sample-and-hold blur=stutter continuum. No textbook explains things this way, and TestUFO is worth a thousand words in one link -- one custom-configured TestUFO link sometimes teaches more than a 100-page textbook! (Incidentally, silencing debates is exactly why I invented TestUFO as the Internet's de facto mic-drop factory.)
- TestUFO DLP Color Wheel Simulator
Wave your hand in front of this for rainbow effects. This TestUFO displays in color-sequential monochrome, generating color temporally, so I'm simulating a DLP color wheel with this specific TestUFO. 240Hz and up recommended for this test. Do not run this TestUFO on anything less than ~165Hz-ish, or you will get epileptic flicker.
- TestUFO Variable-Persistence Black Frame Insertion Demo for 360Hz, 6 UFOs at 60fps
This is a great demonstration of Blur Busters Law. More pixel visibility time = more motion blur. And I even compare low-framerate strobed with high-framerate non-strobed (bonus UFO at bottom).
- TestUFO Variable-Persistence Black Frame Insertion Demo for 360Hz, 8 UFOs at 45fps
Same demonstration of Blur Busters Law, with a different framerate/UFO-count combination (bonus UFO at bottom here too).
One thing I've noticed is that many developers don't always improve jitter when increasing frame rates -- it depends on the game. I've seen many games where framerate improvements don't seem to be very visible.
Have you ever tried a Quest 2 VR headset? Virtually all games on Quest 2 are perfect framerate=Hz VSYNC ON (ultra low latency). It's amazing how much more attention VR developers pay to jitter/blur. I have so much fun playing games like Star Wars: Tales From the Galaxy's Edge and other content that PC games (even Cyberpunk 2077 visuals) sometimes kind of bore me. Yes, I even have more fun playing on Quest 2's mobile GPU than on my personal NVIDIA RTX 3080, simply because the highly-funded VR screen is light years beyond any gaming monitor in motion blur elimination technology.
The motion-quality improvements far outclass the rendering-quality loss, to the point that you actually see more detail (during turning) on a Quest 2 than at 360fps-360Hz on an RTX 3090. Or I can cable up with a PC (a higher-cost VR headset with a cable) to get even better visuals, at the loss of flexibility. I hate the dizzying rollercoaster demos, but the comfortable content is really lovely, whether it be the VR-ified equivalents of Sierra Quest style stuff (Down The Rabbit Hole) or the AAA-class solo adventure-story FPS stuff (Tales From the Galaxy's Edge / Red Matter / The Walking Dead standalone / and for cabled PCVR, Half-Life: Alyx), depending on what you're into, as a translation from the PC gaming experience.
They all use the Vulkan API internally, and the developers have to maintain extremely perfect framepacing at framerate=Hz, since the wide-FOV experience and the panning behavior (head turns = perpetual screen scroll) amplify display motion blur to the hilt.
PC games are still far away from these VR principles, but you notice the massive improvements to Microsoft Flight Simulator fluidity: they were trying to improve the framepacing for VR, and it had a major side effect of becoming much better on high-Hz PC gaming monitors, because a lot of jitter/stutter was removed. Some of these hybrid VR/non-VR games (like Microsoft FS 2020) are a great textbook example of how programming improvements can translate very well to non-VR, improving the benefits of the refresh rate race on planar 2D displays.
But for now, the way even a lowly mobile GPU manages to preserve gametime:photontime to an accuracy of less than 100 microseconds is astounding -- they really whac-a-mole'd the jitter weak links, even with smartphone-quality camera sensors used for room tracking (surviving things like light changes, light switches, different times of day, increased camera sensor noise at night, etc.). Without any external PC, without an external sensor, etc. -- just put the headset on in a random room, let it do the near-photogrammetry scan for RoomScale in a new room, and you're playing VR in 30 seconds. It's easier to set up than an iPad; I wish other VR headsets were that easy. Standing ovation to Carmack (but not Facebook); at least they removed the Facebook login requirement (finally).
Kyouki wrote: ↑17 Aug 2022, 02:40
Vulkan API come to mind for instance, being much better frame pacing, frame times and overall resource use is much smoother and nicer. DOOM Eternal being one of those enjoyable examples. On the side I also invest time in some beta programs for games - or are some internal tester for some games, always trying to advertise or promote the use of better technologies for displays, rendering, graphics, etc. Trying for myself to understanding swapchain technology, MPO, and all the different display modes within game graphics api.

This is essentially about keeping gametime:photontime as perfectly relative as possible, considering error margins such as:
- Accurate motion math in engine
- Rendertime variances between adjacent frames
- Mouse jitter
- Graphics driver jitter (e.g. amount of time that passes between Present() and the first pixel spewing out of graphics output)
- Sync technology interaction with various refresh technologies (strobed, non-strobed, VRR, non-VRR). Remember that not all pixels refresh at the same time, and turning strobing on/off can affect jitter/latency mathematics weirdly. If you want that particular wall of text for your future reference, let me know; sometimes the competitive advantage swaps between VSYNC ON and VSYNC OFF with different display refreshing patterns, because of less violation of gametime:photontime for the different pixels across the 2D display plane.
Error margins become more visible when we're talking about MPRTs smaller than the stutter/jitter error margins, so things like turning on strobing can amplify the visibility of jitter, and switching from VSYNC ON to VSYNC OFF can produce much bigger jitter differences when strobed than non-strobed. There are easy scientific explanations for these, though they are non-obvious to most researchers.
So as Hz goes up, gametime:photontime divergences need to remain smaller than MPRT to prevent it from throttling MPRT (aka adding more blur from high-frequency stutter/jitter)... So the refresh rate race puts more demand on accuracy of the entire gametime:photontime relativeness pipeline.
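A simple threshold sketch of the "jitter must stay below MPRT" rule above (function name is mine, a deliberately crude model: real visibility depends on frequency and eye tracking, but the budget comparison is the core idea):

```python
def jitter_throttles_mprt(gametime_photontime_error_ms: float,
                          mprt_ms: float) -> bool:
    """Crude model: gametime:photontime divergence starts adding
    visible blur once it rivals the display's persistence (MPRT)."""
    return gametime_photontime_error_ms >= mprt_ms

# 1 ms of jitter hides inside a 60Hz sample-and-hold frametime (~16.7 ms)...
print(jitter_throttles_mprt(1.0, 1000 / 60))  # False

# ...but the same 1 ms of jitter swamps a 0.3 ms strobe pulse:
print(jitter_throttles_mprt(1.0, 0.3))        # True
```

This is why strobing (tiny MPRT) amplifies jitter visibility, and why the refresh rate race keeps tightening the accuracy demanded of the whole pipeline.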
Even stutters punching through VRR are often a side effect of things like rendertime variances (wild yo-yoing) or framebuffer-delay fluctuations. VRR tries to keep gametime:photontime constant, which is why, if you keep this variable accurate, even random framerate fluctuations look smooth: TestUFO Random Frametimes Look Amazingly Smooth in VRR. But this only happens if you successfully maintain gametime:photontime. Mere rendertime fluctuations can add visible stutter: if one frame takes much longer to render than the next (e.g. from texture streaming!), or vice versa, stutters can punch through. This is why Optane was kind of impressive for 360Hz VRR even though it only increased framerates by 1% -- it helped gametime:photontime quite a bit. Sadly, Intel discontinued Optane, so we'll have to rely on better stutter-resisting UE5 texture-streaming algorithms to prevent things like disk access and texture decompression overheads from jittering gametime:photontime relativity.
Vulkan API is a gift from the sky for the refresh rate race, it is certainly easier to keep gametime:photontime more consistent.
DLSS is a double-edged sword. It is a frame rate amplification technology of sorts, and you need to configure it in a way that the AI-based detail and framerate enhancement behaviors outweigh the blurring of imperfect AI-based frame rate amplification. (Classic interpolation is the same, but DLSS 2.0 is vastly superior at preserving detail.) And DLSS algorithms do sometimes add weird forms of frametime jitter (not always).
DLSS 2.0 does a much better job than DLSS 1.0, so if configured to prioritize quality, in some games its benefits outweigh the cons. It's a game of making the motion resolution increases (of more framerate) exceed the motion resolution decrease (of DLSS artifacts). It has a "sweet spot behavior" if you're prioritizing maximum motion resolution: max out DLSS for maximum framerate, and the DLSS blurring exceeds the motion resolution improvements of the frame rate increases. So back off a bit, tweak, and you find the sweet spot of maximal motion resolution improvement. But for some games (especially super-jittery games) the sweet spot is unfindable -- DLSS works better when you've got good content framepacing + a global sync tech (VSYNC ON) + a global refresh tech (strobing). (That's why all VR headsets are always VSYNC ON + strobed: it globalizes both the presentation and the refresh, for consistent gametime:photontime across all pixels of a display.) Removing time differentials between pixel refreshes sometimes creates unexpected outcomes.
So better frame rate amplification AIs will do a better job as more time passes.
My vision of future frame rate amplification technology is the combination of both temporal (ASW-like) and spatial (DLSS-like) frame rate amplification, to create 4x-10x frame rate increases that finally, massively outweigh the side effects of DLSS: even more DLSS accuracy (DLSS 3.0 and up) combined with ASW-like algorithms (like the Oculus Rift uses), where 45fps is almost laglessly converted to 90fps via reprojection techniques. If you could combine DLSS+ASW, you could do 4x-5x frame rate increases pretty easily for tomorrow's 1000Hz screens, while using only 200-250fps worth of GPU.
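The GPU budget arithmetic above is simple division (function name is mine, a hypothetical back-of-the-envelope helper, ignoring the per-frame overhead of the amplification pass itself):

```python
def rendered_fps_needed(target_hz: float, amplification_factor: float) -> float:
    """GPU framerate required when frame rate amplification
    (e.g. a hypothetical DLSS+ASW combination) multiplies
    each rendered frame into several displayed frames."""
    return target_hz / amplification_factor

# 4x-5x amplification targeting a 1000Hz display:
print(rendered_fps_needed(1000, 4))  # 250.0 fps of real GPU rendering
print(rendered_fps_needed(1000, 5))  # 200.0 fps of real GPU rendering
```

That's how a 1000Hz screen becomes reachable with only 200-250fps worth of native rendering.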
I feel we are still in the Wright Brothers era of frame rate amplification.