Haste wrote:What if you used Tobii eye tracking to identify which elements are being tracked and exclude them from the GPU blur? (That way you target the blur specifically to the areas where stroboscopic stepping occurs.)
It's simple (to my human brain) -- being the wizard of persistence applies to understanding motion blur vectors too.
It's much easier if you do it globally.
Eye-tracked GPU blur effects do help! Eye-tracked GPU blur-effect compensation is easiest done globally. The algorithm for eye-tracked GPU motion blur is: globally modify the blur vector (whole screen, not locally) by the eye-tracked vector between two refresh cycles.
This is very useful for modifying/enabling/disabling GPU blur effects to eliminate the stroboscopic effect of objects moving at speeds different from your eye-tracking speed.
But eye-tracked blur-vector compensation must be done essentially laglessly (less than 10 milliseconds) to be perceptually lagless -- in order to make eye-track direction changes and flit-back-and-forths feel instantaneous. Otherwise, at tens of milliseconds, the lagged blur effects become pukeworthy and nauseating, and you should not bother with eye-tracked motion blur compensation.
When this happens, tolerating phantom arrays is the lesser of evils at this stage.
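As a trivial illustration of that decision rule (the ~10ms budget is the rough number from above; the function and constant names are hypothetical, and this is a numpy-style sketch rather than anyone's shipping code):

```python
import numpy as np

# Rough sketch: fall back to plain pan blur (tolerating the phantom array)
# whenever the eye tracker can't deliver gaze vectors within roughly the
# ~10 ms "perceptually lagless" budget guessed above.
LAGLESS_BUDGET_MS = 10.0  # hypothetical threshold constant

def choose_global_blur_vector(pan_vec, eye_vec, tracker_latency_ms):
    pan = np.asarray(pan_vec, dtype=float)
    if tracker_latency_ms > LAGLESS_BUDGET_MS:
        return pan                                 # stale gaze data: skip compensation
    return pan - np.asarray(eye_vec, dtype=float)  # eye-compensated global blur vector
```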
Now, basically, if you want to do eye-track blur compensation globally for the whole frame, the formula becomes very simple:
1. The default GPU blur vector is the panning vector.
Slow panning has less blur; fast panning has more motion blur. Perfect counter-balancing of the phantom array effect always requires motion blur from the old position (previous frame) to the new position (current frame).
2. Subtract the eye-tracked vector from the GPU blur vector.
The eye-tracked vector runs from the position of the eye at the previous frame to the position of the eye at the current frame.
3. Execute the final blur vector on the entire frame. Simple linear motion blur.
4. Optional: Add a little blur antialiasing with a ~10% fade-in/fade-out at the ends of the linear motion blur. That way, the linear blur trail doesn't have a sudden start / sudden stop. Basically, temporally anti-alias the motion blur trail between adjacent frames by gently fading in/out the beginnings/endings of one frametime's worth of linear motion blur.
This is mathematically very simple for a global blur compensation.
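Here's a minimal sketch of steps (1) through (4), written as a CPU-side numpy mock-up rather than a real GPU post-process pass (the function name, tap count, and the integer-pixel np.roll sampling are my own illustrative assumptions; a real shader would sample sub-pixel positions along the same vector):

```python
import numpy as np

def eye_compensated_global_blur(frame, pan_vec, eye_vec, taps=16, fade_frac=0.10):
    """Global eye-tracked motion blur compensation, steps (1)-(4).

    frame    : HxWx3 float array, the current rendered frame
    pan_vec  : (dx, dy) camera/pan motion in pixels since the previous frame
    eye_vec  : (dx, dy) eye-gaze motion in pixels over the same interval
    taps     : number of samples along the blur trail
    fade_frac: fraction of the trail faded in/out at each end (step 4)
    """
    # Steps (1) + (2): default blur vector is the pan vector, minus the eye vector.
    blur_vec = np.asarray(pan_vec, dtype=float) - np.asarray(eye_vec, dtype=float)

    # Step (4): per-tap weights with a gentle fade at both ends of the trail.
    t = np.linspace(0.0, 1.0, taps)
    weights = np.minimum(1.0, np.minimum(t, 1.0 - t) / max(fade_frac, 1e-6))
    weights /= weights.sum()

    # Step (3): accumulate the frame along the single global blur vector.
    out = np.zeros_like(frame, dtype=float)
    for ti, w in zip(t, weights):
        dx, dy = np.rint(blur_vec * ti).astype(int)
        out += w * np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return out
```

Note that a perfectly tracked pan (eye vector equal to pan vector) yields a zero blur vector and the frame passes through unblurred -- which is exactly fix (B) below.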
This fixes:
(A) Phantom array effect of moving objects during fixed-gaze-on-static-objects
(B) No motion blur for the specific object you're eye-tracking (the eye-tracking vector is subtracted from the frame-duration blur vector, resulting in zero blur for that particular moving object you're staring at).
(C) Works fine for multi-direction motion in the same frame. As soon as you eye-track a different object, its blur vector is zeroed successfully.
But does not fix:
(D) Strobed mode only: the phantom array effect of stationary background objects (and slower-moving objects) while eye-tracking moving objects. Global blur vectors, used in the typical simple way, cannot reproduce the "sharp-ball-on-blurred-background" phenomenon (clear moving object, blurred background) during eye-tracking on strobed displays.
Fixing (D) simultaneously is much more complicated for a GPU. Imagine panning a camera on a moving object (a flying ball) -- you get a sharp ball and a blurred background. Try doing exactly the same thing on a GPU -- that's what you also need to do to fix situation (D) for a strobed display: multiple blur vectors (instead of one global blur vector), depending on each pixel's motion speed relative to the eye-tracking. Basically a per-pixel version of the (1)/(2)/(3) formula. It's possible, just much more difficult -- more processing. Start small before you go big.
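A rough per-pixel sketch of that idea (again numpy rather than shader code; motion_buffer stands in for whatever per-pixel motion-vector buffer the engine exposes, and the nearest-pixel gather and tap count are my own simplifications):

```python
import numpy as np

def per_pixel_compensated_blur(frame, motion_buffer, eye_vec, taps=8):
    """Per-pixel version of the (1)/(2)/(3) formula: each pixel blurs along its
    own residual vector -- its motion since the previous frame minus the
    eye-tracked gaze motion over the same interval."""
    h, w = frame.shape[:2]
    residual = motion_buffer - np.asarray(eye_vec, dtype=float)  # step (2), per pixel
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(frame, dtype=float)
    for ti in np.linspace(0.0, 1.0, taps):
        # Step (3): gather backwards along each pixel's own residual vector.
        sx = np.clip(np.rint(xs - residual[..., 0] * ti).astype(int), 0, w - 1)
        sy = np.clip(np.rint(ys - residual[..., 1] * ti).astype(int), 0, h - 1)
        out += frame[sy, sx] / taps
    return out
```

A pixel on the object you're eye-tracking ends up with a near-zero residual vector and stays sharp, while background pixels blur in proportion to their speed relative to your gaze -- which is exactly the camera-panning-on-a-flying-ball look described above.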
I would love to know about current implementations of eye-tracked motion-blur-vector compensation algorithms; I've not experimented much in actual practice at the moment. Many GPU blur effects in games have historically been global-frame blur rather than per-pixel blur, so they can't reproduce the "sharp-ball-on-a-blurred-background" phenomenon of camera-tracking a ball flying through midair. But I wonder if some of them now do per-pixel GPU blur effects (different blur vector calculations depending on relative object movement speeds). If that's actually being done in practice in any video game right now, then excellent! It makes things much simpler: you skip (1) and just focus on (2)/(3) in the formula -- basically a blur vector modifier from the eye-track vector between two frames.
By doing all of this, eye-tracked motion blur compensation allows you to get away with low refresh rates (via strobing) and be largely blur-free + phantom-array-free, no matter whether you're fixed-gaze or fast-eye-tracking -- at least for single-person applications, provided (1)/(2)/(3)/(4) are all done simultaneously (skip (4) if using ultra-high-Hz sample-and-hold; persistence makes the human eye do the background motion blurring for you). So you may be able to avoid the need for ultra-high-Hz 1000fps@1000Hz for virtual reality applications and still get fairly close to passing the reality test. But only if eye-tracked motion blur compensation is done pretty laglessly.
Otherwise, lagged blur effects become noticeable. Even an instantaneous momentary defocussing of text during a rapid start or rapid stop of eye movement can still be noticeable. (A single-frame sudden momentary-defocussing experiment is visible at 144Hz -- 6.9ms!! -- so that puts some rather severe demands on passing the Holodeck Turing Test using eye-tracked motion-blur-vector compensation.)
Oculus found you ideally needed about 8ms. But if you're going 180-degree retina (e.g. future 16K wraparound VR) and trying to do eye-tracked motion blur compensation, my super-rough guesstimate is that you'll probably need on the order of 1ms latency for eye-tracked motion blur vector compensation to pass the Holodeck Turing Test. The exact latency number for retina VR isn't known yet, but it's more aggressive than for 1200p or 1440p VR. In my opinion, 1000fps @ 1000Hz (at, say, 8ms chain lag) is a much easier goal for humankind to achieve than 1ms full-chain input-to-photons latency (head-tracking/eye-tracking/etc). Ideally you want it all simultaneously (zero latency, retina persistence, retina frame rates, retina resolutions)...
Don't quote me on this guessed number (1ms needed for perceptually lagless eye-tracked motion blur compensation algorithms on sub-1ms-MPRT retina VR displays such as 8K and up), but it boils down to the vicious-cycle effect: the higher the resolution of the display and the more FOV you get, the more imperfections (versus real life) can reveal themselves above the human detectability noisefloor, from side effects like persistence/stroboscopics/motion blur/lag/etc.
The vanishing point of diminishing returns is pushed far further down the curve the closer you make a display to real-life resolution (spatially, temporally, gamut, lag, etc).
I'm super-glad there are multiple horses in this race, to solve all the VR problems (motion blur, stroboscopic effect, rendering power). And the eye-tracked-blur-compensating horse may very well win the (single-person) Holodeck Turing Test race. Obviously, it all falls apart if multiple people are eye-tracking the same display, but that's not an issue for VR headsets.
One might be able to come up with a subdisplay-in-display technology (like Varjo does, with successful seam elimination between displays), with a smaller retina subdisplay that analog-moves in perfect lagless sync with eye tracking, then combine that with foveated rendering, then combine that with eye-tracked motion blur vector compensation (GPU blurring applied independently on both of the two displays, concurrently, depending on each subdisplay's panel motion relative to the fovea's focus). And do it on a processing chain with low enough lag. And fix the long-standing virtual-reality focal-depth issues (so you can actually comfortably focus/unfocus on virtual objects closer to / further from you). If all of it is done simultaneously and flawlessly -- it may theoretically eliminate the need for 16K VR and 10,000fps@10,000Hz by combining a lot of tricks/bandaids together all at once, but perceptually seamlessly and laglessly combining all of those solutions is going to be an insane engineering feat too.
Whatever happens, successfully passing the Holodeck Turing Test within my lifetime (by ~2050?) would be a display dream of many come true. Not being able to tell apart transparent ski goggles versus a VR headset in an A/B blind test. Guessing that the real world is VR, and VR is the real world -- incorrectly -- at a statistically 50%-50% guess ratio -- because the VR becomes so good it's indistinguishable from real life. This may actually take until the end of the 21st century to universally achieve (except for stare-at-the-sun situations, which we never want -- ha!). But some tests may pass the A/B blind test (VR is reality, reality is VR) within a mere couple of decades. Just not for all graphics/worlds.
Eye-tracked motion-blur-vector compensation is definitely a useful item in the toolbox for battling the problem of being simultaneously blur-free + phantom-array-free on a low-Hz strobed display.