Temporal anti-aliasing for computer generated graphics

ScepticMatt
Posts: 37
Joined: 16 Feb 2014, 14:42

Temporal anti-aliasing for computer generated graphics

Post by ScepticMatt » 27 Feb 2014, 09:45

What is temporal aliasing and why is it a problem?

Due to the time-discrete nature of our displays, transformation frequencies (movement, rotation) greater than half the frame rate alias into false detail. For example, a wheel rotating at 120 Hz filmed by a 24 Hz camera with a very short exposure appears static when it should be blurred (the 'wagon-wheel effect'). Without motion blur, computer-generated images show aliasing even at 10,000 frames per second. Therefore, computer graphics have to simulate motion blur to reduce it.
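The wagon-wheel example is plain temporal aliasing: sampling folds any rotation rate into the range of plus/minus half the sample rate. A minimal sketch (Python; the function name is my own):

```python
def aliased_frequency(f_true, f_sample):
    """Fold a true temporal frequency (Hz) into the baseband
    [-f_sample/2, +f_sample/2) that a time-discrete camera or
    display can actually represent (Nyquist folding)."""
    return (f_true + f_sample / 2) % f_sample - f_sample / 2

# A wheel spinning at 120 Hz, filmed at 24 fps: aliases to 0 Hz (static).
print(aliased_frequency(120, 24))   # 0.0
# At 119 Hz it appears to creep backwards at -1 Hz.
print(aliased_frequency(119, 24))   # -1.0
```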

Rendering Examples:
Assume we have a 100 Hz display with 2ms persistence and 100% fill factor.
1000x1000 Pixels, 50 degree FoV.

Example 1. Stroboscopic effect:
Movement faster than 1 pixel per frame causes objects to jump.
Setup: Fixed eye, pixel moves at 233 pixels per second from left to right

No anti-aliasing: inconsistent, jumpy movement (yellow = ideal)
Image
Spatial anti-aliasing makes the movement time-consistent, but gaps remain
Image
Directional motion blur reduces gaps, but inconsistency remains
Image
A combined approach is needed.
Image
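For readers without the images, the stroboscopic setup above can be sketched numerically (Python; a simplified illustration of my own):

```python
# A point moving at 233 px/s sampled at 100 Hz lands on non-integer
# positions; snapping to whole pixels makes the per-frame step
# alternate between 2 and 3 pixels -- inconsistent, jumpy movement.
speed_px_s, refresh_hz = 233, 100
positions = [speed_px_s * frame / refresh_hz for frame in range(6)]
snapped = [round(p) for p in positions]
steps = [b - a for a, b in zip(snapped, snapped[1:])]
print(snapped)  # [0, 2, 5, 7, 9, 12]
print(steps)    # [2, 3, 2, 2, 3]  <- irregular step sizes
```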

Example 2: Eye tracking:
Eye tracking needs to be accounted for, especially when the screen has a large FoV, such as in head-mounted displays. Same setup as Example 1 with motion blur, but now our eyes track the movement of the pixel. The pixel's movement is blurred but should be sharp, while display-static objects have an apparent motion of 233 pixels per second from right to left and suffer from stroboscopic effects.
Image

Example 3. Phantom array effect:
Lower persistence reduces blur during saccades, defeating saccadic suppression and causing perceived backward motion.
Setup: 20 degree horizontal saccade (25ms duration), static pixel (pixel width = 0.05 degree)
blue line = saccadic movement
black line = hold-type blur (non-linear as the eye-velocity changes)

2ms persistence. Up to 66 pixels blur (3.3 degree)
Image
Full persistence. Up to 177 pixels blur (8.8 degree)
Image
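To make the saccade numbers concrete, here is a rough sketch (Python) under a sinusoidal eye-velocity assumption of my own; the exact figures depend on the velocity profile, so they won't precisely match the numbers above:

```python
import math

# Blur of a static dot equals the eye's displacement during one lit
# (persistence) interval. Assume a sinusoidal velocity profile for a
# 20-degree, 25 ms saccade on a 1000 px / 50-degree screen.
amplitude_deg, duration_s = 20.0, 0.025
px_per_deg = 1000 / 50   # 20 px per degree

def peak_blur_px(persistence_s):
    """Worst-case smear (px): eye displacement across a persistence
    window centred on the saccade's peak-velocity midpoint."""
    # closed form of the integral of v(t) = v_peak*sin(pi*t/T)
    # over the window [T/2 - P/2, T/2 + P/2]
    disp_deg = amplitude_deg * math.sin(math.pi * persistence_s / (2 * duration_s))
    return disp_deg * px_per_deg

print(round(peak_blur_px(0.002)))   # ~50 px at 2 ms persistence
print(round(peak_blur_px(0.010)))   # ~235 px at full 100 Hz persistence
```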

Conclusion:
Time-discrete rendering causes temporal aliasing. As resolutions increase, impractically high frame rates are necessary to avoid it. Spatio-temporal anti-aliasing can converge to an ideal solution, eliminating the need for higher frame rates (beyond flicker fusion), but eye movements need to be accounted for.

Limitations:
In practice it isn't possible to deal with temporal and spatial aliasing separately and achieve 'perfect' motion blur. See:
Shinya et al. 1993 / Spatial Anti-aliasing for Animation Sequences with Spatio-temporal Filtering
Fast eye tracking remains an unsolved problem for head-mounted displays.
Furthermore, as seen in Example 3, the complex relationship between the human visual system and object motion is still an active area of research, and a better understanding of it can improve the effectiveness and efficiency of motion blur rendering techniques.

Annex: Overview of algorithms.
Image
Navarro et al. 2010 / Motion Blur Rendering: State of the Art

User avatar
RealNC
Site Admin
Posts: 3757
Joined: 24 Dec 2013, 18:32
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by RealNC » 28 Feb 2014, 00:02

So I suppose the moral of the story is that not all kinds of motion blur are a bad thing. Sometimes we do want it.

Btw, wasn't NVidia's TXAA supposed to help mitigate the situation? (I wonder why this AA method failed so horribly; almost no games supported it, other than those where NVidia had an influence over the studio or publisher.)
Steam | GitHub | Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

ScepticMatt
Posts: 37
Joined: 16 Feb 2014, 14:42

Re: Temporal anti-aliasing for computer generated graphics

Post by ScepticMatt » 01 Mar 2014, 09:05

RealNC wrote:So I suppose the moral of the story is that not all kinds of motion blur are a bad thing. Sometimes we do want it.
Yes.
Btw, wasn't NVidia's TXAA supposed to help mitigate the situation?
As I said in the OP, temporal and spatial aliasing are inseparable, so one helps the other.
But "temporal supersampling anti-aliasing" in games is still spatial AA.
It just uses samples from previous frames - with motion compensation - to get supersampling almost "for free".
Image
I wonder why this AA method failed so horribly; almost no games supported it
TSAA/TMAA is starting to become popular in new-gen console games, but combinations with MSAA/SSAA are still too expensive.
In any case, SMAA 4x is far superior to TXAA and isn't vendor-exclusive, so no reason to implement TXAA unless it's NVIDIA sponsored.

User avatar
RealNC
Site Admin
Posts: 3757
Joined: 24 Dec 2013, 18:32
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by RealNC » 02 Mar 2014, 04:34

Games with SMAA are few and far between though. I think the only game I came across that offers it was Dead Space?

I don't think professional 3D graphics developers are unaware of SMAA. I'm pretty sure they are very aware of it. And yet they choose not to implement it. It has to be a conscious decision not to support it. Why is that?

(And no, the "SMAA Injector" doesn't really do SMAA justice. It looks more like "poor man's SMAA". And it's hopelessly outdated anyway.)

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by Chief Blur Buster » 05 Mar 2014, 14:09

Hello --
Your Example 3 images have disappeared -- were they removed?
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Image
Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

ScepticMatt
Posts: 37
Joined: 16 Feb 2014, 14:42

Re: Temporal anti-aliasing for computer generated graphics

Post by ScepticMatt » 06 Mar 2014, 08:11

Chief Blur Buster wrote:Hello --
Your Example 3 images have disappeared -- were they removed?
No, they are still there, just replaced with corrected images, which are very different.
Comparing the saccade with the eye-tracking example, the pixel width becomes negligible, and the hold-type blur trails become longer and change with varying eye velocity. (black curved line instead of a parallelogram)

As the eye integrates over tc (1/CFF), we perceive blur or strobing.
Which is where your Blur Busters law comes from (10 pixels per frame, 10% duty cycle: 1 pixel of blur per side, 1 pixel -> 2 blurred pixels)

Example: 2ms persistence, 100 Hz screen, 2 pixels per frame movement from left to right, 50 Hz CFF. No motion blur.
No eye tracking: We perceive 2 pixels with 1 pixel in between, and as the left pixel fades out, the next one to the right fades in. (strobing)
Image
Eye-tracking: During fade in/out, blurring of the sides occurs at different times (judder, hold-type blur)
Image
Edit: Full persistence, 4 pixels per frame (not in sync):
Image
100:1 time scale, 10:1 pixel scale

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by Chief Blur Buster » 06 Mar 2014, 12:55

[my post edited below to be improved]

Thanks for the update!
As the creator of TestUFO, and very familiar with motion blur observations on many dozens of displays, I immediately noticed the need for some minor tweaking in your animations:
No eye tracking: We perceive 2 pixels with 1 pixel in between, and as the left pixel fades out, the next one to the right fades in. (strobing)
Image
Sequence of fade-in is display-tech dependent, but let's assume LCD here.
Under a high speed camera, adjacent pixels fade in (GtG in one direction) while other pixels simultaneously fade out (GtG in the other direction). This simultaneous effect is also seen in this YouTube video (see ~0:30) of modern LCDs. At 2ms GtG and 60Hz, three refresh cycles would produce this effect:
~2-3ms of noticeable GtG movement (all pixels for a specific screen area, e.g. scanout region)
~14ms of mostly staticness (GtG ~99% complete, slowly moving to 100%)
~2-3ms of noticeable GtG movement (all pixels for a specific screen area, e.g. scanout region)
~14ms of mostly staticness (GtG ~99% complete, slowly moving to 100%)
~2-3ms of noticeable GtG movement (all pixels for a specific screen area, e.g. scanout region)
~14ms of mostly staticness (GtG ~99% complete, slowly moving to 100%)

So if you're modelling movement of a single-pixel-width object, the middle pixel should fade in simultaneously while the first pixel fades out.
If you were modelling more-than-one-pixel-wide objects (e.g. objects wider than the motion step between refreshes), then that should fill into that black gap (so pixel #2 and #4 would not be continuously black).
(Either way, whichever modelling you were intending to do, make sure this is clarified as such).
Eye-tracking: During fade in/out, blurring of the sides occurs at different times (judder, hold-type blur)
Image
Yes, judder would cause variances. But judder/stutter (a matter of semantics) only occurs when you're not doing exact-step movement between frames, and your diagrams suggest you're modelling exact-step movement. Am I correct in this assumption? If so, then you're modelling perfectly smooth motion (stutter-free/judder-free).

In this case, hold-type blur remains constant assuming constant tracking speed (eye tracking, camera tracking), where eye saccades aren't injecting error factors.

Since we can't predict eye saccades, and we're modelling motion blur for motionspeeds where eye saccades aren't a significant blurring factor (e.g. eye-tracking objects moving only half a screenwidth per second -- aka 960 pixels/second at 1080p or 1920 pixels/second at 4K -- while viewing at the 1:1 screen-width viewing distance common for computer monitors, or in high-FOV situations such as virtual reality goggles), we're not including blurring caused by eye saccades in these diagrams. Human observation is consistent with the lack of observed motion blur variances: when you view the moving UFO object at http://www.testufo.com on an adjustable-persistence display (LightBoost 10->100%, or new firmware on the XL2720Z with Blur Busters Strobe Utility), you notice that edge-blurring is constant.

Since strobe backlights are mostly squarewave, they create blur visually virtually identical to simple-to-calculate linear blurring. In this case, the edge blurring is equal to the duty cycle percentage of the frame step (e.g. a 20 pixel step at 25% duty cycle, dark 75% of the time, translates to 5 pixels of perceived motion blur). This is the persistence law (1ms = 1 pixel of blur at 1000 pixels/sec) that I have found to be consistent on all modern strobe-backlight monitors. So at 60Hz, 1/60 = 16.666... ~= 16.7ms, full persistence is 16.7ms = 16.7 pixels of blur at 1000 pixels/sec = 16.7 pixels of blur at 16.7 pixels/frame. Quarter persistence (25% duty cycle) is one quarter of that: ~4.2ms of persistence = ~4.2 pixels of blurring. You scale that with motionspeed, so at 500 pixels/second that's 2.1 pixels of blurring rather than 4.2. Of course, we're excluding spatial anti-aliasing for non-integer-step motion (e.g. 1000 pixels/sec at 60Hz would require moving objects to make exact pixel steps of 16.7 pixels, which requires anti-aliasing, so TestUFO uses 960 pixels/second as the speed closest to 1000 pixels/second that makes the motion blur math easy to double-check and correlate with human-visual observations).
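The persistence arithmetic above can be sketched in a few lines (Python; the function name is my own):

```python
# Persistence law: perceived blur (px) = persistence (ms) x motion
# speed (px/ms), i.e. the frame step scaled by the strobe duty cycle.
def blur_px(persistence_ms, speed_px_per_sec):
    return persistence_ms * speed_px_per_sec / 1000.0

refresh_hz = 60
full_persistence_ms = 1000.0 / refresh_hz   # ~16.7 ms at 60 Hz

print(round(blur_px(full_persistence_ms, 1000), 1))         # 16.7 px
print(round(blur_px(full_persistence_ms * 0.25, 1000), 1))  # 4.2 px (25% duty)
print(round(blur_px(full_persistence_ms * 0.25, 500), 1))   # 2.1 px at half speed
```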

In your situation, this animation should be modified to have linear motion blur that's 1/5th the width of the pixel at both the leading and trailing edge, and the linear motion blur is constant (no undulations in motion blur) during constant-speed exact-pixel-step motion. (Of course, I'm assuming you're modelling exact-step 2 pixels per frame motion, with no anti-aliasing.)

My ability to calculate the exact blur trail length (assuming squarewave persistence) allows me to produce motion blur optical illusions, like the one below.


NOTE:
-- Use a stutter-free web browser such as http://www.testufo.com/browser.html
-- Works better if you click the above to zoom, and maximize window; for longer eye tracking region, to allow saccades to stabilize.
-- Assumes fast-transition displays. Fast LCD (1-2ms transition, 15ms static) and OLED tend to resemble squarewave persistence, and produce intense checkerboarding compared to slower LCDs
-- Fast LCDs will often have more symmetric GtG (equal artifact at leading edge & trailing edge)
-- Slower LCDs will often have more distorted checkerboard illusions (especially for asymmetric GtG, more artifact at trailing edge)
-- Strobe backlights will horizontally reduce the width of the squares. 50% duty cycle = squares become 2:1 tall rectangles. 25% duty cycle (75% dark) = squares become 4:1 tall rectangles (1/4th width). Etc.

The checkerboard remains constant provided you keep eye-tracking it accurately. As you observe, the checkerboard motion blur does not undulate at all unless you inject eye-saccade errors. If you don't have significant eye saccades during the motionspeed of this checkerboard, and the checkerboard illusion seems perfectly constant, then eye-saccade motion blur error factors can be considered insignificant for this specific motionspeed in your individual viewing situation (ability, display, viewing distance, etc.) and thus don't need to be factored into the motion blur math. And since pixel steps remain exact, there are no undulations in motion blur.

The shape of the illumination curve (e.g. phosphor versus strobing) determines the shape of the motion blur -- phosphor creates more trailing motion blur than leading motion blur, while a squarewave strobe backlight produces symmetric linear motion blur at the leading/trailing edges. Strobe backlights aren't *perfectly* squarewave (they have ramp-up and ramp-down), but at the current strobe lengths being used, those factors have been observed (at least by me) to be insignificant from a human perceptual perspective, and motion blur appears linear, much like a Photoshop linear motion blur of an exact length equalling the step between two frames, then reduced by the duty cycle (e.g. 50% duty cycle = half the linear motion blur trail length), which follows the "Law of Persistence".

Since GtG is the same for all 3 pixel locations, and you're modelling exact-pixel-step motion, which is technically zero-judder, the hold-type blur from the duty cycle (10% duty cycle at 2 pixels per frame motion = motion blurring exactly equal to 1/5th of the pixel width, so the pixel will appear 7/5ths wider, accounting for the leading and trailing edges) will always be constant under 'perfect' tracking, since exact-step movement (e.g. 2 pixels per frame) produces zero judder, and thus hold-type blur remains constant.

However, if you're modelling something differently than what I thought (exact pixel step motion) then this needs to be clarified, with the animations being updated to be consistent.
Edit: Full persistence, 4 pixels per frame (not in sync):
Image
Is that moving in the opposite direction as the above animation? Shouldn't this specific one be moving in reverse?
If this is an eye tracking situation, then there should be no vertical boundaries between the pixels -- motion blur can be subpixel.

(Good ideas of the diagrams, just needs some tweaking!)

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by Chief Blur Buster » 06 Mar 2014, 13:16

-- Oh, if you're also trying to model lead-in and lead-out (e.g. when a motion suddenly begins and when a motion suddenly ends), you may want to model this using 4 or 5 steps instead of just 3 steps. Excluding lead-in and lead-out, you have only 1 intermediate frame, which can be very limiting to draw conclusions from. I'm not sure how time-consuming these animations are.

ScepticMatt
Posts: 37
Joined: 16 Feb 2014, 14:42

Re: Temporal anti-aliasing for computer generated graphics

Post by ScepticMatt » 06 Mar 2014, 13:54

Edited.
Chief Blur Buster wrote:Thanks for the update!
Sequence of fade-in is display tech dependant, but let's assume LCD here.
Under high speed camera, adjacent pixels fades in (GtG in one direction) while simultaneously other pixels fade out (GtG in other direction). This simultaneous effect is also seen in this YouTube video
I'm not sure, but I think I'm talking about a different issue here. The strobing and fading effect I'm talking about is purely a result of persistence of vision and pixel persistence, not other properties of the display. I assume a global-refresh display with ideal pixel response, like a buffered OLED. Also, pixel movement starts at pixel 1 and stops at pixel 3 (e.g. the screen edge).
Am I correct in this assumption? If so, then you're modelling perfectly smooth motion (stutterfree/judderfree).
I'm modeling fast start-stop, so the motion isn't always visually judder-free.
Explanation (your 25% duty cycle example with 16.7 pixels/frame):
Image
Human observation is consistent with the lack of observed motion blur variances: when you view the moving UFO object at http://www.testufo.com on an adjustable-persistence display (LightBoost 10->100%, or new firmware on the XL2720Z with Blur Busters Strobe Utility), you notice that edge-blurring is constant.
In your situation, this animation should be modified to have linear motion blur that's 1/5th the width of the pixel at both the leading and trailing edge, and the linear motion blur is constant (no undulations in motion blur) during constant-speed exact-pixel-step motion. (Of course, I'm assuming you're modelling exact-step 2 pixels per frame motion, with no anti-aliasing.)
I'm modeling start-stop, but otherwise you're right.
Is that moving in the opposite direction as the above animation? Shouldn't this specific one be moving in reverse?
If this is an eye tracking situation, then there should be no vertical boundaries between the pixels -- motion blur can be subpixel.
This rather poor image is an example of eye tracking. And yes, it should have no boundaries.
Last edited by ScepticMatt on 06 Mar 2014, 14:53, edited 4 times in total.

User avatar
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Temporal anti-aliasing for computer generated graphics

Post by Chief Blur Buster » 06 Mar 2014, 14:02

ScepticMatt wrote:Edit: you edited faster than I could post, so some of the below is moot. For modeling human vision and motion blur, you need to factor in flicker fusion to get it completely 'right'.
Whoops. My apologies, I didn't know you were replying while I was editing -- Either way, go ahead and re-read my expanded post above, I'm done editing for now. (I will wait several hours before replying)

Terminology clarification: judder-free means visually judder-free. Basically, when the judder of exact-integer-step motion is so regular and high-frequency that it completely blends into constant, consistent, non-varying blur, the judder strobing is above the flicker detection threshold, and the judder cycle is consistent relative to the correct tracking vector, the judder is perceived as perfectly constant non-varying motion blur, like the checkerboard illusion. This is refresh-rate dependent: I see the juddering effect at 30fps@60Hz (LCD) but not at 72fps@(72Hz or 144Hz LCD) or 120fps@120Hz (LCD). Basically, judder and motion blur become equivalent in this situation because the judder is so fast that it's visually perceived as tracking-based motion blur. I am assuming you are thinking this way too, of course. Basically, the judder amplitude mathematically converts to tracking-based motion blur. This is what happens during perfect even-step motion at frame rates creating judder above the flicker detection threshold. Since we are modelling exact-step motion at frame rates (e.g. 60fps, 120fps) near or above the flicker detection threshold, we can mathematically interpret judder as motion blur, and consider such motion visually judder-free to human eyes.

If you see judder in the TestUFO checkerboard illusion in a stutter-free browser, then run it at a higher frame rate (e.g. 75Hz or 120Hz versus 60Hz) or adjust brightness/ambient lighting until 60Hz is below your flicker detection threshold, and you will see the judder completely disappear from the checkerboard, visually converted entirely into linear motion blur, from a typical human perceptual perspective (i.e. for people who successfully see a checkerboard; vision problems such as motion blindness can interfere).

A great example of this is to view http://www.testufo.com#count=3 on a 120Hz monitor (non-strobed) at 120 frames per second.
You cannot see the judder in the 120fps object at 120Hz:
120fps -- smallest judder amplitude -- judders so fast that it's perceived purely as motion blur.
60fps -- judder amplitude twice as wide as 120fps -- still judders fast enough to be equivalent to motion blur.
30fps -- judder amplitude four times as wide as 120fps -- judders slowly enough to be detected more as judder than as motion blur.
This is see-for-yourself proof that judder amplitude is equivalent to motion blur amplitude (motion blur trail length).
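The amplitude arithmetic above can be sketched directly (Python; my own illustration, using TestUFO's 960 pixels/second speed):

```python
# At constant tracked speed, judder amplitude = step per displayed
# frame; once the judder frequency exceeds the flicker detection
# threshold, the eye perceives a blur trail of that same length.
speed_px_s = 960
judder_amplitude = {fps: speed_px_s / fps for fps in (120, 60, 30)}
print(judder_amplitude)   # {120: 8.0, 60: 16.0, 30: 32.0}
```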

I really wanted to make sure you understood that (for constant motion) judder amplitude equals motion blur whenever the judder frequency is higher than the flicker threshold... It's an important distinction. To me as Chief Blur Buster, "judder amplitude" is terminologically equal to "size of persistence blur" and "hold-type blur" for consistent movement at frame rates higher than the flicker detection threshold. And when you use a 50%-50% duty cycle, the judder amplitude shrinks by 50% too. That's easily seen, and understanding can be improved with the 3-object and 4-object versions of http://www.testufo.com/blackframes#count=3 (use 4 objects if you are testing at 120Hz).

Actually -- yes, I do humbly apologize for the above edit, but I was wondering if you might be thinking of judder as a separate variable from motion blur, when they're actually related and can become equivalent, especially when judder frequencies (and any harmonics -- e.g. 3:2 pulldown) start to exceed the flicker detection threshold.

EDIT: My post above doesn't account for start/stop effects.
