Is eye tracked motion blur measurable?

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers & Advanced Display Articles on Blur Busters. The masters on Blur Busters.
Discorz
VIP Member
Posts: 999
Joined: 06 Sep 2019, 02:39
Location: Europe, Croatia

Is eye tracked motion blur measurable?

Post by Discorz » 01 Oct 2022, 08:38

Is the motion blur our eyes create when tracking objects on a display measurable? We know that when GtG is 0, 1 ms of persistence results in 1 pixel of motion blur at a motion speed of 1000 pixels/second. If a line moves across the screen at 2000 pixels/second on a 100 Hz (100 fps) display, the blur is 20 pixels wide. That tells us how wide the blur is, or equivalently how long it persists. But what I've been wondering lately is: what are the "eye RGB" values in between, while the display is switching from one color to another -- what is the transition curve the eye sees? Is it even measurable or convertible to RGB? We can measure the GtG curve and extract RGB values from it, but those do not match the final blur the eye sees.
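
To keep the arithmetic in one place, here is a minimal Python sketch of that zero-GtG persistence math (just my own toy calculation, nothing measured):

Code:
def blur_width_px(speed_px_per_sec, persistence_ms):
    """Motion blur width in pixels = eye-tracking speed x frame visibility time
    (valid only when GtG is instant)."""
    return speed_px_per_sec * persistence_ms / 1000.0

print(blur_width_px(1000, 1.0))          # 1.0 px  (1 ms MPRT at 1000 px/s)
print(blur_width_px(2000, 1000 / 100))   # 20.0 px (2000 px/s on a 100 Hz sample-and-hold display)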

As an example, I used RGB 64-191 as the rise transition and 191-64 as the fall transition, with Blurinator 9000 for the simulation.

[Attachment: Reference Image.png]

This program matches eye-tracked blur nicely, and we can see there is in fact a distinct curve (light green), resembling a 1.5/0.67 gamma curve.

[Attachment: MPRT.png]

The third image does not show pure GtG (I was unable to extract it) but the final mix of GtG and eye-tracked MPRT curve RGB values. We can see that when the two are mixed, the end result is quite different from the measured photodiode RGB values (A+B=C). If this concept of merging the two is workable, it means we can measure the final, real blur.

[Attachment: MPRT with GtG.png]

The blur amount would be expressed as a cumulative error, the area deviating from a perfect square-wave response (the red area shown in the images below).

[image]

We can see that an instant GtG response in motion, combined with non-instant persistence, is no longer a perfect square wave. So here the target/ideal response is pushed further back and set more realistically.

The same applies to strobe/blur-reduction modes. This means we would finally be able to directly compare how much improvement strobe modes offer over sample-and-hold, although it might take a few extra steps to measure. Even blur at different parts of the screen (strobe crosstalk) would be measurable.

[image]

One issue is that the size of the cumulative deviation area is movement-speed dependent. Faster motion speeds naturally produce wider motion blur and therefore more deviating area. But the MPRT or GtG blur length (or both combined) scales proportionally: for example, 2000 pps has double the blur of 1000 pps, so the blur curve width or deviation area doubles as well.
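
Here is a rough Python sketch of the cumulative-deviation idea with that speed normalization included (the profiles and names are made up for illustration, not real measurements):

Code:
import numpy as np

def cumulative_deviation(measured, ideal, speed_px_per_sec):
    """Area between the eye-tracked blur profile and the ideal response (both
    sampled once per pixel), normalized per 1000 px/s so different motion
    speeds produce comparable scores."""
    area = np.abs(np.asarray(measured, float) - np.asarray(ideal, float)).sum()
    return area / (speed_px_per_sec / 1000.0)

# Toy example: a 0->255 edge smeared over ~10 px versus a perfect step.
x = np.arange(20)
ideal = np.where(x < 10, 0.0, 255.0)
measured = np.clip((x - 5) / 10.0, 0.0, 1.0) * 255.0
print(cumulative_deviation(measured, ideal, 1000.0))
print(cumulative_deviation(np.repeat(measured, 2), np.repeat(ideal, 2), 2000.0))  # same score at 2x speed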

[image]

In Blurinator, the overall shape of the "tracking eye blur curve" stays consistent for the leading and trailing edges and for all transitions. It only gets scaled in the X or Y direction: the X axis is persistence or motion speed, and the Y axis is transition size.

[image]

In the end, I imagine we could get similar results by taking a picture of the display, but I'm not sure how camera motion blur compares to eye motion blur.

The overall concept is appealing since it would provide an accurate blur score, but unfortunately I'm unable to properly test or confirm it. Can we somehow measure/predict that curve, then mathematically and correctly merge it with GtG to derive the final RGB blur values?

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is eye tracked motion blur measurable?

Post by Chief Blur Buster » 04 Oct 2022, 22:26

I love this discussion! I've been working on research/projects along these lines already.
Discorz wrote:
01 Oct 2022, 08:38
In the end I imagine we could get similar results by taking a picture of a display, but I'm not sure how camera compares to eye motion blur.
Given proper camera settings, it's uncannily accurate at least for static blur and ghost artifacts (e.g. not the temporal-dither, not the flicker-effects).

You can even create virtualized pursuit camera images simply by measuring the GtG heatmap at fine granularity, playing it out on a virtualized display in RAM, and rendering very accurate WYSIWYG "pursuit camera" / "eye-tracked" images. A future descendant of Blurinator would need support for accepting photodiode oscilloscope data for all GtG transition pairs (heatmapped at high resolution, perhaps as many as 256x255 = 65280 curves for an 8-bit panel).
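
A rough Python/NumPy sketch of that "virtualized display in RAM" idea (my own simplified illustration, not Blurinator's actual code; it assumes each transition starts from a settled level, and it wraps at the edges):

Code:
import numpy as np

def simulate_eye_tracked_row(frames, gtg_step, fps, pps, oversample=100):
    """Play refresh cycles out on a virtual 1-D panel and time-average them in
    the reference frame of an eye tracking at pps pixels/sec -- which is what a
    pursuit camera photographs.  gtg_step(prev, target, t_ms) returns the panel
    level t_ms after the refresh began (simplification: the previous transition
    is assumed complete).  np.roll wraps, so keep objects away from the edges."""
    frame_ms = 1000.0 / fps
    acc = np.zeros(len(frames[0]))
    prev = np.asarray(frames[0], dtype=float)
    steps = 0
    for f in range(1, len(frames)):
        cur = np.asarray(frames[f], dtype=float)
        for s in range(oversample):
            t_ms = s / oversample * frame_ms                # time into this refresh
            panel = gtg_step(prev, cur, t_ms)               # panel state right now
            eye_px = (f - 1 + s / oversample) * pps / fps   # eye displacement so far
            acc += np.roll(panel, -int(round(eye_px)))      # align to the tracking eye
            steps += 1
        prev = cur
    return acc / steps                                      # perceived (time-averaged) row

# Example: a 4-px line stepping 20 px per refresh (2000 px/s at 100 Hz),
# with a made-up exponential GtG of ~3 ms time constant.
width, fps, pps = 200, 100, 2000
frames = []
for f in range(5):
    row = np.zeros(width)
    row[100 + f * pps // fps : 100 + f * pps // fps + 4] = 255.0
    frames.append(row)
exp_gtg = lambda prev, cur, t_ms: cur + (prev - cur) * np.exp(-t_ms / 3.0)
perceived = simulate_eye_tracked_row(frames, exp_gtg, fps, pps)  # blur + GtG trail profile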

It is possible to have a sync between:
- What the average human eye perceives (needs to be fully integrated aka flicker-fused)
- What the camera perceived (assuming excellent camera + good settings + good tracking)
- What the virtual pursuit camera outputs (photodiode GtG measurements + playout in memory + render virtualized pursuit camera image)

So theoretically a future Blurinator would accept photodiode oscilloscope GtG measurements for a lot of combos (or even per color channel, if GtG is different per R G B, as it is on phosphorescent display tech) and create blurring/ghosting/coronas virtually identical to that display.

It is the dream of Blur Busters to have an open-source universal display simulator, that can be played in realtime (e.g. windows indirect display driver would benefit emulators, etc), played static (Blurinator images), or played in slow-motion (educational)

____________

Now that said, some commentary:

You nailed it in many of these images. And yes, faster motion speed creates more motion blur at the same MPRT. That's also why motion blur is easier to see at higher resolutions (and thus they need refresh rate increases more badly).

But one image isn't an interpretation I feel comfortable going with --

[image]

This does not translate well / confuses a bit, because it's an interpretation that I don't use -- the blur doesn't stop in a horizontal line in between; it just gaps. So it's not cumulative -- it simply adds an artifact (aka a gap in the blur). So I think this needs to be a separate measurement metric (a crosstalk benchmark), in my opinion. A reviewer score can factor GtGs, MPRTs, crosstalk, etc. into one metric, but when trying to measure accurately, I think crosstalk needs to be a separate benchmark metric -- even if it is converted to a number.

A good impulsed display (CRT) showing 30 fps at 60 Hz also "looks" like strobe crosstalk. I can simulate that with this TestUFO:
www.testufo.com/blackframes#easteregg=1&count=2&bonusufo=1&equalizer=1&background=000000&multistrobe=2&pps=960

Notice that the multiple-image effects are simply gaps in the motion blur -- and this is 100% software-simulatable and photographable too.

To me, these create duplicate images:
- PWM dimming
- Strobe backlight + strobe crosstalk
- Low frame rates on impulsed displays
It's all the same vision science / physics actually!

So the simulator would need to genericize the image duplication, to universally accommodate ANY cause of repeat-flashing of unchanged pixels / unchanged frames.

You can even have multiple concurrent effects (low framerate duplicate images superimposed on strobe crosstalk), but it's pretty much all the GtG-blending physics essentially.
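
A tiny Python sketch of that shared physics (my own illustration with made-up numbers): every repeat flash of an unchanged frame lands on a different spot of the tracking retina, so duplicates appear.

Code:
import numpy as np

def strobed_perceived_row(width, pps, content_fps, strobe_hz, line_px=2):
    """Eye-tracked view of a line moving at pps px/s when the display flashes the
    SAME frame (strobe_hz / content_fps) times before the content updates.
    The same math covers PWM dimming, strobe crosstalk, and low frame rates on
    impulsed displays."""
    flashes_per_frame = strobe_hz // content_fps
    acc = np.zeros(width)
    for frame in range(4):                                   # 4 content frames
        row = np.zeros(width)
        pos = 100 + frame * pps // content_fps
        row[pos:pos + line_px] = 255.0
        for flash in range(flashes_per_frame):
            t = frame / content_fps + flash / strobe_hz       # flash time (seconds)
            acc += np.roll(row, -int(round(t * pps)))         # eye has moved t*pps px
    return acc / (4 * flashes_per_frame)

# 30 fps content on a 60 Hz impulsed (CRT-like) display, 960 px/s:
print(np.nonzero(strobed_perceived_row(400, 960, 30, 60))[0])   # two copies, 16 px apart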

But yes, we need to come up with some kind of crosstalk benchmark.

Discorz
VIP Member
Posts: 999
Joined: 06 Sep 2019, 02:39
Location: Europe, Croatia

Re: Is eye tracked motion blur measurable?

Post by Discorz » 06 Oct 2022, 10:13

Chief Blur Buster wrote:
04 Oct 2022, 22:26
I've been working on research/projects along these lines already.
Oh, it's good to hear someone has actually considered such solutions; it's needed. :)
Chief Blur Buster wrote:
04 Oct 2022, 22:26
Given proper camera settings, it's uncannily accurate at least for static blur and ghost artifacts (e.g. not the temporal-dither, not the flicker-effects).
Now I didn't want to go into camera stuff as it is more time-consuming and requires more of... everything.
Chief Blur Buster wrote:
04 Oct 2022, 22:26
You can even create virtualized pursuit camera images simply by measuring GtG heatmap at a fine granularity, playing them out to a virtualized display in RAM, and it outputs very accurate WYSIWYG "pursuit camera" / "eyetracked" images.
This is a great idea; better to bring measured GtG data into the method rather than vice versa. If the software simulates the blur accurately, we could measure it directly from the program. I'm afraid that by the time any of this gets realized, ultra-fast 1 kHz+ displays might already be standard. After that we perhaps won't need it as much.
Chief Blur Buster wrote:
04 Oct 2022, 22:26
And you know, that's why higher resolutions are easier to see motion blur
I hadn't considered resolution/PPI at all, but now I see that at some point it also needs to be taken into account, because motion blur is fixed to pixels. As we switch to higher resolutions, pixel size usually gets smaller, so a higher pixels/second speed is needed to match the same physical speed, and therefore we get more blur pixels per inch. I guess this should be solvable with something like an inch-based motion speed instead of pixels/second.
Chief Blur Buster wrote:
04 Oct 2022, 22:26
This does not translate well / confuses a bit because it's an interpretation that I don't use -- the blur doesn't stop in a horizontal line in between -- it just gaps. So it's not cumilative
I can confirm the strobed chart is accurate, at least for this case. The reason it looks the way it does is that the given transition is sent and held for a longer period to make sure it fully completes, basically treated the same way as in classic GtG measuring. But you're right, and I can see why it shouldn't be accumulated: with a wide enough line like here, the edge blur does stop and hold, but if we gave it a 1-pixel-wide line, the blur would rise and stop (persistence/backlight ON) with gaps in between (backlight OFF), matching the photodiode data more accurately. So the displayed content affected the result here. How do we treat it, then? Stick with the transition edge and subtract the dark area?

[Attachment: VG279QM ELMB 280 Mid, 64-255-64, 1px line.png]
source: https://www.aperturegrille.com/reviews/ ... -Crosstalk

Also, there needs to be some proof that differing transitions with similar/same cumulative deviation (CD) match similar/same perceived blur, since we'd want to directly compare one display to another (e.g. a sample-and-hold 144 Hz OLED against a 240 Hz LCD...), or even strobing-on vs. strobing-off situations. Not to mention interpreting overshoot/undershoot. Obviously the two will never look identical, but if the CD matches they should at least feel similar. This is where we run into a problem.

[image]
[image]
I measured the same CD for these two.
Chief Blur Buster wrote:
04 Oct 2022, 22:26
So the simulator would need to genericize the image duplicating, to be able to universally accomodate for ANY cause of repeat-flash of unchanged pixels / unchanged frames.
It definitely needs to be done right and be fully flexible at the same time. There are so many things to take into consideration -- phosphor decay, too. If everything turns out right, stroboscopic-effect simulation should be just a one-click toggle.

It really is a dream.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is eye tracked motion blur measurable?

Post by Chief Blur Buster » 08 Oct 2022, 11:05

Discorz wrote:
06 Oct 2022, 10:13
This is great idea; better bring measured GtG data into method rather than vice versa.
You can do it bidirectionally! Measure both by oscilloscope AND pursuit camera, and make sure they match.

It works well when you do it from both ends -- the two data sets verify each other. If the virtualized pursuit image (generated from the oscilloscope data) is identical in blur to the actual pursuit photograph, then both measurement methods were very likely done correctly.

The great thing is that you can already extract GtG data from a TestUFO pursuit camera image! You just capture a pursuit camera photo of https://testufo.com/blurtrail#thickness=-1 and map the lumas along the photo's horizontal axis to a graph.

Pursuit-photographed blur edges measurements actually map very accurately to a photodiode oscilloscope of one GtG transition! It's low samples/sec but it is a 1:1 perfect match to an oscilloscope curve (to the error margin of greyscale / camera noise).

It's also further evidence that confirms pursuit camera is effectively WYSIWYG too (during eye tracking), excluding temporal effects (like temporal dithering), which are not animated in a photograph.

Also, the faster the motion speed, the more GtG samples per second you can capture.

At the moment, 1000 pixels/sec allows roughly 1000 GtG samples/sec in the resulting pursuit camera photograph, if the camera is tracked correctly.

Pursuit photos are not as high-resolution as a photodiode oscilloscope, but they uncannily match its results if you adjust the exposure settings so that the blacks and whites do not clip, and if the camera has a linear sensor response (most do, at least if you use manual settings / RAW format).

In theory, an app could do this; with an accurate sync track and AI recognition of that sync track, it could probably be made very easy for end users in a smartphone app on a modern Galaxy/iPhone.

With AI recognition of sync-track accuracy (realtime in a pursuit camera app), I imagine a resolution of about 4000-8000 samples/sec could be achieved automatically, with no human intervention, from mere smartphone pursuit photography on roughly iPhone ~8-10ish and later and Galaxy ~S15-20ish and later -- with just a single long exposure -- so a smartphone camera can replace a photodiode oscilloscope at low sampling resolution. The horizontal resolution of the smartphone camera image is the limiting factor, since you get one "photodiode oscilloscope sample" per horizontal pixel. Assuming you keep the line extremely thin and the horizontal resolution high, the horizontal axis of an accurately tracked horizontal pursuit photo is a de facto perfect plot of a photodiode oscilloscope graph!

So it's pretty neat that the pursuit camera and the oscilloscope are in sync! What is needed is a new measurement method that brings the two together even more closely, and new scoring metrics/formulas that avoid the wholly outdated 10%-90% cutoffs while respecting noise thresholds. Cameras, understandably, can generate noisy images, so higher cutoffs are still needed. The NEAT thing about a camera image is that you can average multiple pixel rows together (since the vertical axis is duplicated -- the line is the same at the top edge as at the bottom edge), so you can get practically 16-bit oscilloscope precision from a single smartphone image!

Because it’s literally ~4000 pixel rows in a modern smartphone camera image — so like ~4000 runs of a photodiode oscilloscope! So you GtG-trace each pixel row of a pursuit camera image, and average all the pixel rows of a pursuit camera image — and BOOM — literally 16-bit oscilloscope precision. (You may need to geometrically/rotation-correct the image data to improve samples/sec accuracy — but this can be done as a pre-step or compensated per-pixel row, based on known image geometry for the photography distance).
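
A minimal Python sketch of that row-averaging trick, assuming Pillow and NumPy are available and a hypothetical already-cropped, geometry-corrected, linear (RAW-derived) pursuit photo of a single moving edge (file name and cropping are placeholders, not a real workflow):

Code:
import numpy as np
from PIL import Image   # pip install pillow

def gtg_curve_from_pursuit_photo(path, pixels_per_sec, row_range=None):
    """Average every pixel row of the pursuit photo, then convert the
    horizontal axis from pixels to time (1 px of width = 1/pps seconds)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    if row_range:
        img = img[row_range[0]:row_range[1]]
    curve = img.mean(axis=0)                    # stacking rows cuts noise ~ sqrt(row count)
    t_ms = np.arange(curve.size) / pixels_per_sec * 1000.0
    lo, hi = curve.min(), curve.max()
    return t_ms, (curve - lo) / (hi - lo)       # normalized 0..1 GtG-style curve

# e.g.  t, g = gtg_curve_from_pursuit_photo("pursuit_edge.png", 1000)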

Albeit at low samples/sec, only a single image is necessary, because of the sheer megapixel count of a camera sensor. It is beautiful that such math tricks are possible -- pulling an uncannily accurate 1000-4000 samples/sec photodiode oscilloscope directly from a SINGLE pursuit camera image.

This is actually worth a science paper sometime (yoo hoo researchers, please let me vet your paper, and add me as a cite. I’m already cited in 25+ papers and inspired over 100+ papers without a cite — www.blurbusters.com/area51 …)

Obviously, camera photography technique varies, so it has to be a continuously-open shutter for the duration of the photograph, with no ISO variances, to accurately do this. This works when selecting the most unprocessed setting of your specific smartphone camera — latest Galaxy and iPhones are capable of being configured into a pretty unprocessed mode that has a good 1:1 mapping to a photodiode oscilloscope within the dynamic range of the camera sensor, assuming you calibrate/compensate-for the reference blacks & whites. But some niche settings will do weird things to camera sensors.

A smartphone camera acting as an accurate photodiode oscilloscope is improved by averaging multiple runs -- and the multiple runs are already in a single photograph! The vertical dimension of the photograph has many pixel rows, which can be averaged/stacked to create a more accurate GtG curve with literally 16-bit-like Tektronix oscilloscope precision, even if at very low samples/sec (1000 to 8000ish). This is possible because GtG is plotted along the horizontal axis of a pursuit photo, and a single blur edge is an exact map of a photodiode oscilloscope trace! This was also confirmed by the researchers who did the pursuit camera paper.
Discorz wrote:
06 Oct 2022, 10:13
If the software simulates the blur accurately we could measure it directly from program.
You can measure it from a good math calculation on an ultra-high-resolution GtG1%-99% heatmap (preferably 256x256 heatmaps, not the puny manufacturer 17x17 heatmaps or the reviewer 5x5 heatmaps).
Discorz wrote:
06 Oct 2022, 10:13
I'm afraid by the time any of this gets realized, ultra fast 1kHz+ displays might already become a standard. After that we perhaps won't need it as much.
We still need it even at 1 kHz+.
A 500 Hz OLED has clearer motion than a 1000 Hz LCD precisely because of GtG speed limitations.

GtG is independent of refresh rate -- GtG can overlap multiple refresh cycles (e.g. the 33-50ms+ 60 Hz LCDs of the 1990s, and today's 10ms+ 240 Hz VA LCDs in actual reviewer measurements). While average MPRT100% cannot become less than one refresh cycle, GtG can blur far beyond a refresh cycle, which is why we need to get as close as possible to 0ms GtG.

On LCD, a strobe backlight can hide GtG from MPRT entirely: GtG happens on the panel during the dark cycle, and MPRT is the backlight-driven part. But...

For OLED, a GtG of 0.1 ms is a big problem for OLED strobing. The GtG:MPRT ratio during BFI can cause color shifts and other artifacts. For example, if GtG is 10% of MPRT on an OLED, you can get artifacts -- and 0.1 ms GtG versus 1.0 ms MPRT is a 10% ratio!
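
A quick numeric sketch of those two constraints (just restating the arithmetic above, nothing new):

Code:
def mprt_floor_ms(refresh_hz):
    """Sample-and-hold MPRT100% can't go below one refresh cycle."""
    return 1000.0 / refresh_hz

def gtg_to_mprt_ratio(gtg_ms, mprt_ms):
    return gtg_ms / mprt_ms

print(mprt_floor_ms(1000))                 # 1.0 ms at 1000 Hz
print(gtg_to_mprt_ratio(0.1, 1.0))         # 0.1 -> the 10% ratio mentioned above
print(gtg_to_mprt_ratio(5.0, 1000 / 240))  # slow LCD GtG dwarfing a 240 Hz refresh cycle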

We will still need to measure GtG even in this era.
Discorz wrote:
06 Oct 2022, 10:13
Chief Blur Buster wrote:
04 Oct 2022, 22:26
And you know, that's why higher resolutions are easier to see motion blur
I haven't considered resolution/ppi stuff at all but now I see at some point that also needs to be taken into account because motion blur is fixed to pixels. As we switch to higher resolution, pixel size usually gets smaller, then higher sensitivity is required to match the former, therefore we get more blur per inch. I guess this should be solvable with something like pixels/inch speed instead of pixels/second.
Inches of blur are unchanged even at higher resolutions -- so more pixels are blurred for the same physical motion speed -- which is why motion blur is instantly more noticeable to humans at higher resolutions than at lower resolutions. If the source image was already blurry to begin with (camera blur, low resolution), display motion blur at the same physical inches/sec is harder to see. But as resolutions go higher, you see a bigger difference between the stationary image and the moving image.

That's what we call the Vicious Cycle Effect -- www.blurbusters.com/1000hz-journey#viciouscycle

If you didn't understand the Vicious Cycle Effect before, now you do.

Because the static resolution becomes sharper but motion resolution does not improve, you see a bigger difference between static resolution and motion resolution (for the same sample-and-hold effect aka same refresh rate at same 0ms GtG pixel response)

That's why, as soon as we reach 16K 180-degree virtual reality, where pixels are still individually resolvable, the retina refresh rate rockets to quintuple digits (>10,000 Hz). We've estimated roughly 20,000 fps at 20,000 Hz as the vanishing point of the diminishing-returns curve for the most extreme motion on the most extreme displays. But this may need to be oversampled to 40,000 Hz+ if you want to add a GPU motion-blur effect to eliminate stroboscopics (the stationary-gaze, moving-object situation).

For most displays, most humans can't accurately eye-track faster than one screen width in about 0.5 seconds (there's another thread about this). We discovered a new rough rule of thumb: the retina refresh rate for a display is roughly 2x its horizontal resolution, as long as static pixels are individually resolvable -- e.g. a stationary versus scrolling starfield of pin-point stars, where the stars become ovals instead of dots, or tiny-text readability tests like www.testufo.com/map -- anything that forces a human to tell apart the resolution of a stationary image versus a moving image.

The rule of thumb isn't perfect, but it's a surprisingly good estimate with a smaller error margin than expected -- it is a function of the maximum eye-tracking speed from the left edge to the right edge of the display, plus enough time spent staring at the moving object to identify whether it's sharp or blurry. That was deemed to be approximately 0.5 seconds. Some people can track faster and others slower, but it also takes time to identify an object -- if a moving object appears and disappears too quickly (from one edge of the screen to the other), you don't have enough time to pay attention to it and notice whether it's sharp or blurry.

So in other words: A great rough estimate of a display’s retina refresh rate is approximately twice the resolution along the vector of the moving object (e.g. the display’s horizontal resolution for a horizontally moving object).
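
A minimal sketch of that rule of thumb as a formula (my own restatement of the reasoning above, not a measured result):

Code:
def retina_refresh_estimate(resolution_along_motion_px, screen_crossing_time_s=0.5):
    """Rule of thumb from this thread: the fastest comfortably trackable motion is
    about one screen width per ~0.5 s, and persistence blur stays at or under
    ~1 pixel only when persistence <= 1 / tracking speed.  That works out to
    roughly 2x the resolution along the motion vector, in Hz."""
    tracking_speed_px_per_s = resolution_along_motion_px / screen_crossing_time_s
    return tracking_speed_px_per_s           # Hz needed for <= 1 px of persistence blur

print(retina_refresh_estimate(1920))   # ~3840 Hz for a 1080p-wide panel
print(retina_refresh_estimate(3840))   # ~7680 Hz for a 4K-wide panel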

If the human eye's angular resolving power along that dimension is lower than the display's physical resolution (i.e. the display has excess spatial resolution far beyond retina resolution), then the eye's resolution applies instead of the physical resolution.

When done, retina refresh rate becomes incredibly high for a sample-and-hold non-strobed non-PWM non-flicker display:

As an example, rough ballparks (the error margin is almost certainly far less than one order of magnitude from these estimates):
- For a smartphone display at arm's length, the retina refresh rate could be about 500-1000 Hz
- For a 1080p 24" display at arm's length, the retina refresh rate could be about 2000-4000 Hz
- For a 4K 27" display at arm's length, the retina refresh rate could be about 4000-8000 Hz
- For a 180+ degree VR headset with 16K resolution, the retina refresh rate could be about 20000-40000 Hz
These apply when eye-tracking maximally-detailed, maximally-fast motion (for motion blur), in an extreme cherry-picked test.

Obviously "retina resolution" and "retina refresh rate" are great oversimplifications, but Blur Busters likes to toe the "Popular Science" line, where I use terminology that's reasonably easy for most educated people to understand -- not just researchers.

Once the pixels of a display become tinier than the human angular resolving power at the (real/virtual) viewing distance, you've hit "retina resolution" (an Apple term; I like using user-friendly terms on these forums), and you've maximized the retina refresh rate of the display. So if you stretch a 16K VR display to 180 degrees, the pixels become big enough to be (barely) resolvable, and twice that resolution is 32000 Hz. So that's a crazy 32000 fps at 32000 Hz. I often quote 20000 fps at 20000 Hz, but that is only a lower-bound estimate for an extreme-wide-FOV retina-angular-resolution display.

1000 fps at 1000 Hz is just a boilerplate figure, the most economically feasible "almost retina refresh rate" for a 24" 1080p OLED display. But once we go to 4K or 8K, watch out: it really starts to feel like non-retina Hz, and you need even more frame rate and Hz to see panning images as clearly as stationary images without resorting to strobing.

Obviously, once you strobe (like a CRT), motion blur disappears, but stroboscopics get worse whenever eye-tracking is not in sync with the moving images. This can be fixed in a VR headset with an eye-tracking-compensated GPU motion-blur effect that dynamically motion-blurs the delta between the eye-gaze motion vector and the moving-object motion vector. That way, stationary-gaze stationary-object is tack-sharp and tracking-gaze moving-object is tack-sharp, but the divergences (stationary-gaze moving-object, as well as moving-gaze stationary-object) no longer show stroboscopics, because the eye tracker automatically tells the GPU to add artificial motion blur to hide stroboscopics during tracking-divergence situations. I already cover this topic at the bottom of The Stroboscopic Effect of Finite Frame Rates, a section of www.blurbusters.com/area51
Discorz wrote:
06 Oct 2022, 10:13
Chief Blur Buster wrote:
04 Oct 2022, 22:26
This does not translate well / confuses a bit because it's an interpretation that I don't use -- the blur doesn't stop in a horizontal line in between -- it just gaps. So it's not cumilative
I can confirm the strobed chart is accurate, at least for this case.
It is accurate if you're not trying to sync the quantitative with the qualitative (aka what the human eye saw).
But it just confuses my mind more than it puts it at ease, so I prefer a totally different visualization.

Also, complicating matters, we have complex situations of strobe crosstalk concurrently combined with KSF phosphor decay:
[image]
Discorz wrote:
06 Oct 2022, 10:13
The reason it looks the way it does is because given transition is sent and held for a longer period to make sure it fully completes, basically treated the same way as classic GtG measuring.
The GtG transition doesn't pause -- it continues unseen by eyes.

That being said, deep thought is required for a user-friendly visualization that doesn't confuse end-users nor confuse me; because everybody thinks differently. Some people think "geometically" when doing some math formulas (they don't read out the numbers in their mind) while other people think more algebraically. It's a major reason why some people have a tough time with algebra-based professors while having much easier time with geometry-based professors -- there are MANY ways to teach the same kind of mathematics to the same person. It might be a math-equivalent of dyslexia. I don't know.

Regardless, that one earlier graph you posted is totally confusing to my mind -- and that must be solved with a more universal visualization that is less confusing to a wider audience.

But the newer graph is perfect to my mind:
[image]
That one is easy to understand -- the blue line is what the panel is doing independently of the backlight, and the purple line is what the human eye (and pursuit camera) saw. It correlates perfectly with the intensities of the duplicate images afterwards. It even correlates perfectly with the GtG plot recorded from a pursuit image of www.testufo.com/blurtrail (single-pixel thick, for MPRT/crosstalk analysis), www.testufo.com/blurtrail#thickness=16 (thickness=ppf, for a double-GtG curve), or www.testufo.com/blurtrail#thickness=-1 (single edge, for a single-GtG curve) using the technique I described earlier. In the photo, the first line is strong, the next duplicate line is fainter, and so on, and the final line is faintest. And within each duplicate is a soft blur gradient that matches the shape of the spikes in that graph! When plotted to a graph, the graph matches that image. That's why pursuit images are in sync with such complex graphs!
Discorz wrote:
06 Oct 2022, 10:13
Also there needs to be some proof of differing transitions with similar/same CDs matching similar/same perceived blur as we'd want to directly compare one to another (e.g. sample and hold 144Hz OLED matching 240Hz LCD...) or even strobing on vs off situations. Not to mention interpreting over/under/shoot. Obviously two will never look identical but according to CD they should at least feel. This is where we run into a problem.
That's part of a project I am currently working on.
One major solution is obsoleting the old VESA GtG 10%-90% thresholds, since even a $25 Arduino photodiode tester (with a gain adjustment) can measure GtG 1%-99% pretty accurately for the entire heatmap of a 300-400 nit display, even in the darks.

VESA designed the standard using an oscilloscope directly connected to a photodiode without an opamp.

But today, the maker culture builds an opamp straight into an Arduino photodiode oscilloscope to amplify the sub-GtG10% and post-GtG90%.

Other techniques, such as repeating the GtG curve read and averaging the curves together, remove a lot of noise too, allowing a $25 tester to surpass the accuracy of a single pass of a $1000 Tektronix oscilloscope. It's the year 2022. This ain't the 1990s, VESA.

The VESA GtG and MPRT measurements are so outdated that I now consider them "compromised", which does a disservice to the people complaining about them. A better measurement method requires consultation with a wide number of reviewers (something I will begin doing soon too). VESA came up with a new motion blur measurement standard completely different from GtG and MPRT without concurrently fixing the GtG and MPRT 10%-90% thresholds, and now they've created major confusion.

I'm disappointed that VESA did not consult Blur Busters first for comment -- nor even cite the pursuit camera sync track, which is useful even for motorized cameras. Pixel-shifted images from a high-speed camera (>1000 fps) can also be used to create pursuit camera images. But that's not even necessary -- you just need a single-pixel photodiode, oscilloscope it all, and generate virtual pursuit camera images. We will have an answer (articles about this) in the coming months as we begin to reactivate the main page of Blur Busters news.

Stay tuned for an announcement. I can't say much yet, but I can confirm we're developing new standards. This thread really touches upon some current projects that I am doing internally.
Discorz wrote:
06 Oct 2022, 10:13
It definitely needs to be done right and be fully flexible at the same time. There are so many things to take into consideration. Phosphor decay also. If everything turns out right stroboscopic effect simulation should be just one click toggle.

It really is a dream.
And we're going to make this dream happen this decade.
A universal software-based display simulator is already in the works, but it will need a long incubation period.

I have lots of raster beam racing knowledge (see Tearline Jedi) and understand exactly how to make the same codebase simulate almost any single-pass scanout display (CRT/LCD/OLED) simply by changing variables. And with some further iteration, multipass-refresh algorithms (colorwheel simulation, DLP/plasma temporal dithering simulation).

It requires less than approximately 10,000 lines of code -- not that big for a display-simulator kernel that runs off simple variables. It's just complicated to understand. The problem is understanding the concepts: most researchers don't understand enough areas concurrently (a CRT researcher might not understand LCD+strobing behaviors), but I understand the refresh behavior of many display types simultaneously.

A professional may understand spatials excellently (e.g. MAME HLSL), but they don't understand temporals.

Some people have photographic memory, others math brilliance. I've got the temporal brilliance: I can simulate displays in my mind. Adjust a variable and I can usually picture the artifacts that result from it.

People who know me know that I've got a very good temporal mind -- I can simulate displays in my mind! That's how I invented www.testufo.com/ghosting and www.testufo.com/eyetracking ... I saw the artifacts in my head before I wrote those tests. I happen to be an occasional computer programmer too. That makes me ideally situated to convert my brain-based display simulator to a computer program. It's easy.

The main chicken-and-egg is that I need to be paid (funding) for the time, because I can't afford to do too many things for free -- I already do (e.g. TestUFO) -- but perhaps a Patreon could help light up certain projects like these, although I don't know what audience would be willing to pay. I don't have the cachet of a LinusTechTips or VSauce or MrBeast to pull in even a single thousand dollars' worth of Patreon per month -- so I rely on services at services.blurbusters.com (working with manufacturers) as well as banner ads to pay the bills, as well as pay my subcontractors and everything.

Nonetheless, a universal display simulator codebase could easily be created in 2023 if funding is available... A single $5000-$10000 donor would instantly pay for my time, get it done in a couple of months or so, and have it open-sourced under Apache-2.0 or MIT on GitHub.
- Useful for end users as education (super Blurinator equivalent)
- Useful for reviewers when importing photodiode data to create virtual pursuit camera images
- Useful for manufacturers to prototype future displays.
- Useful for non-realtime (e.g. simulated pursuit images, or slow-motion simulation of a display)
- Useful for real-time
.....CRT electron beam simulator for emulators
.....Add G-SYNC native quality overdrive to generic VESA Adaptive Sync displays (using GPU shader to do the VRR overdrive processing)
.....Etc

GPU shaders are so astoundingly powerful, that this can be done with less than 5-10% of GPU overhead now, even at 240Hz.

Initially it'd start by simulating a display using single-pass rolling scanout (CRT, LCD, OLED), with an optional global or rolling strobe. It would do accurate virtual pursuits of all 3 major display technologies (of any subtype like TN, VA, IPS), with various adjustable variables like phosphor decay, nits, Hz, and GtG formula or recorded curve data (from a photodiode oscilloscope / Arduino), etc. Hell, the Arduino device could be connected directly to the app, allowing users to do DIY custom overdrive, as one example.

That is pretty simple. Easier design of superior overdrive would be the first spinoff benefit of a display simulator.

Then it'd iterate to include more complex things like HDR features, VRR features, VRR dynamic overdrive, local dimming and its blooming artifacts, multi-pass refreshes (e.g. DLP, plasma) with subrefresh temporals, and other display-simulator features. Plug-in modules or plug-in shaders could add displays. You could even test new subrefresh dithering algorithms in a GPU shader before implementing them on an FPGA, or test new LCD overdrive algorithms before programming a scaler/TCON.

But initially -- for the basic display simulator kernel Version 1 -- it's not a complex program, just time-consuming to test and compare against displays I have to purchase to validate, or to spend time recruiting volunteers to compare against their displays, etc. -- stuff that takes away from paid time. For now, I have to stick to paid projects.

It is harder to do other things such as language porting or programming a Windows Indirect Display Driver (e.g. to do things like virtualize a 60Hz Windows Display which is then used to CRT electron-beam-simulate onto a future 1000Hz OLED) -- so that Windows thinks it's connected to a 60Hz CRT when the real display is a 1000Hz OLED (and the Universal Display Simulator is embedded in a third party Windows Indirect Display Driver)! Simulate the retro display of your dream, assuming you have enough Hz to simulate the retro display at fine enough temporal granularity. In theory, longer-term, I'd even be able to add beamracing support, so that software-simulated tearing can happen, or hooks can be added for lagless CRT scanout (via frameslice beam racing algorithm), e.g. an API called from an emulator to deliver a frameslice to be scanned out.

But the Display Simulator Engine is easy for me to write -- under 10,000 lines for Version 1 of CRT/LCD/OLED simulation engine -- just very complex for anybody other than me. You can see how tiny Blurinator is -- it's just a few lines of source code! But it only simulates LCD and only global refresh -- it does not simulate scanout (so it doesn't show top/center/bottom crosstalk differences). I also want to see it in a different language such as C# which is much more easily portable to C++ (e.g. emulators) and JavaScript (e.g. TestUFO simulators), and it so happens that my favorite casual development language is.... C#

The bottom line is the kernel of a relatively expandable & universal display simulator is shockingly rudimentary -- partially because displays are just electronic visualizations of an unchanged display signal -- a raster display signal is mostly unchanged in topography (vertical & horizontal sync and porches) and thus most displays have processed the signal in surprisingly minimal ways, despite the advancedness of current modern VRR FALD displays.

On the other hand, manufacturers might find a display simulator useful to prototype future displays.

TL;DR: The *start* of a Blur Busters Universal Display Simulator engine isn't a particularly hard project for me -- just needs to be funded by either a donation or a client. Or a BountySource prize, perhaps.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is eye tracked motion blur measurable?

Post by Chief Blur Buster » 09 Oct 2022, 16:16

Updated my post with additional information.
I'd be happy to answer any questions to clear up any confusion.

Discorz
VIP Member
Posts: 999
Joined: 06 Sep 2019, 02:39
Location: Europe, Croatia

Re: Is eye tracked motion blur measurable?

Post by Discorz » 15 Oct 2022, 17:43

Sorry for the late reply. I wanted to write a proper full reply to everything you wrote, but I kind of ended up lost in it as time went by...

My goal with this post was at least to get someone to see and consider such a blur-measuring approach, because none of the techniques we have today show the full picture. Ever since I realized most people don't know GtG and MPRT are separate metrics, I've been trying to figure out how MPRT and GtG could be merged into one number, so there's no confusion when people see stacked charts from reviewers where a 120 Hz monitor is "faster" than a 240 Hz monitor. Hearing that you are already working on something along these lines makes me very happy. The right person really is required to do the job, and you happen to be one, if not the only one.
Chief Blur Buster wrote:
08 Oct 2022, 11:05
The GtG transition doesn't pause -- it continues unseen by eyes.
I think I get it now. As frames scan through, same-color transitions stay flat, but any color differences between two frames curve normally. So GtG is consistent regardless of frame content. I hadn't imagined the scan-out as being that precise/pixel-accurate in my mind.
Chief Blur Buster wrote:
08 Oct 2022, 11:05
But the newer graph, is perfect to my mind:
VG279QM ELMB 280 Mid, 64-255-64, 1px line.png
That one is easy to understand -- the blue line is what the panel is doing independently of the backlight -- and the purple line is what the human eye (and pursuit camera) saw.
The purple line is what the photodiode saw. In this case the purple line resembles what the eye sees only because the transition is 1 pixel wide. If the content were a 2-pixel-wide line, the photodiode line and the eye line would already start to mismatch (a new, third line), which is why I think this one shouldn't be used either. This is where the eye-tracking curve I mentioned starts to blend in. If we take cumulative error as the metric of final blur, where the Y axis is RGB and the X axis is pixels, shouldn't the eye curve be taken into account? The eye curve cannot be captured by a photodiode ("eye curve" is not a very good expression, but I don't know what else to call it). To the photodiode, GtG and MPRT look consistent regardless, but to our eyes they do not, as motion speeds and content are constantly changing. Finding the connection between the two is the key to this concept. In the end, we are measuring eye-tracking blur.

All this makes me wonder whether there should be some agreement on what content to use for measuring, and at what motion speed, so it could be put in a heatmap and averaged just like with classic GtG measuring.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is eye tracked motion blur measurable?

Post by Chief Blur Buster » 15 Oct 2022, 22:33

Discorz wrote:
15 Oct 2022, 17:43
Chief Blur Buster wrote:
08 Oct 2022, 11:05
But the newer graph, is perfect to my mind:
VG279QM ELMB 280 Mid, 64-255-64, 1px line.png
That one is easy to understand -- the blue line is what the panel is doing independently of the backlight -- and the purple line is what the human eye (and pursuit camera) saw.
Purple line is what photo diode saw. In this case purple line resembles what eyes sees only because transition is 1 pixel wide. If the content was a 2 pixel wide line, the photo diode line and the eye line would already start to mismatch (new 3rd line) which is why I think this one shouldn't be used as well. This is where mentioned eye tracking curve starts to blend in. If we take cumulative error as a metric of final blur where Y-axis is RGB and X-axis is pixels shouldn't eye curve be taken into account? Eye curve can not be captured by photo diode (eye curve is not very good expression but idk what else to call it). To photodiode GtG and MPRT look consistent regardless, but to our eyes does not as moving speeds and content are constantly changing. Finding connection between the two is the key to this concept. In the end we are measuring eye tracking blur.
The purple line graph is best created in a separate test run involving a solid flashing color, not the moving-lines test.
Photodiodes are generally not used with moving-line tests.
The best way is to:
1 - Run a pursuit camera on your monochrome test (moving line or moving edge)
2 - Run a photodiode oscilloscope in a separate flash pass (full-screen flash)
3 - Superimpose the data.
This is also my beautiful point: the pursuit camera result is in sync with the photodiode oscilloscope test.

For the majority of reviewers, moving lines aren't and shouldn't be used for photodiode tests; instead, the photodiode is aimed at a solid patch of screen (e.g. this alternating flash test).
This is because a thin line doesn't emit enough light to create high-quality, low-noise photodiode data, and the sensor can be 'confused' by indirect emission of light from adjacent pixels (e.g. the line in a different position in the next refresh cycle can still backscatter light). So that's not a best practice.

Remember:

A photodiode is a de facto 1-pixel eye, a de facto 1-pixel ultra-high-speed camera, capable of 100,000+ "frames" per second, and is best used on a solid block of color, never on moving imagery. It's not well suited to moving-line tests, although it could be used for them (with major caveats).

Measuring a solid color change (e.g. full-screen grey to full-screen dark grey) is effectively a 1-pixel measurement. The great thing is that a solid screen looks the same to a multi-megapixel camera and to a 1-pixel camera, and that's exactly the kind of test pattern a photodiode is designed to measure.

Photodiode tests aren't used for multiple colors all at once -- you test one color transition at a time.

Then, with a clever trick (aggregating all the data), you can make virtual pursuit camera images just from that, by running measurements over the entire GtG colorspace (every 8-bit original color transitioning to every different 8-bit destination color is 65536-256 = 65280 measurements).

From that, you can compute behaviors for all possible images of all possible resolutions (and virtualize pursuit camera images), e.g. you're temporally testing each possible color combination one at a time, with a 1-pixel ultrahighspeed camera (aka photodiode). Once you've aggregated the data, there's no limit to the resolution of a virtualized pursuit camera image that looks WYSIWYG.
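
A minimal Python sketch of what that aggregated data could look like in code (hypothetical names, and a nearest-pair shortcut instead of proper interpolation between neighbouring grey levels):

Code:
import numpy as np

class GtGLookup:
    """Hypothetical aggregate of photodiode-measured GtG curves, one per
    (start, end) grey-level pair -- up to 256 x 255 = 65280 of them for an
    8-bit panel.  Unmeasured pairs fall back to the nearest measured pair
    (a sketch-level shortcut, not a recommendation)."""
    def __init__(self):
        self.curves = {}                                  # (start, end) -> (t_ms, levels)

    def add(self, start, end, t_ms, levels):
        self.curves[(start, end)] = (np.asarray(t_ms, float), np.asarray(levels, float))

    def level_at(self, start, end, t_ms):
        """Panel output t_ms after being commanded from 'start' to 'end'."""
        key = min(self.curves, key=lambda k: abs(k[0] - start) + abs(k[1] - end))
        t, lv = self.curves[key]
        return float(np.interp(t_ms, t, lv))              # interpolate along time

lut = GtGLookup()
lut.add(0, 255, [0, 2, 4, 8], [0, 180, 240, 255])         # made-up measurement
print(lut.level_at(0, 255, 3.0))                          # 210.0, interpolated in time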

Thus, the vision vs photodiode symmetry is preserved, at least for static motion artifacts including GtG and MPRT ghosting/coronas/blurs along edges (e.g. constant speed, no temporals aka temporal dithering, etc).

Now, vision curves are a different matter altogether -- it's akin to the color-matching problem (e.g. matching print to on-screen), but the error margin of vision curves is far below the error margin of LCD GtG and MPRT curves, as long as the images are within the dynamic range of the eye and also concurrently within the dynamic range of the camera. For the purposes of testing, we omit the vision curves since they vary a lot between humans, much like how different humans perceive R, G, B slightly differently. One human may have peak red sensitivity at 612 nm, another at 610.5 nm or 615 nm, etc. Ditto for green or blue. 12% of the population has enough aberration to be considered "color blind" (major color-primary sensitivity shifts). Yet we don't account for individual human colorspace curves when we boilerplate-calibrate a display to an aggregate, human-population-averaged standard such as sRGB, DCI-P3, or Rec.2020.

So, just like the color dimension (where we don't worry about human color-response-curve differences between individuals, nor about print-matching between paper and screen, when a reviewer is doing colorimetry tests against a specific universal standard), the temporal measurement dimension doesn't have to worry about human photoreceptor-curve differences. Besides, for bright images (like the UFO), the photoreceptor response variation of the average human is generally far below the finite GtG differences between displays, to the point where pursuit photography still effectively looks WYSIWYG, even in the subtle sub-artifacts embedded within a pursuit camera image.

This is a large rabbit hole to fall into -- for dimmer images (moving dark images), the human vision curves can definitely be far slower than the GtG curves. But that GtG is so slow that it's still seen (e.g. dark VA ghosting). Once we hit HDR OLEDs with ultra-black blacks, where we can see motion blur in dark objects (all of whose pixels are darker than LCD black), vision curves definitely start becoming a bigger error margin. But we aren't there yet, and that's putting the cart before the horse. Other standardization is needed first, for motion blur measurements that combine GtG and MPRT into a single metric.

At the end of the day, photodiodes are easy to understand when you view them simply as a 1-pixel ultra-high-speed camera, designed only to measure a single solid rectangle of color -- they're not used for moving-line tests.

HOWEVER, pursuit photography of a moving-line test DOES map to a photodiode oscilloscope along the axis of the motion vector (as a temporal plotting manoeuvre), and that is how the purple line was drawn.

The correct way to map a single GtG transition between a photodiode measurement and a pursuit camera measurement is a moving edge, not a 1-pixel-thick line. Right Test for the Right Job, to create bidirectional symmetry between a graph drawn only from a photodiode and a graph drawn only from a pursuit camera.

Moving lines should have a line thickness equal to the pixel step per refresh if you're measuring a 1-refresh-cycle double GtG transition (e.g. black->white->black), not a 1-pixel-thick line. And if you're holding the end color for 4 refresh cycles with the photodiode (e.g. black->(4 refresh cycles of white)->black), then you need a line thickness of 4x the pixel step to generate an identical chart without using a photodiode oscilloscope.
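
Expressed as a formula (just restating the rule above):

Code:
def line_thickness_px(pps, refresh_hz, hold_refreshes=1):
    """Thickness a moving line needs so its pursuit photo reproduces the same
    chart as a photodiode trace holding the end color for hold_refreshes
    refresh cycles: thickness = (pixel step per refresh) x hold_refreshes."""
    return pps / refresh_hz * hold_refreshes

print(line_thickness_px(960, 60))      # 16 px for black->white->black at 60 Hz, 960 px/s
print(line_thickness_px(960, 60, 4))   # 64 px when white is held for 4 refresh cycles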

Basically, you can use a photodiode oscilloscope alone, and you can use a pursuit camera alone, and generate exactly the same curve with both, as long as you configure variables within the overlapping venn diagram of capabilities (e.g. sample rate is much more limited with a pursuit camera). It is all in the correct configuration of test, and how to convert the resulting data into a chart.

It is much easier to sketch/diagram this to explain better. I probably should.

Discorz
VIP Member
Posts: 999
Joined: 06 Sep 2019, 02:39
Location: Europe, Croatia

Re: Is eye tracked motion blur measurable?

Post by Discorz » 24 Oct 2022, 09:40

Chief Blur Buster wrote:
15 Oct 2022, 22:33
The correct way to map a single GtG transition between doing a photodiode versus doing a pursuit camera, is a moving-edge, not a 1-pixel-thick line. Right Test for the Right Job, to create bidirectional symmetry between graph drawn only from a photodiode versus a graph drawn only from a pursuit camera.
I'm confused. Is it possible to extract the GtG curve from a pursuit camera or not? Because no matter what I try, the curves always overlap with the MPRT blur. I guess the only extractable part is outside the overlap?

[Attachment: A+B=C.png]

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Is eye tracked motion blur measurable?

Post by Chief Blur Buster » 24 Oct 2022, 10:52

Discorz wrote:
24 Oct 2022, 09:40
Chief Blur Buster wrote:
15 Oct 2022, 22:33
The correct way to map a single GtG transition between doing a photodiode versus doing a pursuit camera, is a moving-edge, not a 1-pixel-thick line. Right Test for the Right Job, to create bidirectional symmetry between graph drawn only from a photodiode versus a graph drawn only from a pursuit camera.
Im confused. Is it possible to extract gtg curve from a pursuit camera or not? Cause no matter what I try curves always overlap with MPRT blur. I guess only extractable part is outside overlapping?
The answer is yes.

However:
(1) Max oscilloscope “sample” rate can’t exceed camera resolution (of the line width)
(2) Camera response needs to be linear;
(3) Blacks should not be clipped (adjust until blacks are dark greys)
(4) Whites should not be clipped (adjust until whites are light greys)
(5) Camera tracking accuracy errors will smear the result closer to MPRT than to MPRT+GtG, since MPRT is usually the bigger component, and the camera-blur smearing will start to drown out the small GtG detail.

Capturing multiple refresh cycles is simply reducing noise, like doing multiple oscilloscope runs and averaging the curves together.

Perhaps things will become more accurate if I add a sync track to the moving line test. I didn’t think of this ability when that test was originally designed to be a visual-only test.

To spread the curve accuracy, you could use the darker color as the black reference and the lighter color as the white reference, but you then kind of lose the zero reference, though you could use the photodiode for that.
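
A tiny sketch of that reference-based rescaling (the luma numbers are made up for illustration):

Code:
import numpy as np

def normalize_with_references(luma_trace, dark_ref, light_ref):
    """Rescale a pursuit-photo luma trace so the darker test color maps to 0.0
    and the lighter test color to 1.0.  The absolute zero (true black) reference
    is lost this way; a single photodiode reading can restore it."""
    trace = np.asarray(luma_trace, dtype=float)
    return (trace - dark_ref) / (light_ref - dark_ref)

# Made-up luma trace of a 64->191 edge photographed without clipping:
trace = [52, 55, 70, 101, 138, 160, 171, 174]
print(normalize_with_references(trace, 53, 175).round(2))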

Discorz
VIP Member
Posts: 999
Joined: 06 Sep 2019, 02:39
Location: Europe, Croatia

Re: Is eye tracked motion blur measurable?

Post by Discorz » 24 Oct 2022, 14:44

Chief Blur Buster wrote:
24 Oct 2022, 10:52
The answer is yes.
Yes? But what about the overlap?
