Are Hand-Wave Pursuit Cameras Silly/Stupid?

Many sites, including LinusTechTips, RTINGS, TomsHardware, and others, use the free Blur Busters pursuit camera invention. Now also available as a rail-less smartphone wave!

Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 10 May 2020, 17:25

Joel D wrote:
10 May 2020, 15:11
They all look the same to me, and not that important.
Valid criticism, but do you understand why they all look the same? (We do.)

It's mostly a camera quality issue. Hobbyist pursuits can be crap, but some of the better hobbyist hand-wave pursuits are more accurate than the worst reviewer pursuits.
Joel D wrote:
10 May 2020, 15:24
Then this whole thing about who does it right, and errors, etc.. can be put to bed. We'd see it for real.
Actually, most motorized rails are worse -- we tried! (see below for why)

Also, for hand-waves, the bigger problem is camera quality making them all look the same. You can digitally equalize a photo to make a TN photo look the same as an IPS photo -- blow out the highlights, saturate the colors, denoise it, etc. -- so ignore the colors, and focus only on the amount of blur / ghosting / coronas.
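
For illustration, here's a minimal Pillow sketch of that kind of over-processing (the file name is a placeholder and the enhancement values are arbitrary assumptions, not a real workflow):

[code]
# Sketch: over-processing that makes different panels' photos look alike.
# "pursuit_photo.png" is a placeholder; values are illustrative assumptions.
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

img = Image.open("pursuit_photo.png").convert("RGB")
img = ImageOps.autocontrast(img, cutoff=5)          # clips/blows out highlights
img = ImageEnhance.Color(img).enhance(2.0)          # oversaturates colors
img = img.filter(ImageFilter.MedianFilter(size=5))  # denoises away faint ghosting
img.save("equalized.png")  # TN-vs-IPS differences largely erased
[/code]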

Also, people who fixed-gaze only on crosshairs, and never play anything with scrolling/panning/turning (eye tracking), won't care as much about pursuits, because pursuits mostly measure the amount of motion blur (and trailing artifacts) in eye-tracking situations.

But with an upgraded camera, you'd be surprised: even several university graduates / Ph.D's now agree with me that there is validity in handwaves. The great thing is that a properly exposed sync track is metaphorically a recorded certificate of error margins:

Some of the professional reviewers do hand-waves too (don't use a rail) yet the photos look much better than all the hobbyist hand-waves here.

This is because of the accuracy-certificate factor of the sync track plus the sheer brute-force sampling factor (remember: the human eye is not on a rail).

Image

The bigger problem is camera quality and the willingness to practice versus the willingness to spend money. There were situations where a person with 10 hours of handwave practice could outperform a person with 10 hours of rail practice -- because of factors like rail vibrations and tripod vibrations (motorized and otherwise). Read on:

Ultimately, how you pursuit doesn't matter as long as it's consistently accurate to your required error margins. The error margins are measurable in the sync track! Also, motorized setups are sometimes less accurate than a professional manually-propelled rail, because of motor vibrations + camera vibrations + granular digital speed settings (slightly too fast versus slightly too slow). I've seen a $100 manually-propelled rail be far, far, far more accurate than a $2000 motorized setup. See the peer-reviewed conference paper for proof. It's how accurately you set up your rig.

The proof is in the sync track, no matter what your setup is -- the Venn diagram of accuracy overlaps. I've seen hand-wave pursuits that were superior to a cheap rail-based setup.

The moral of the story:
1. Prefer 4 tickmarks exposed
2. Photos should include multiple sync tracks
3. Lightly processed, not overexposed, not underexposed, not de-noised, etc.

How you pursuit is up to you. I've seen clever setups.
But, of course, interpretation of photos is most relevant to eye-pursuiting (e.g. eye tracking while looking at panning motions in DOTA2 or LoL, browser scrolling, FPS turning (looking at objects scroll past rather than fixed-gazing at crosshairs), and similar).

Static photography is a good analog of static eye gaze (e.g. stare at crosshairs)
Pursuit photography is a good analog of pursuit eye gaze (e.g. moving eyeballs tracking motion)

Image
(RTings)

Image
(HDTVtest.co.uk)

Image
(Blur Busters Early prototype #1)

Image
Wheeled LEGO device from the 2nd page of this thread.

Image
(Blur Busters Early prototype #2, which achieved this near-perfect LightBoost photo. A bit underexposed, but it represented the dimness of old LightBoost monitors, and the strobe crosstalk seen is very close to exact WYSIWYG)

An enlargement of a pursuit photo using the above wood-block rails:
Image
- This is from LightBoost 10% on an old ASUS monitor
- Successful WYSIWYG: The sync track was fully preserved
- Successful WYSIWYG: The sync track inversion artifact (WYSIWYG) was preserved including the tinting of the pixels (the patterning effect also seen in sync track). You often saw this when you did flickering patterns on 6bit+FRC TN LCDs (e.g. www.testufo.com/blackframes on a LightBoost monitor -- the 2nd UFO often gained amplified inversion artifacts)
- Successful WYSIWYG: The odd misalignment between the sync track and the monitor's inversion pattern, creating that 2-1-1-2-1-1 vertical patterning in the sync track (green-tinted pixels and cyan-tinted pixels); explained by the TestUFO graphics being offset 1 pixel downwards relative to the monitor's native voltage-inversion pattern.
- Successful WYSIWYG: The vertical screendoor effect was preserved (darn near zero vertical vibration)
- Successful WYSIWYG: The horizontal error margin is so accurate it preserved the subpixel-level blur artifacts including the WYSIWYG red fringe along the left edge of the UFO dome; that part is truly WYSIWYG. This happens with monitors in low-MPRT strobe backlight operation; the subpixel color fringes can be preserved for moving objects not just static objects (because red is the leftmost subpixel cell in RGB, and a tiny 1/3-pixelwidth red fringe shows up on yellow objects on black backgrounds -- you notice this with a magnifying glass).
- Successful WYSIWYG: The faint strobe crosstalk effect was successfully preserved in the UFO (LightBoost used to be really good on the VG278H and VG248QE).
- So many WYSIWYG elements were preserved, all the way down to subpixel artifacts
Assumption: You track eyes on the UFOs.
(Yes, that's not equal to real-world stationary-gaze on crosshairs in CS:GO. But there are other games and situations, such as reading text while scrolling (web browsers), looking at things during RTS panning, seeing tiny details during fast sports panning, or identifying camouflaged enemies in high-speed low-altitude helicopter flybys (Battlefield 3), etc. And it is super important if you're in VR, because head movements and shakiness create perma-panning in VR, necessitating zero added blur above-and-beyond human vision. A slow 30-degree head turn over 1 second can create blurry panning on most older VR screens, creating a distracting world-defocussing effect as you turn your head. Instant nausea. See! Applied necessity.)

Anyway, I haven't seen a motorized pursuit get this accurate (so far) in tracking error margin.

Sometimes blur is wanted, sometimes blur is unwanted. Everybody is picky in different ways. Some hate tearing. Some hate blur. Some hate stutters. We respect it if you prefer blur in exchange for other aspects.

Mind you, it helped that the camera avoided doing noise filtering.

In this situation, I found that vise-mounted wooden blocks actually stabilized the rail better than camera tripods, and actually performed better than the $30,000 rig. I've never seen a motorized camera track as well as the Blur Busters Prototype #2 -- due to motor vibration, or other factors like motors being 1/10th pixel too fast or too slow.

It's counter-intuitive. Yes, I thought motorized was better. But with incredible amounts of practice, a manual pursuit + sheer number of samples can compensate.

Motorized can still be worth it, and motorized can automate things -- someday -- I'd like to see someone invent an auto-compensating motorized setup that feeds back on itself in real time (e.g. a camera app that monitors the sync track automatically), sending realtime feedback via BLE or WiFi to an Arduino-controlled pursuit camera motor.
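
As a thought experiment, here's a minimal sketch of that feedback loop (using a plain serial link instead of BLE/WiFi for simplicity; measure_sync_track_tilt() and the SPEED firmware command are hypothetical assumptions, not an existing API -- pyserial and OpenCV are real libraries):

[code]
# Hypothetical auto-compensating pursuit rail: camera watches the sync track,
# nudges the motor speed in realtime.
import time
import cv2      # pip install opencv-python
import serial   # pip install pyserial

def measure_sync_track_tilt(frame):
    """Hypothetical: sync-track tilt in pixels per refresh cycle.
    Positive = tilted forward (rail too fast), negative = too slow."""
    raise NotImplementedError

cam = cv2.VideoCapture(0)                     # live preview feed
rail = serial.Serial("/dev/ttyUSB0", 115200)  # Arduino motor controller

speed = 960.0  # target pursuit speed in pixels/second
GAIN = 0.5     # proportional correction factor

while True:
    ok, frame = cam.read()
    if not ok:
        break
    tilt = measure_sync_track_tilt(frame)
    speed -= GAIN * tilt  # too fast -> slow down; too slow -> speed up
    rail.write(f"SPEED {speed:.2f}\n".encode())  # hypothetical firmware command
    time.sleep(0.05)  # ~20 corrections/second
[/code]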

Either way, people are inventing pursuit camera methods. It doesn't matter HOW you pursuit. The sync track is almost a cryptographic certificate of error margins, no matter how you tracked. That's why I invented the sync track: it embeds proof of tracking accuracy into the photo, which has revolutionized display motion blur photography industry-wide over the last seven years.

The quality of the camera is more important than the method of pursuiting. You simply keep testing, testing, testing different pursuit methods until the sync track looks more perfect. Many motorized setups (>90% of them), amazingly, are far less accurate than the manual method, or take longer to set up, because it's so goddamn difficult to get the motor speed exactly correct. With repeated manual propulsions, you get enough speed attempts to land a good manual pursuit within 10 minutes, compared to spending 2 hours trying to get the motor speed perfect on a motorized setup -- and some motors just have too many issues.

Now, I've seen some excellent motorized setups, and the sync track is a great verifier of motor accuracy. But most motors don't have realtime feedback on pursuit speed the way manual pursuiting does with your human eyes: if the sync track tilts forward in the live preview (screen on camera or smartphone) you know it went too fast, and if the sync track tilts backwards you know it went too slow.

Sufficiently trained, you can dynamically speed up / slow down your pursuit as you manually slide your camera during your shutter-held-down burst shoot (current favourite: Sony Alpha a6000 mirrorless with 11fps 24-megapixel burst shoot on a U3-speed SDXC card -- much lighter than a big SLR, and thus fewer vibrations on a sliding rail). So you get good pursuits in fewer attempts than with motorized approaches, because you can do 20 manual propulsion attempts, simply letting the camera slide on its own momentum.

An example of a good manual pursuit with a practically perfect ladder, completed on the Blur Busters Prototype #2 (woodblock version). The woodblocks gave better stiffness than tripods, especially when vise-mounted to a desk.

Image
(Click to zoom the photo, to look at the faint LightBoost strobe crosstalk more closely)

However, I've very rarely seen a motor do a sync track ladder as accurate as this manual rail technique did. Only when you get into 5-figure pricing.

As long-time readers already know, a peer-reviewed paper -- which I confirmed along with NOKIA, NIST.gov, and Keltek researchers -- had confirmed that a manually-propelled pursuit camera can reach the same accuracy as a $30,000 motorized rig!

Image

Image Image Image Image


A good reviewer can also simply set an error margin goal and design the rig around it.
That said, many reviewers are bad...

Mind you, compression problems and other issues (denoising, overexposure, underexposure) remain endemic.
This is common in hobbyist pursuits, but also among the worse reviewers (the ones worse than the best hobbyist pursuits).

So interpretation of a hobbyist pursuit may be much more limited due to these problems. That said, it can still reveal useful information like the amount of ghosting, the amount of coronas, strobe crosstalk, and other WYSIWYG effects that aren't filtered out by cheap cameras -- the stuff you normally see in the LCD Motion Artifacts FAQ and LCD Overdrive Artifacts FAQ.

I see a lot of problems with pursuit photography. Yes. But the fact that lots of good stuff exists now is great. I love reviewers that boast their sync tracks (pictures of the sync track). You don't see those on all review websites, so it's hard to trust their accuracy. The sync track, again, is literally like a certificate of camera tracking accuracy. One should not care how one tracks the camera, as long as the certificate of tracking looks good. Those A+ pursuits are quite obvious when I see the sync track. It's also nice when the EXIF is included on an unretouched photo, so I know what camera and settings they used, and can trust results even more.

I prefer a rail. The best rails do better.

However... The bottom line is that the best hand-waves are superior to the worst rail-based setups.

Scientifically, there's the brute-force sampling factor: 30fps video creates 1800 freezeframes in just 1 minute, meaning you can spray literally 1000+ photo attempts in a single minute of handwaving. The sheer number of scientific samples compensates for the inaccuracy of the handwave, because multiple orders of magnitude of extra samples means good chances that a sufficiently accurate pursuit occurred.

It might only be 1 out of 1000 video freezeframes that falls within your desired error margin. Video acts as a stand-in for burst shooting: it gives 1000 photos instead of 1 simply by waving multiple times, and one file contains all those thousands of "photos". A person can single-step through them to find the perfect freezeframe. And there's no cost outlay (no need to buy a rail if you have the time to become a proficient hand-waver after 30 minutes of practice).
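
A back-of-envelope check of that math (the 1-in-1000 hit rate is an illustrative assumption):

[code]
# If a single frame has probability p of landing within your error margin,
# N frames give 1 - (1 - p)**N odds of at least one usable freezeframe.
p = 1 / 1000   # assume 1-in-1000 freezeframes is accurate enough
N = 30 * 60    # 1 minute of 30fps video = 1800 freezeframes

print(f"{1 - (1 - p) ** N:.1%}")  # ~83.5% chance of at least one good frame
[/code]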

The worst reviewers' rail-based approaches are much worse than the best hobbyist video-based approaches. There is scientific validity in using a zero-cost pursuit camera, but it's easy to screw up. Eventually I'd like to see an app that automates it intelligently and creates results similar to a rail. (The analog human-brain-driven eyeball is not on a rail, after all, and technology has reached the point where video is a suitable stand-in under certain circumstances.)

Now, it's easy for people to do a crappy attempt at video, to the point where the pursuit is useless. You want a recent smartphone, 3rd party software, the ability to disable as much of the noise filtering as possible, and manual control over the per-frame video settings. Then the video becomes useful.

One can diss the video pursuits all they wish, or burst-shoot handwaves with a Sony Alpha a6000 (it works well too: stiffen your arms while holding the camera and shutter, and spin a computer chair to pursuit). You can get maybe 100-200 photos in about 5 minutes, then use a good photo viewer to quickly leaf through them. But most people only have smartphones and a video player (with a good manual app that adjusts exposure, ISO, fixed focus, etc. in the video), and that provides the opportunity of the $0 pursuit camera.

Refresh cycles don't even need to be aligned with the shutter, because human eyes don't align with refresh cycles, and the stacking of multiple exposures compensates. Though for temporally-generated tech (e.g. DLP, plasma, etc.), you ideally want exposures an exact multiple of refresh cycles, and an integer multiple of the dither cycle (e.g. 2-frame dither cycle, 4-frame dither cycle). That way, you don't get artifacts from truncated dithering, and you more accurately represent human vision averaging behaviours -- that's the principle of the multi-exposure stacking of a pursuit camera. Human vision integration behavior.
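
A minimal sketch of that exposure rule (the 120Hz refresh rate and 2-frame dither cycle are illustrative assumptions; the 4-refresh minimum matches the "4 tickmarks" preference earlier in this post):

[code]
# Pick a shutter time that spans whole refresh cycles AND whole dither cycles,
# so truncated dithering doesn't create false artifacts.
from math import lcm  # Python 3.9+

refresh_hz = 120    # monitor refresh rate (assumed)
dither_cycle = 2    # FRC dither repeats every 2 refresh cycles (assumed)
min_refreshes = 4   # prefer 4 tickmarks exposed

refreshes = lcm(min_refreshes, dither_cycle)  # = 4 here
exposure = refreshes / refresh_hz             # = 4/120 = 1/30 sec
print(f"expose for {refreshes} refresh cycles = {exposure:.4f} sec")
[/code]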

To do it professionally though, you need video that is as good as 30fps burst-shooting, and that usually requires expensive cameras. But some newer phones have manual apps that enable an ultra-high-bitrate 1080p or 4K mode (thanks to their 4K and 8K capability) that has less filtering than older video. Video has been used on a rail before too, and that is valid if the video is good enough.

In the long term, apps will choose the perfect freezeframe automatically, and a rail ultimately won't matter, because ever-improving camera sensors can duplicate a human eyeball's stabilization system. Personally, I'd prefer the app use 10fps burst-shoot over 30fps video, for higher-quality photos/freezeframes, but the ultimate goal is to not require a rail at all.
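
For illustration, a minimal sketch of such an auto-selector, assuming a hypothetical sync_track_error() scorer (the actual sync-track image analysis is the hard part; OpenCV's VideoCapture is a real API):

[code]
# Scan every freezeframe of a pursuit video, keep the one with the lowest
# estimated sync-track error. sync_track_error() is a placeholder.
import cv2

def sync_track_error(frame):
    """Hypothetical: combined tracking error (pixels), measured from the
    sync track region (tickmark misalignment + line merging)."""
    raise NotImplementedError

cap = cv2.VideoCapture("pursuit_handwave.mp4")  # placeholder file name
best = (float("inf"), -1, None)                 # (error, index, frame)

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    err = sync_track_error(frame)
    if err < best[0]:
        best = (err, idx, frame)
    idx += 1
cap.release()

err, idx, frame = best
if frame is not None:
    cv2.imwrite(f"best_freezeframe_{idx}.png", frame)
    print(f"frame {idx}: estimated tracking error {err:.2f} px")
[/code]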

The devil is in the details -- crappy videos by inexperienced members -- but we're not discouraging the practice, because there is validity, and it is allowing us to study the shortcomings of cheap compression codecs too. Even when I compliment the tracking, they are not as good as the best video pursuits done with a professional camera by a more trained hand-waver.

This doesn't mean there aren't major problems with hobbyist pursuits, including many of those posted lately. However, I still encourage them, because it is helping me discover which smartphone cameras do a good job, as well as helping me brainstorm ways to create future apps that simulate the stabilizing nature of an analog human eyeball, making rails less necessary.

Anyway, even Ph.D's have agreed that the hand-wave method has validity, because of the brute-sample factor (sheer numbers of photo samples compensate for decreased accuracy, yielding more opportunities for better-than-rail accuracy).

Once graduates/researchers learn more of the details, they stop laughing -- in exactly the same way people around here stopped laughing about 1000Hz displays.

P.S. Anyone want to help out in creating a hand-wave pursuit app? Perhaps make it your Ph.D project? Using various Matlab / ffmpeg / shader / AI algorithms / etc., to replace the rail while automating analysis and ensuring WYSIWYG effects. Today's camera software and smartphone sensors are crap but rapidly getting better, and emulating a human eyeball without a rail will progressively get easier and easier.



Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 11 May 2020, 11:59

Chief Blur Buster wrote:
10 May 2020, 17:25
They all look the same to me, and not that important.
Let me cross-post an example analysis.
Imperfect as it is, let's show some scientific validity:

Let's try to scientifically analyze a YouTube video, which is often crappy because of camera compression artifacts + YouTube compression artifacts + camera noise filtering + camera equalization + hand shakiness.

phpBB [video]


It looks crappy at first glance, right, eh? But read onwards:
Chief Blur Buster wrote: Freezeframe selection can obviously be better -- I have chosen a better one out of the video.

So, anyway, let's zoom onto an example of a good freezeframe from the RLCSContender video (regardless of opinion of his technique).

I analyzed this YouTube video, just for kicks.

Searching freezeframes is my proper way to analyze handwave pursuit camera footage. To grab the best freezeframe, I single-stepped using the "," and "." keys at the highest YouTube resolution, and found something near 0:07 to screen-capture and zoom -- a sync track error margin that I liked.

Hand-wave shakiness varies widely throughout a whole video file, but there are often instantaneous moments of sufficiently small shakiness. For this specific freezeframe, the handwave shakiness is definitely subpixel both horizontally and vertically. From what I see in the sync track, the accumulated vertical hand-tracking error margin over 4 refresh cycles was only approximately one-quarter pixelheight -- while an estimate, it is clearly a subpixel combined error margin.

Now, with tracking accuracy confirmed, it's time to closely analyze what's true WYSIWYG and what's not. This is somewhat subjective due to so many smartphone auto-everything behaviours, but at least the auto-everything behaviours are so well known that they are quickly identified.

Image

Once I trust the sync track certificate, I can finally analyze:

WYSIWYG Cons
- Compression artifacts out of the wazoo. Biggest error margin in analysis.
- Clipped histogram issue. The camera automatically oversaturates the colors (red bleed), blowing out other detail (e.g. black-line detail lost in the red UFO body).
- Focus is not subpixel-league, but adequate for basic analysis
- Much noise is completely filtered out due to compression artifacts and low resolution when zoomed
- Auto-sharpening in smartphone camera shows sharpening artifacts at high-contrast edges

WYSIWYG Pros
- It correctly shows faint horizontal screendoor lines (vertical error margin is very subpixel).
- It correctly shows the faint brightened corona to the left of the black UFO legs.
- It correctly shows a blurrier left-edge than right-edge, very common on most panels
- It correctly shows dome haze to the left (ghosting). All LCDs do this to 'some' degree -- for some people it's below their visibility noisefloor, while others really get distracted by it (much like some get distracted by tearing more than the next person, or pick-your-favourite-nitpick of motion -- everyone is picky in different ways).
- It correctly shows the faint color tinting of yellow UFO (reddish on left edge, greenish on right edge) from the blurred subpixels (R leftmost subpixel, B rightmost subpixel). I see this even in display-motionblurred motion when tracking eyes on UFO.

I see these artifacts instantly when I play videogames, but the next person doesn't. Everybody is picky in different ways for real-world games. Poor color? Tearing? Stutter? Faint ghost artifacts? Discolored ghosting? Etc. And sometimes what is not visible in one game (CS:GO) has amplified visibility in another (Fortnite) due to the different colors chosen.

Fortnite often uses more saturated TestUFO-like colors than CS:GO does, and the 3rd-person view means turn-blur and overhead flying objects more closely resemble TestUFO motion tests. So the real-worldness of TestUFO varies on a game-by-game basis. And of course, if you haven't controlled your microstutters (e.g. 400dpi mouse preference), you won't see TestUFO smoothness in your game, and thus won't see the artifacts as well.

And of course, artifacts are more visible at 4K 144fps than at 1080p 144fps, if you're LUCKY to have a 4K 144Hz vs 1080p 144Hz, and a GPU powerful enough (RTX 2080) to do a 144fps versus test in your favourite game. That's the Vicious Cycle Effect in action.

See? Still has some usable analysis value.
I prefer this stuff be done by professional reviewers, but I continually encourage hobbyists (of all stripes) to keep practicing hand-wave pursuit camera. It's still scientifically usable -- even Ph.D's / graduates agree.

The sync track, to me, is like a certificate of tracking accuracy, no matter how you tracked (rail or railless). Once that's verified, it's easy to identify camera limitations (e.g. compression artifacts, histogram blowouts, noise filtering artifacts, etc.), THEN continue analysis of what wasn't filtered out.

Another reason why I encourage readers to experiment with pursuit camera is to better understand how to improve error margin analysis.

I don't like the bait headlines (I edited that out, RLCSContender), but anyway, I'm not discouraging hobbyist pursuit camera. Keep it up everyone (no matter who you are). Just remember this is a discussion forum, carry a backpack full of salt grains, and remember to correctly analyze without boasting "This Is My Perfect Pursuit. LOL. MICDROP. TL;DR. TRUTH" -- that's going to be unceremoniously edited out. ;)

Pursuit camera is inherently imperfect. But as seen in the above, there's value especially when one works to find-best-freezeframe-and-zoom.

The art of freezeframe selection is sometimes a skilled job. The good news is that the job can be partially outsourced, sorta (hobbyist pursuit video + me choosing the best freezeframe with a sufficiently low tracking error margin).
The bottom line is that the sync track is like a built-in certificate of tracking accuracy.

This is the magic of my free pursuit camera invention -- it makes the pursuit method not matter -- the sync track is the certificate of trust that I use to analyze pursuit camera images. It's the proof that some motorized cameras were worse than some manual setups. It's the proof that handwaves have merit.
- There are good & bad motorized setups.
- There are good & bad manual rail setups.
- There are good & bad Rube Goldberg setups.
- There are even analyzable handwaves. Surprisingly so.

See -- even these crappy imperfect pursuits still have some analyzable content, if you exercise the proper art of freezeframe selection AND then analyze with the easily-known smartphone limitations in view.


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Joel D » 11 May 2020, 22:23

Hey thanks again for starting a whole thread based off my statements.

Yea so, let me say this, I definitely think a good "rail" system would be better than freehand. That's what I meant to say. Motorized was going too far -- I was just bringing up something I thought I saw in a movie-making behind-the-scenes thing and thought, hey, motorized! No, I was mistaken on that. I was envisioning a smooth-as-all-hell rail system.

To me - FreeHand = possibility of moving up and down a bit, and the speed always seems to change.

A good rail system will offer a *bit* of resistance to get a totally smooth flowing feel. It should never move up and down or jitter, wobble, or shake -- or you got a cheap one. The tiny unnoticeable resistance adds stability and smoothness.

They shouldn't use ball bearings IMO, as that would cause some jitter.

Hey, anyone invent a magnetic floating camera rail yet? Done properly (like some maglev trains in Japan and elsewhere), it should be ultimately smooth, cause it's touching nothing. Poorly done magnetic floating has that little "ball"-type feeling. Proper is literally floating on air, but with stability controlled by properly placed fields.

Done right, it shouldn't mess with the electronics either. Also, done right, magnetics can control adjustable speed. Theoretically it could then be synced to the computer to follow what you want. No motors needed. A more rudimentary (but prob more solid) way would be for it to just repeat an exact speed you pushed it at, so you start it by doing one pass with your hand. It then keeps repeating that while you dial it in. All passive, thanks to magnetics.

Cause I agree, motors are never going to be totally smooth. It's a motor. Magnetics can be a passive motor in a sense. Nothing "running" or "moving", so totally smooth.

But all that aside, wow, great info. Still reading it all. Prob gonna take weeks to set in. LOL


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Joel D » 11 May 2020, 22:30

So, my first question.

Given the way a monitor is built and how pixels are positioned on the screen, is up-and-down movement and diagonal movement across pixels an equal representation to the famous side-to-side you always do? Curious why you don't have to do all 3 directions to truly prove a monitor's worthiness?

Like, why only side to side? I'd like to see it diagonal across the screen and up and down too. Cause this is how things in real-world usage will indeed move across the screen. What if the way monitor "A" is designed just happens to look really great in the UFO test side to side, and monitor "B" does not -- but if you were to do it diagonally, then from sheer accidental design, monitor "B" performed better than monitor "A"?

Is this impossible ?

If not, then it seems we would want to run all 3 tests always and pick the monitor that performed the best average of the 3 tests.


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 12 May 2020, 14:51

Great replies and agree with all of it, except:
Joel D wrote:
11 May 2020, 22:23
To me - FreeHand = possibility of moving up and down a bit, and the speed always seems to change.
You are correct, but that's not necessarily a 100%-con-only con -- it actually feeds back into making hand-waves possible.

That's what makes the sync track invention so great.

Hand Wave Science: "The randomization of hand movements" PLUS "Brute-force samples of video" PLUS "Embedded accuracy verification" EQUALS "Useful science"

You can see that there are many freezeframes with subpixel vertical and subpixel horizontal error (see the above freezeframe shot).

The magic of the randomization gives a statistical spray, whereupon one rare frame is more accurate than most average rails. The magic of video means 1000 frames in less than 40 seconds, and 10,000 frames in less than 6 minutes.

30fps video = 900 photos in 30 seconds
30fps video = 18,000 photos in 10 minutes.

Some cameras now provide photo-quality freezeframes -- the high-bitrate 4K and 8K cameras, for example, with light compression. Even some smartphones now have a high-bitrate video mode where each freezeframe is better quality than a classic 12-megapixel photo. So video is a great burst-shoot alternative, assuming video's disadvantages continue to be eliminated technologically.

Brute sample rate + randomization = chances of great accuracy

That's 4 orders of magnitude of random samples in a single video file, which can then be analyzed automatically or manually (via rapid jog-shuttling + single frame-stepping). The sheer sample rate makes the hand wave scientifically viable, because the error margins are measurable!

See? In our situation, the randomization is good when combined with a brute-force sample rate, such as in video files.

So, it's no longer necessary to dismiss hand waves as useless. The bigger problem is camera quality, as seen above. It's not as good as the best rails, but it definitely outperforms many $100 rails, and it's the "Free $0 Pursuit Camera" approach for those who want to pay with time instead of money -- most people have a smartphone, and thus most people have a camera in their hands (one that is rapidly improving, including freezeframe quality and stabilization).

This is the rationale of the free hobby pursuit camera -- $0. In the long term, camera stabilization can become as good as a human eyeball, with none of the anti-shake artifacts of old-fashioned motion stabilization. But in the interim, all pursuit approaches achieving successful accuracy are scientifically valid, if the error margin is measurable -- as it is with the sync track invention.

All the rest of what you said is correct and useful. I'd love to see a maglev rail with automatic speed feedback. I think Arduinos and speed-verifying sensors are now good enough to create a method that provides viable realtime speed regulation.

Even plain motorized on a vibration-absorbing sprocket ribbon could work. That's how 3D printers are so accurate: the sprocket ribbons absorb the motor vibrations. I think that's easier -- the inkjet-head / 3D-printer-head technique. The problem is applying analog speed control to that digital movement.

If you look closely, you will agree that (for the purposes of 960 pixels/second science) the bigger problem is camera compression and video quality, more so than the handwave technique itself.

BTW, related article, Making Of: Why Are TestUFO Display Motion Tests 960 Pixels Per Second?.


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 12 May 2020, 15:07

Also, let's analyze a freezeframe screenshot further from a hand-wave:

Image

Yes, this is an enlarged/zoomed freezeframe screenshot from a hand-wave video.

Most of the problem is compression related / camera resolution related, but you can still observe the relatively good "chance" tracking accuracy in the spray of random samples.

The currently measured error margin is approximately 0.25 pixels horizontally and vertically. And that's a hand wave. The combined vertical error shows up approximately between the third and fourth tickmarks (two lines merging into one bright line).

The bigger problem is camera quality. Compression artifacts, focus, de-noising, color auto-equalization, etc.

But videos are rapidly improving in quality, to the point where some video files are literally equal to burst-shooting -- like a 30fps burst-shoot of thousands of great photos. Phones arriving with 8K-capable sensors usually have a higher-quality high-bitrate 4K or 1080p mode with low amounts of noise filtering and low amounts of compression -- equivalent to a photo per freezeframe, as long as you adjust settings correctly. Technology is improving.

All pursuit approaches are welcome though -- we'd love the maglev rail, but that will never replace a $0 free pursuit camera.

If that still doesn't convince you that hand waves can have scientific merit (Even Ph.Ds agree too), please look closer:

Image
(Click to zoom)

All of those tickmark misalignments are universally less than one pixelwidth and less than one pixelheight.

We already know the pixel size by looking at the horizontal lines and confirming that the black lines and white lines are roughly the same thickness (within subpixel margins); this shows the vertical error margin was sufficiently subpixel. But the tickmarks themselves even provide mid-handwave error margins! This provides a certificate of errors at multiple locations -- even the inconsistencies at the left and right edges are simply rotation vibrations (e.g. minor rotation of the camera around its own axis, because the misalignments are not symmetric between the left edge and the right edge).

The tiny random shakiness, for this specific frame, was consistently and continuously subpixel for the whole duration of this specific photo -- for the time interval it was collecting photons from the monitor (i.e. any time there was illumination from the monitor; all 4 refresh cycles were captured, translating to 1/60sec on a 240Hz monitor, or 1/30sec on a 120Hz monitor). For a sample-and-hold display, that's also pretty good shutter-time alignment to darn nearly exactly 4x refresh cycles. (Shutter-time-vs-refresh imperfections show up in the sync track.)

Although camera sensor scanout (e.g. 1/240sec, 1/480sec, 1/1000sec) on non-global camera sensors can add error margin, most modern smartphone sensors now scan out faster than a single refresh cycle, and this error margin is bypassable (portrait smartphone on a landscape screen, or landscape smartphone on a portrait screen) or tolerable (as in www.testufo.com/scanskew seen on a 60Hz DELL monitor). In this situation, camera scanout artifacts are negligible, thanks to the lack of scanskew tilting (even within the 4-pixel-tall tickmarks).

Lots of the error margins are knowable and measurable; the sync track was intentionally designed this way to make so many error margins visible -- basically, the sync track is metaphorically a certificate of embedded error margins, for every single video frame, every photo, etc.

Horizontal error shows as horizontal tickmark misalignments.
Vertical error shows as vertical tickmark misalignments.

But none of that happened more than a tiny fraction of a single pixel.
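
To make that measurement concrete, here's a minimal sketch of reading error margins off detected tickmark positions (the coordinates and spacing below are made-up example numbers, not measured from this photo):

[code]
# Compare detected tickmark centers against the ideal evenly-spaced ladder.
ideal_spacing = 16.0  # expected pixels between tickmarks at this zoom (assumed)

# (x, y) tickmark centers as detected in the freezeframe (hypothetical values)
ticks = [(100.1, 50.0), (116.3, 50.1), (132.2, 49.9), (148.4, 50.2)]

x0, y0 = ticks[0]
h_err = [abs((x - x0) - i * ideal_spacing) for i, (x, _) in enumerate(ticks)]
v_err = [abs(y - y0) for _, y in ticks]

print(f"max horizontal error: {max(h_err):.2f} px")  # 0.30 px here
print(f"max vertical error:   {max(v_err):.2f} px")  # 0.20 px here
[/code]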

I may be exaggerating a bit in this paragraph -- but you see how well I designed the sync track -- to the point it can be treated almost like a cryptographic-league certificate of tracking accuracy.

As you can see, the zoomed screenshot of a video freezeframe confirms that for this specific freezeframe moment (in the brute-force firehose sample rate of a video file), the random tracking accuracy fell within the needed error margins.

See? ;)

Now you're convinced!

But yes, I like your auto-compensating maglev rail idea. Hard to do in a $0 method. But in 25 years, timecoded photons in a theoretical timecoded-photon camera could make a handwave as perfect as a maglev rail, since an algorithm could theoretically realign all of them to perfect locations...

Mind you, the technique of handwave pursuits needs to improve, but apps can make that happen more automatically (gamify it a bit -- a user keeps sweeping the phone until it goes beep, via an artificial-intelligence error-margin analyzer, and via software that turns off camera compression and camera noise filtering, only saving/choosing good freezeframes).

But imagine in a few years: a near-perfect 8K camera (below the human-vision noisefloor for pursuit camera use) with no compression artifacts -- pretend camera limitations don't exist. Then the main remaining problem is the error-margin issue.

I'm hoping that within five or ten years, a rail-quality hand-wave app can be developed. Much easier to do than a maglev rail, given the rapidly improving ease of artificial-intelligence toolkits. One that will bypass a lot of camera problems (video compression, noise filtering).

Basically, a custom pursuit camera app can be created -- one where you configure your required tracking accuracy ("0.3 pixel tracking accuracy") and then keep waving the smartphone until it goes beep. Done. Preconfigured. It'd do focussing / correct color temp / avoid denoise / avoid overequalize / avoid colorclip / correct ISO / etc. No rail, no cost, just an app to enforce consistency.

Future 8K camera sensors of year 2025 should be able to do photo-quality freezeframes quite easily, realtime-analyzed by the app via GPU AI. Then it'd embed numbers ("Vertical Error: 0.3; Horizontal Error: 0.29"), so that pursuit camera becomes a gamified task that goes beep when accuracy margins are met.
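
A minimal sketch of that gamified loop (the error analyzer is the same kind of hypothetical scorer as in the earlier sketch; "\a" is just the terminal bell):

[code]
# Wave the phone/camera until one frame meets the preconfigured margin.
import cv2

def sync_track_error(frame):
    """Hypothetical realtime analyzer returning tracking error in pixels."""
    raise NotImplementedError

TARGET_ERROR_PX = 0.3  # user-configured accuracy requirement

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    if sync_track_error(frame) <= TARGET_ERROR_PX:
        print("\a beep: accuracy margin met")  # gamified success signal
        cv2.imwrite("pursuit_capture.png", frame)
        break
cam.release()
[/code]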

I've even seen many rails way, way, way worse than that (jiggle, bends, vibrations, stiction, camera weight, lens vibration, internal camera flex, tripod flex, tripod vibrations, old wood floor flexing under tripods, etc, etc, etc, etc, etc).

Such a theoretical "easy AI pursuit camera app" is cheaper (potentially a free app) and more mainstream than a maglev rail, and potentially just as accurate (at least with tomorrow's camera technology and nanosecond-granularity or photon-level stabilization algorithms). Blur Busters is an incubator of display testing inventions, and many reviewers include Blur Busters testing methods.

Now mind you, forum members posting videos without freezeframe selection sometimes look useless, as they all look the same (90% camera fault and user fault) -- UNTIL one properly executes cherrypicked zoomed freezeframe analysis like I did -- whereupon the differences start to actually emerge from the different pursuits.

People doing very crappy freezeframe selection is also part of the issue: recording a video, but not selecting a proper freezeframe -- one with massively worse accuracy margins than a different freezeframe in the same video. Forum members misled by other users' crappy pursuit photos, however, are no worse off than forum members misled by crappy static photos; it just requires similar moderating -- or some guidance on improving the technique (whether static or pursuit).

The magical convenience of a single video upload (compression notwithstanding) is that it's a record of all the random shaky samples -- so somebody else re-selecting a better freezeframe is possible. Outsourced analysis. A professional freezeframe-selecting person (like me) can find an adequate one in the spray. One can rapidly jog-shuttle the video slider, watch how the sync track wavers, tilts back and forth, focuses and defocuses, and quickly hone in on the accurate freezeframes. To other forum users, all the videos are crap and look the same...

...But a video file means it's possible to hone in quickly on accurate freezeframes (if any); one can easily manually hone through 10,000 frames via jog-shuttling in a very good desktop video player -- sliding the slider back and forth, watching the sync track, until you get close to something potentially good, then single-framestepping towards the good sample. I can find great freezeframes manually (with just my eyes) in less than 30 minutes of slider jog-shuttling (dragging back and forth) through the 1,800 frames of a 1-minute video clip of a pursuit camera waving back and forth, combined with keypress single-stepping forwards/backwards.

The magic of video players makes it easy to surf for good sync track freezeframes. No photo viewer can surf through over 1,000 photos as quickly as a good video player surfs through 1,000 freezeframes, which is why I absolutely adore photo-quality video (lightly compressed 8K) as a surrogate stand-in for burst-shooting. Given the choice of 100 photos versus 2,000 video freezeframes (of identical compression quality and resolution per freezeframe), I'll take the video file -- it's much faster to manually find accurate tracks, thanks to how convenient video players are compared to photo viewers for surfing through the equivalent of thousands of photos.

Now imagine an "AI pursuit camera app" doing that automatically, going green/beeping when it confirms it captured a photo where tracking was momentarily rail-quality (thanks to the randomization of shakiness creating instantaneous moments of stellar accuracy as it crosses through the too-fast/too-slow points of the handshake randomness).

This is how a handwave eventually becomes as stable as a maglev rail -- and for free.
(In a decade or so.)

But yes, we would love a maglev rail today, since it will take time for tech to catch up -- to allow app tech to catch up (it's already almost there). All pursuit methods are valid, and the sync track is a great verifier of random pursuiting methods.

That said... we think outside the box -- just like frame rate amplification technology, LCDs with less motion blur than some CRTs, the Journey to 1000Hz displays, and tons of other Blur Busters-worthy topics -- micdropping these debates.

See? ;)


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 12 May 2020, 18:48

Camera Issues Are Often The Bigger Problem

Here's additional proof that cameras are usually the bigger problem, versus pursuit accuracy.

One needs to understand the camera. Imagine taking a photography class. Know how to operate a manual SLR camera? Understand the histogram? Understand color clipping issues? Digital zoom issues? Compression artifacts? (Which is usually worse for video than for static photography).

Allow me to post an original-vs-camera image (a freezeframe from a digitally-zoomed video using a 3rd party video recording app on an iPad running in Vibrant mode), then shrink the video file size slightly to demonstrate video compression interference. There are some side effects that are rather visible. This is an intentional exaggeration of camera weak links, but some phones are worse than this at their default settings! So, there are limitations to the WYSIWYG-ness of camera images.

Observe how the camera compression, color distortion, and histogram distortion, can distort the WYSIWYG-ness of strobe crosstalk.

Image

Examples:
- Camera compression & codec overcompression
- Color gamut clipping
- Saturation effects
- Blowout effects

Observe that the 1% and 3% crosstalk are gone in the camera image, and even the 10% is much fainter.
Even the compression-artifact fringing becomes more visible than the 1% and 3% crosstalk.
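
You can reproduce this kind of degradation yourself -- a minimal Pillow sketch (the file names and quality setting are placeholders):

[code]
# Re-save a pursuit photo at aggressive JPEG quality; faint crosstalk detail
# near black gets buried under compression error.
from PIL import Image, ImageChops

original = Image.open("pursuit_original.png").convert("RGB")
original.save("pursuit_compressed.jpg", quality=20)  # heavy compression

compressed = Image.open("pursuit_compressed.jpg").convert("RGB")
diff = ImageChops.difference(original, compressed)
print("per-channel (min, max) loss:", diff.getextrema())
# A few 8-bit levels of compression error is enough to erase 1%-3% crosstalk.
[/code]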

Cameras vary a lot, too. Cameras are useful but cameras are not perfect replacements for human eyeballs.

Maglev accuracy is useless without a sufficiently good camera: In other words, don't use an old 720p webcam with a maglev rail. :D

...However, cameras are improving rapidly. Some cameras now do video nearly identical in quality to 24fps, 30fps or 60fps burst-shooting. So, video becomes a burst-shoot stand-in. If you look at hobbyist smartphone pursuits (video or static, rail or not!), they mostly all look the same because of camera limitations, not because of pursuit technique.


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 13 May 2020, 00:33

Joel D wrote:
11 May 2020, 22:30
Given the way a monitor is built and how pixels are positioned on the screen, is up-and-down movement and diagonal movement across pixels an equal representation to the famous side-to-side you always do? Curious why you don't have to do all 3 directions to truly prove a monitor's worthiness?
Diagonal and vertical motion will be very similar.
Eye-tracking display motion blur is practically identical vertically/horizontally/diagonally on most displays.

There are a few displays where it diverges due to the display algorithm, such as video interlacing www.testufo.com/interlace (of old CRT days). But progressive-scan sample-and-hold is identical in all dimensions. Though you have to go about 1.4 pixels horizontally to match the physical distance of 1 pixel diagonally. Physical-distance motion blur is identical in any direction.
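
(A quick check of that ~1.4 figure -- a 1-pixel diagonal step moves 1 pixel horizontally and 1 pixel vertically at once:)

[code]
# A 1-pixel diagonal step covers sqrt(1^2 + 1^2) = sqrt(2) of physical
# distance, so ~1.41 pixels of horizontal motion match 1 diagonal pixel.
import math
print(math.hypot(1, 1))  # 1.4142...
[/code]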

However, horizontal motion is easiest because
-- It's the wide dimension of the monitor
-- More tracking time means more time to pursuit
-- It's easier to level a camera rail horizontally
-- Most panning motion in FPS games is horizontal (turning / strafing)

That said, vertical motion testing has merit, and some TestUFO motion tests should eventually have a vertical mode, at least to answer people's questions. Some of them already do, but without an obvious configuration option.
Joel D wrote:
11 May 2020, 22:30
Like, why only side to side? I'd like to see it diagonal across the screen and up and down too. Cause this is how things in real-world usage will indeed move across the screen. What if the way monitor "A" is designed just happens to look really great in the UFO test side to side, and monitor "B" does not -- but if you were to do it diagonally, then from sheer accidental design, monitor "B" performed better than monitor "A"?
I haven't seen this happen with progressive scan LCDs and OLEDs. But it's not impossible for CRTs due to video interlacing.

Also, some Jumbotrons using multiscan (multiscanning = zigzag artifacts during www.testufo.com/scanskew ...) can produce some interesting artifact differences between horizontal motion and vertical motion.
Joel D wrote:
11 May 2020, 22:30
If not, then it seems we would want to run all 3 tests always and pick the monitor that performed the best average of the 3 tests.
Unnecessary for the majority of displays -- sequential-scan progressive displays such as LCDs and OLEDs.

You can also test vertical motion by rotating your display into portrait mode and running TestUFO that way. That will allow you to record vertical motion artifacts. However, no visible differences have thus far been measured for IPS / TN / VA LCDs -- all motion vectors on progressive-scan displays generate the same amount of persistence motion blur at these human timescales. Scanout direction does produce scanskew differences (www.testufo.com/scanskew), and inversion-amplification patterns such as www.testufo.com/inversion can show differences. But these don't affect persistence display motion blur.

P.S. www.testufo.com/scanskew is fun to look at on older 60Hz iPads and on DELL 60Hz monitors. They're really slow-scanning, so there's a lot of scanskew. It's an artifact of sequential refresh becoming human-visible (see high-speed videos of display refresh scanout, where not all pixels refresh at the same time). It can appear/disappear after rotation (of either the pattern or the display physically). But even that scanskew difference doesn't affect the amount of motion blur at all, regardless of vertical or horizontal motion.

Although we're getting a bit offtopic from "Are Hand-Wave Pursuit Cameras Silly/Stupid", so scanskew questions should be posted in the Scanout Skewing Thread. It's an amazing side effect that most people don't notice until they pay attention -- it also explains the parallelogramming or jelly effect of windows being dragged around on a 60Hz monitor (while eye-tracking the dragged window).


Re: Are Hand-Wave Pursuit Cameras Silly/Stupid?

Post by Chief Blur Buster » 07 May 2022, 14:34

Joel D wrote:
11 May 2020, 22:30
Like, why only side to side? I'd like to see it diagonal across the screen and up and down too. Cause this is how things in real-world usage will indeed move across the screen. What if the way monitor "A" is designed just happens to look really great in the UFO test side to side, and monitor "B" does not -- but if you were to do it diagonally, then from sheer accidental design, monitor "B" performed better than monitor "A"?
Update: More data as of year 2022.

Further data collected shows that motion blur is identical on nearly all (>99%) flat panels regardless of motion direction. It used to be an issue only on old interlaced displays; all modern flat-panel progressive-scan displays have the same motion blur mechanics in all axes.

Since horizontal is the long dimension, it allows a longer motion pass for a given number of pixels per second. It's also easier for rails.

In addition, we also found that a handwaved iPhone 13 -- an optically stabilized smartphone -- can outperform some midrange to high-end-ish mobile field rail setups.

See BREAKTHROUGH: Hand-waved iPhone 13 pursuit camera outperforms rails for rapid field testing

The improved optical stabilization and SLR-quality freezeframes in high-quality, high-bitrate 4K HEVC recordings, with accurate camera color and exposure (with the assistance of existing apps), reduced 5-minute handwave attempts to just 15 seconds in many cases.

As phones get better and better, a rail will become less and less necessary (unless a large budget is spent on the rail). Artificial intelligence can also assist, by automatically recognizing perfect pursuits via AI recognition of the sync track.
