Properly Designing A Blind Test That >90% Of Humans Can See 240Hz-vs-1000Hz (non-Game Use Cases Too!)

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers & Advanced Display Articles on Blur Busters. The masters on Blur Busters.
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Properly Designing A Blind Test That >90% Of Humans Can See 240Hz-vs-1000Hz (non-Game Use Cases Too!)

Post by Chief Blur Buster » 26 May 2022, 17:58

Discorz wrote:
26 May 2022, 03:38
For #3 control device jitter, in addition to 2000+ Hz mouse poll rate I'd note to use high DPI + low in-game sensitivity combo for reducing jitters even further. This only applies to in-game tests.
Good point about this. If a mouse must be used for this blind test, it must be acknowledged as an error margin (mouse Hz, mouse jitter, sensor accuracy, mouse pad texture, mouse feet stiction, etc).
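The DPI/sensitivity point can be made concrete with a small sketch. This is my own illustration, not from the post; all names and numbers are assumptions. One mouse count moves the view by a step proportional to in-game sensitivity, so keeping the same overall turn rate (cm/360) at a higher DPI implies a lower sensitivity, and each count, including a spurious jitter count, becomes a finer step:

```python
# Hedged sketch: why a high-DPI + low-sensitivity combo quantizes
# jitter into smaller on-screen steps. Illustrative assumptions only.

def step_per_count(sens: float, base_step: float = 1.0) -> float:
    """View movement per single mouse count (assumed linear in sensitivity)."""
    return base_step * sens

def equivalent_sens(old_dpi: float, old_sens: float, new_dpi: float) -> float:
    """Sensitivity that preserves the same cm/360 after a DPI change."""
    return old_sens * old_dpi / new_dpi

low_dpi_step  = step_per_count(2.0)                              # 800 DPI @ sens 2.0
high_dpi_step = step_per_count(equivalent_sens(800, 2.0, 3200))  # 3200 DPI @ sens 0.5
print(low_dpi_step, high_dpi_step)  # 2.0 0.5, i.e. 4x finer steps
```

Same aim point per hand motion, but any single-count sensor error lands on a 4x smaller on-screen step in the high-DPI configuration.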

Related links:
- Research paper: Mouse Jitter on 1000 Hz mice
- Proposed High Definition Mouse API (a future solution for software vendors)

From the paper: 1000 Hz gaming mice still have jitter, which needs to be solved with an 8000 Hz+ mouse.
Research DOI: https://dl.acm.org/doi/10.1145/3472749.3474783
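To put the poll-rate numbers in perspective, here is a hedged back-of-envelope sketch (mine, not from the paper): if one poll arrives up to a full interval late, the worst-case on-screen positional error scales with motion speed divided by poll rate. The motion speed chosen is an illustrative assumption:

```python
# Hedged sketch: worst-case on-screen error from one poll interval of
# timing jitter, at a given on-screen motion speed. Illustrative numbers.

def jitter_error_px(poll_hz: float, speed_px_per_s: float) -> float:
    """Positional error (pixels) if a poll is delayed by one full interval."""
    interval_s = 1.0 / poll_hz
    return speed_px_per_s * interval_s

# 2000 px/s is a fast but still eye-trackable pan:
print(jitter_error_px(1000, 2000))  # 2.0 px at 1000 Hz
print(jitter_error_px(8000, 2000))  # 0.25 px at 8000 Hz
```

At 8000 Hz the worst-case error falls below a pixel, which is consistent with the thread's argument that 1000 Hz polling is a measurable error margin in a blind test.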

[Images: jitter measurement figures from the research paper above]

*IMPORTANT NOTE: These tests used existing technology, i.e. current-Hz sample-and-hold displays at only 1080p.
However, even 4000 Hz-vs-8000 Hz poll-rate jitter can become more visible under the more extreme conditions of future displays:
(A) MPRT (in milliseconds) is equal to or less than the jitter error (in milliseconds). In this case, display motion blur no longer hides the tiny jitter.
(B) Motion is sufficiently fast (e.g. faster than 1/MPRT in pixels per second) while still being eye-trackable.
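The two conditions above can be sketched as a quick check. The 1/MPRT threshold comes from the note; the display/jitter numbers in the example are hypothetical:

```python
# Hedged sketch of conditions (A) and (B) above. The threshold formula
# (faster than 1/MPRT, expressed in px/s) is from the note; the example
# displays and jitter values are illustrative assumptions.

def jitter_may_be_visible(mprt_ms: float, jitter_ms: float,
                          speed_px_per_s: float) -> bool:
    cond_a = mprt_ms <= jitter_ms                # (A) blur no longer hides jitter
    cond_b = speed_px_per_s > 1000.0 / mprt_ms   # (B) faster than 1/MPRT in px/s
    return cond_a and cond_b

# Hypothetical 0.5 ms MPRT future display, 1 ms poll jitter, 2400 px/s pan:
print(jitter_may_be_visible(0.5, 1.0, 2400))  # True
# Typical 240 Hz sample-and-hold (~4.2 ms MPRT): motion blur masks the jitter
print(jitter_may_be_visible(4.2, 1.0, 2400))  # False
```

This is why the jitter matters more on future ultra-low-MPRT displays than on today's hardware.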
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


stl8k
Posts: 23
Joined: 15 May 2019, 07:59

Re: Properly Designing A Blind Test That >90% Of Humans Can See 240Hz-vs-1000Hz (non-Game Use Cases Too!)

Post by stl8k » 06 Jun 2022, 08:44

Great thinking here!

I'd also point out the work Rafal and others have done over the past 3-5 years on motion quality metrics. In a series of papers, his team has advanced the state of the art in motion quality metrics. To follow this work, start with his latest paper below and follow its references to the earlier metrics (linked for convenience below).
A contrast sensitivity function, or CSF, is a cornerstone of many visual models. It explains whether a contrast pattern is visible to the human eye. The existing CSFs typically account for a subset of relevant dimensions describing a stimulus, limiting the use of such functions to either static or foveal content but not both. In this paper, we propose a unified CSF, stelaCSF, which accounts for all major dimensions of the stimulus: spatial and temporal frequency, eccentricity, luminance, and area. To model the 5-dimensional space of contrast sensitivity, we combined data from 11 papers, each of which studied a subset of this space. While previously proposed CSFs were fitted to a single dataset, stelaCSF can predict the data from all these studies using the same set of parameters. The predictions are accurate in the entire domain, including low frequencies. In addition, stelaCSF relies on psychophysical models and experimental evidence to explain the major interactions between the 5 dimensions of the CSF. We demonstrate the utility of our new CSF in a flicker detection metric and in foveated rendering.
https://www.cl.cam.ac.uk/research/rainb ... /stelaCSF/

Content-adaptive Metric of Judder, Aliasing and Blur (CaMoJAB)
https://www.cl.cam.ac.uk/research/rainb ... h_Rate.pdf

FovVideoVDP is a video difference metric that models the spatial, temporal, and peripheral aspects of perception. While many other metrics are available, our work provides the first practical treatment of these three central aspects of vision simultaneously.
https://www.cl.cam.ac.uk/research/rainb ... deoVDP.pdf

The FovVideoVDP paper above has a great section dedicated to explaining the different types of metrics for motion quality.

We need to try to unify the applied work that you and the community do with the work that Rafal and others in academia are doing. One place to start is to map the applied language onto the academic language. I will put something together for further refining here.
