thatoneguy wrote:I still stick by my theory that if we can achieve a lower image persistence than the human eye can perceive (say for example 1 nanosecond) the multiple image effect will be eliminated at any refresh/framerate combo.
It's actually pretty close, but not identical.
Persistence affects the eye-tracking situation.
The multi-image (phantom array) effect affects the non-eye-tracking situation (stationary gaze).
-- Imagine a theoretical robot arm moving a theoretical 1,000,000 Hz mouse.
-- Imagine the robot arm moves the mouse really, really, really fast -- like 1,000,000 pixels per second.
-- You do a fixed gaze.
-- On a theoretical 100,000 Hz (100 KHz) refresh rate display, the mouse arrow steps would be 10 pixels apart (1,000,000 / 100,000 = 10). See the sketch after this list.
-- That said, with refresh cycles only 1/100,000 sec long, you would need a huge number of nits (e.g. 10,000 or 100,000 nits for white pixels) in order for the 100 KHz phantom array effect to be clearly visible.
-- Indeed, such a display would be too blindingly bright (even if only the mouse arrow pointer was that insanely bright).
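To make the arithmetic concrete, here's a tiny Python sketch (the function name is my own, purely illustrative) that computes the phantom-array step spacing for a given motion speed and refresh rate:

```python
# Phantom array step spacing: how far apart the repeated images appear
# during a fixed gaze, for a given motion speed and refresh rate.

def phantom_step_px(speed_px_per_sec: float, refresh_hz: float) -> float:
    """Pixels of separation between successive images of a moving object."""
    return speed_px_per_sec / refresh_hz

# The robot-arm mouse example from above:
print(phantom_step_px(1_000_000, 100_000))  # 10.0 px apart at 100 KHz
print(phantom_step_px(1_000_000, 1_000))    # 1000.0 px apart at 1 KHz
```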
(Note: An infinitesimally short flash is visible to the human eye, given sufficient photons. For example, a 1 microsecond flash is visible to the human eye if it's 1000 times brighter than an equivalent 1 millisecond flash. Same number of photons. Equally "instantaneous looking" to the human eye. But both are still visible.)
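And a back-of-envelope version of that flash-visibility note (the constant-photon-count tradeoff for brief flashes, in the spirit of Bloch's law; the numbers are illustrative):

```python
# Constant photon count: a flash N times shorter needs N times the
# luminance to deliver the same number of photons to the eye.

def equivalent_nits(ref_nits: float, ref_duration_s: float,
                    new_duration_s: float) -> float:
    """Luminance needed so a shorter flash delivers the same photon count."""
    return ref_nits * (ref_duration_s / new_duration_s)

# A 1 microsecond flash needs 1000x the nits of a 1 millisecond flash:
print(equivalent_nits(100, 1e-3, 1e-6))  # 100,000 nits
```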
The solution to fix both situations is:
(A) Enough hertz to solve the phantom array effect for movements matching the maximum human eye-tracking speed
This fixes the eye-tracking situation
(B) Add GPU motion blur between refresh cycles to eliminate the phantom array effect
This fixes the stationary-gaze situation
Doing (B) will slightly interact with (A), so you might need to go to a 2x factor above that maximum to simultaneously satisfy (A) and (B). Basically X,XXX Hz with GPU-added motion blur of 1/X,XXXth second between all adjacent refresh cycles. A sketch of the idea is below.
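Here's a minimal sketch of what (B) means in practice, assuming a simple accumulation-style blur (illustrative only, not any particular engine's API): the GPU renders several sub-frame samples spanning the full gap between adjacent refresh cycles and averages them, so a stationary gaze sees a continuous smear instead of discrete steps.

```python
import numpy as np

# GPU-added motion blur between refresh cycles (accumulation-style sketch).
# Instead of displaying one instantaneous sample per refresh, average several
# sub-frame samples spanning the full 1/refresh_hz interval, so adjacent
# refresh cycles "touch" and the phantom array stepping disappears.

WIDTH = 64                  # 1-D "screen" width in pixels
REFRESH_HZ = 100.0          # display refresh rate
SPEED_PX_PER_SEC = 1000.0   # object motion speed (10 px per refresh here)
SUBSAMPLES = 10             # sub-frame samples blended per refresh cycle

def render_instant(x_px: float) -> np.ndarray:
    """One instantaneous 1-D frame: a single lit pixel at position x."""
    frame = np.zeros(WIDTH)
    frame[int(x_px) % WIDTH] = 1.0
    return frame

def render_blurred(t0: float) -> np.ndarray:
    """Average SUBSAMPLES instants spread across one refresh interval."""
    dt = (1.0 / REFRESH_HZ) / SUBSAMPLES
    samples = [render_instant(SPEED_PX_PER_SEC * (t0 + i * dt))
               for i in range(SUBSAMPLES)]
    return np.mean(samples, axis=0)

# Without blur, each refresh lights one isolated pixel 10 px from the last.
# With blur, each refresh covers the whole 10 px gap at reduced intensity:
frame = render_blurred(t0=0.0)
print(np.nonzero(frame)[0])  # pixels 0..9 lit, each at 1/10 intensity
```

Note the intensity tradeoff in the last line: spreading the light across the 10-pixel gap dims each pixel by the subsample count, which ties back to the nits discussion above.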