Can algorithms be used to approximate the shorter-persistence image your eye samples?

Ask about motion blur reduction in gaming monitors. Includes ULMB (Ultra Low Motion Blur), NVIDIA LightBoost, ASUS ELMB, BenQ/Zowie DyAc, Turbo240, ToastyX Strobelight, etc.
jman54
Posts: 10
Joined: 20 Aug 2023, 00:27

Can algorithms be used to approximate the shorter-persistence image your eye samples?

Post by jman54 » 02 Mar 2024, 15:16

Correct me if I'm wrong, but higher persistence looks blurrier because the eye keeps moving while each static frame stays lit, so the frame gets sampled at slightly offset retinal positions and those samples blur together. With that in mind, take a super-high-refresh display: instead of just reducing persistence, what if you showed a series of images whose average equals what a 1 ms persistence image looks like? In the simplest case that might just be a single frame with black frames everywhere else, but I think once refresh rates get high enough, you could flash colors fast enough for the eye to fuse them into one image. How fast would that need to be?

If we get displays fast enough to do that, we could use algorithms to predict which colors to show in each sub-frame to approximate what the eye would see from a 60 Hz, 1 ms persistence video. It probably won't be accurate enough to truly match 1 ms, but it might be able to make 60 Hz look like some higher framerate. That way we don't need super GPUs and we don't have to take a brightness hit.

I hear a lot of talk about how OLEDs have potential for super high refresh rates, and this might be a use case for that.
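For anyone who wants to play with the blur mechanism being described, here is a toy simulation, purely my own sketch with made-up numbers (16 px/frame tracking speed, 60 Hz frame period), not anything from a real display pipeline. It models the eye sweeping across a static frame while it stays lit, averages the frame over the gaze positions visited during the lit interval, and compares full sample-and-hold persistence (~16.7 ms) against 1 ms persistence:

```python
import numpy as np

# Toy model of eye-tracking motion blur (all parameters are assumptions,
# not measurements). The eye tracks an object moving `speed` pixels per
# frame; while a static frame stays lit for `persistence_ms`, the gaze
# keeps moving, smearing the frame across the retina. Perceived blur
# width is roughly speed * (persistence_ms / frame_ms) pixels.

def perceived_profile(frame, speed, persistence_ms, frame_ms, steps=200):
    """Average the static frame over gaze offsets during the lit interval."""
    x = np.arange(len(frame), dtype=float)
    acc = np.zeros(len(frame))
    for i in range(steps):
        # fraction of the frame period elapsed within the persistence window
        t = (i / steps) * (persistence_ms / frame_ms)
        acc += np.interp(x + speed * t, x, frame)  # frame seen at shifted gaze
    return acc / steps

def blur_width(profile):
    """Pixels over which the edge rises from 10% to 90% intensity."""
    return int(np.searchsorted(profile, 0.9) - np.searchsorted(profile, 0.1))

edge = np.where(np.arange(200) < 100, 0.0, 1.0)  # sharp 0 -> 1 edge

full    = perceived_profile(edge, speed=16, persistence_ms=16.7, frame_ms=16.7)
strobed = perceived_profile(edge, speed=16, persistence_ms=1.0,  frame_ms=16.7)

print("sample-and-hold blur width:", blur_width(full), "px")
print("1 ms persistence blur width:", blur_width(strobed), "px")
```

With these assumed numbers, the sample-and-hold edge smears over roughly an order of magnitude more pixels than the 1 ms case, which is the gap any algorithmic sub-frame trick would have to close.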
