Another software strobing implementation on a 3D engine

Talk to software developers and aspiring geeks. Programming tips. Improve motion fluidity. Reduce input lag. Come Present() yourself!
fuzun
Posts: 17
Joined: 20 Jan 2018, 21:18

Re: Another software strobing implementation on a 3D engine

Post by fuzun » 22 Jan 2018, 23:35

lexlazootin wrote:As a HL speedrunner it does work very well! We can't use Xash for runs but it's still a very neat experience. It's a shame that the FPS cap in Xash is rubbish (can't hold a steady fps) because with this and G-Sync you could set it to any value and get a strobed experience.

I also found it neat that you can adjust the OD setting on your monitor to get a brighter image ;) and it will quickly show you how some colours get overshot or undershot.

I wish Valve would open source GoldSrc someday soon to add features like this directly. :(
Thank you for the feedback. There is a forum called twhl.info, in case you do not know it. We discuss open source GoldSrc and a bunch of other things there.

A developer there is actually developing a GoldSrc clone/remake much like Xash3D, along with an HLSDK remake (hlenhanced). It will be good because Valve does not like Xash3D and thinks it violates their copyrights (I do not know whether that is true), and Xash3D runs on a really old code base.
Last edited by fuzun on 23 Jan 2018, 00:31, edited 1 time in total.

fuzun
Posts: 17
Joined: 20 Jan 2018, 21:18

Re: Another software strobing implementation on a 3D engine

Post by fuzun » 22 Jan 2018, 23:48

BattleAxeVR wrote:What I'm curious about, in order to make motion look as realistic as possible, and given the ultra-high native framerates of older games with simpler engines/graphics, is doing accumulation-type motion blur (blending 300 fps down to 60 or 120 fps), then applying BFI to those frames.

So you present a framerate-appropriate blurred image, then use low persistence to get exactly the right amount of blur, no more and no less.

There is definitely, IMO, such a thing as not enough blur, i.e. when you see objects skipping instead of being smoothly smeared across the image sensor or, ultimately, the display. Really, the enemy of clarity is low framerate; no matter how much you reduce persistence further, you're just exposing a lack of motion blur and things start to look jumpy. Two different types of temporal aliasing, basically. So pick your poison: stuttering or over-blurring. There's a crossover point where you don't get extra persistence-based blur from the display added on top of the image being presented, but that image is captured or rendered at a certain framerate, and objects should *not* teleport from one position to another -- that's physically impossible/inappropriate.

So really, what I'm saying is, motion blur isn't the enemy per se, it's excessive motion blur that's the problem. To my mind, a framerate appropriate amount of blur, without anything extra added by the display, but also without missing anything by naively rendering a delta function snapshot in time, is the best.

Critiques / thoughts? I could be totally wrong about this. I've seen jerky motion even at 90 fps in VR experiences, when there isn't any synthetic motion blur added and objects cross your field of vision closely (say a projectile or a person travelling close to you, such that the number of pixels per frame the object traverses is much higher than the framerate, so it looks like it teleports from one static moment in time to another, discretely). This doesn't look right to me and takes me out of the moment. IRL, objects smear in your perception when they cross your view quickly, in terms of high degrees-per-second movement.
Yes, there must be a certain amount of blur, but it needs to be natural and caused by a high frame rate, not by software.

I thought about blending, but I am not sure it would make the situation any better in terms of motion blur. And would this blending not create the soap opera effect? I am not fond of that effect; I rather dislike it.

There is a project called SmoothVideo Project (SVP) that does something similar to what you are describing. If the input media is 24 fps, it interpolates it up to the monitor's refresh rate, e.g. 24 Hz -> 144 Hz.

It does this by mathematically interpolating between frames and inserting artificially generated ones.

Result: not good (in my opinion). Films already have that captured "natural" motion blur, which we do not have in the computer ecosystem. The only thing that should be done is synchronizing the input media's frame rate with the monitor's refresh rate. Certain renderers like madVR can do that, so I think there is no need to pay for this software.
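For illustration, the accumulation-style blending described above boils down to averaging several rendered sub-frames into one displayed frame. A minimal Python sketch (hypothetical and simplified; a real engine would do this on the GPU with an accumulation buffer):

```python
# Accumulation-style motion blur: render N sub-frames per displayed frame
# and average them, so fast-moving content leaves a smeared trail instead
# of teleporting between positions. Toy 1D "frames" of pixel intensities.

def accumulate(subframes):
    """Average a list of equally weighted sub-frames (lists of intensities)."""
    n = len(subframes)
    width = len(subframes[0])
    return [sum(f[i] for f in subframes) / n for i in range(width)]

# A bright pixel moving one position per sub-frame: five 300fps sub-frames
# blended into one 60fps frame spread the energy across the motion path.
subframes = [[255 if i == t else 0 for i in range(5)] for t in range(5)]
blurred = accumulate(subframes)  # each pixel ends up at 255/5 = 51.0
```

Whether this looks good after BFI is exactly the trade-off debated above: the averaging removes the stepping, but it also re-adds blur that low persistence was supposed to remove.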
Last edited by fuzun on 23 Jan 2018, 00:33, edited 2 times in total.

fuzun
Posts: 17
Joined: 20 Jan 2018, 21:18

Re: Another software strobing implementation on a 3D engine

Post by fuzun » 23 Jan 2018, 00:03

Chief Blur Buster wrote: So how do you fix the stroboscopic effect AND motion blur simultaneously? Easy: >1000fps at 1000Hz. Okay, yes, that's hard.

The higher the frame rate, the less GPU motion blur effect you need to fix any remaining stroboscopic effects. So a higher Hz display at a higher framerate. Now you're figuring it out :)

That way, you only need to add 1ms worth of GPU motion blur effect to fix any remaining minor phantom array effect. That's extremely minor (to most people) -- even LightBoost has 2ms of persistence. In other words, 1000fps@1000Hz plus a 1ms GPU motion blur effect is 100% phantom-array-free with less motion blur than LightBoost/ULMB at default pulse width settings!

That's why blurless sample-and-hold is so superior to strobing -- no phantom array effect AND no motion blur. For more information, please read Blur Busters Law: The Amazing Journey To Future 1000Hz Displays. Strobe-free ULMB will arrive during the 2020s-2030s; I have seen prototypes (including tantalizing glimpses via our www.blurbusters.com/480hz tests).

NVIDIA also tested a 1700Hz zero-latency display. ViewPixx also now sells a 1440Hz experimental DLP projector for vision scientists. 1000Hz is already in the lab today, and I have seen similar true-1000Hz technology privately too! For the first time, I now realistically expect 1000Hz refresh rates to be commercially available by roughly the mid-2020s in top-end gaming monitors.

It's strobeless ULMB bliss. And without strobing, there's no lag. Blur reduction without strobing. And you can fix the Phantom Array, easy peasy at these stratospheric refresh rates.

Some people:
- Hate motion blur
- Or hate phantom array
- Or hate stutters
- Or hate stroboscopic effects
- Or hate strobing lag
- Or hate strobing color degradation
- Or hate brightness loss of strobing
- Or hate VSYNC OFF tearing
- Or hate VSYNC ON lag
- Or hate flicker
- Or multiple of the above
- Or all the above
Etc.

The only way to solve ALL above hates simultaneously is blur-free sample-and-hold (aka 1000fps at 1000Hz). That's why it's such a Great Thing that a form of strobeless/lagless ULMB will finally come within our lifetimes. It may take ten years but it's coming.

Resolutions have gone retina, but refresh rates have not yet. The Hz race will keep going for a long time.
Once upon a time, an HDTV cost 5 figures. Now a cheap 1080p60 costs only $100.
Yes, 1000Hz may be expensive at first.
But by year 2050, it's possible 1000Hz will be cheap, too.

1000Hz: It's not an "if" anymore, but "when".
While I think >1000Hz is the future, there are several things that need to be considered.

There is a "weakest link" rule: a computer is only as fast as its slowest component.

Most of the components in the production chain need to be compatible with 1000Hz before mass production can start.

The transition will not just be painful for GPUs. It will also be painful for peripherals like mice!

Image

User avatar
Chief Blur Buster
Site Admin
Posts: 6485
Joined: 05 Dec 2013, 15:44

Re: Another software strobing implementation on a 3D engine

Post by Chief Blur Buster » 23 Jan 2018, 02:17

Yep. It's only as good as the weak link.
  • Display -- 1000Hz as discussed before
  • GPU -- Future "Frame Rate Amplification Technologies" for reliable 1000fps
  • Mouse -- Perfect poll sync between Mouse+Display (genlock) -or- alternatively even higher polling Hz (2KHz, 4KHz, 8KHz) if unsynchronized with display Hz.
  • etc.
P.S. Great changes, I need to look at them more closely over the coming days! Also, you don't need a database of monitors unless you want to know which ones are more prone than others (to determine how frequently to switch phases). Some may go by a 4-frame pattern, but it will always be an even number. So the algorithm you wrote is already a catch-all for all kinds of inversions: since the number of voltage polarities is two (a law of physics), the 1-frame phase-swap always counterbalances, no matter what the inversion sequence is. It eventually rotates through all of the possible inversion offsets (shifting through by 1, if it's a sequence of more than 2 inversion patterns). So it's already a catch-all as far as I can tell. The main variable is phase-swap frequency, since some displays burn in much faster than others, and some don't at all. That said, I can see phase-switch flicker if it's done once a second, so phase-switching should ideally be done less frequently. Once a minute works fine for some LCD displays. Phase-switch frequency should be configurable.
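As a rough illustration of why the 1-frame phase-swap is a catch-all: assume polarity alternates every refresh and software BFI blacks out every other refresh. A toy Python model (my own sketch, not the actual engine code):

```python
# Toy model of LCD inversion vs. software BFI. Visible (non-black) frames
# normally land on the same voltage polarity every time, accumulating a DC
# imbalance; repeating one frame (a "phase swap") flips which polarity the
# visible frames land on, so the imbalance cancels out over time.

def polarity_imbalance(num_frames, swap_interval=None):
    """Net (+) minus (-) polarity count over the visible frames only."""
    balance = 0
    phase = 0  # incremented by each 1-frame phase swap
    for frame in range(num_frames):
        if swap_interval and frame > 0 and frame % swap_interval == 0:
            phase += 1  # phase swap shifts the BFI pattern by one refresh
        visible = (frame + phase) % 2 == 0      # Normal/Black alternation
        polarity = 1 if frame % 2 == 0 else -1  # inversion alternates +/-
        if visible:
            balance += polarity
    return balance

# Without swapping, the imbalance grows without bound; with periodic swaps
# it cancels: polarity_imbalance(1000) -> 500,
# polarity_imbalance(1000, swap_interval=100) -> 0.
```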
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

User avatar
Chief Blur Buster
Site Admin
Posts: 6485
Joined: 05 Dec 2013, 15:44

Re: Another software strobing implementation on a 3D engine

Post by Chief Blur Buster » 23 Jan 2018, 02:29

BattleAxeVR wrote:What I'm curious about, in order to make motion look as realistic as possible, and given the ultra-high native framerates of older games with simpler engines/graphics, is doing accumulation-type motion blur (blending 300 fps down to 60 or 120 fps), then applying BFI to those frames.

So you present a framerate-appropriate blurred image, then use low persistence to get exactly the right amount of blur, no more and no less.
I do not recommend adding any GPU blur on top of BFI. GPU blur can be useful and BFI can be useful, but combining the two is not recommended, as they counteract each other.

The "one frametime of blur" rule exists to eliminate stroboscopic effects completely. But one frametime of blur completely undoes the blur reduction of BFI!

You can't successfully do both simultaneously (eliminate the stroboscopic effect with nearly no blur penalty) unless you use stratospheric framerates at stratospheric refresh rates.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

User avatar
Chief Blur Buster
Site Admin
Posts: 6485
Joined: 05 Dec 2013, 15:44

Re: Another software strobing implementation on a 3D engine

Post by Chief Blur Buster » 23 Jan 2018, 02:31

fuzun wrote:There is a project called SmoothVideo Project (SVP) that does something similar to what you are describing. If the input media is 24 fps, it interpolates it up to the monitor's refresh rate, e.g. 24 Hz -> 144 Hz.

It does this by mathematically interpolating between frames and inserting artificially generated ones.

Result: not good (in my opinion). Films already have that captured "natural" motion blur, which we do not have in the computer ecosystem. The only thing that should be done is synchronizing the input media's frame rate with the monitor's refresh rate. Certain renderers like madVR can do that, so I think there is no need to pay for this software.
Try SmoothVideo on GoPro videos shot at a 1/1000sec shutter (bright-daylight 60fps videos). There is nearly no camera blur in those videos. The SmoothVideo results are much more amazing on a ULMB/LightBoost display -- CRT-clarity video.

Also, you don't need SmoothVideo for 60fps video + 120Hz + software BFI -- just present a black frame every other refresh cycle.
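That suggestion amounts to a very small present loop. A hypothetical Python-style sketch, where present() and wait_for_vsync() stand in for the renderer's real calls (e.g. Present() / SwapBuffers):

```python
# Software BFI for 60fps content on a 120Hz display: each source frame is
# shown for one refresh, followed by one black refresh, halving persistence.

def software_bfi(frames, present, wait_for_vsync):
    BLACK = None  # stand-in for an all-black frame
    for frame in frames:
        present(frame)   # refresh cycle 1: the real frame
        wait_for_vsync()
        present(BLACK)   # refresh cycle 2: black frame
        wait_for_vsync()
```

Feeding two frames through this loop presents them as frame, black, frame, black -- the Normal - Black sequence discussed elsewhere in the thread.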
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

Haste
Posts: 304
Joined: 22 Dec 2013, 09:03

Re: Another software strobing implementation on a 3D engine

Post by Haste » 23 Jan 2018, 12:17

Chief Blur Buster wrote:
BattleAxeVR wrote:What I'm curious about, in order to make motion look as realistic as possible, and given the ultra-high native framerates of older games with simpler engines/graphics, is doing accumulation-type motion blur (blending 300 fps down to 60 or 120 fps), then applying BFI to those frames.

So you present a framerate-appropriate blurred image, then use low persistence to get exactly the right amount of blur, no more and no less.
I do not recommend adding any GPU blur on top of BFI. GPU blur can be useful and BFI can be useful, but combining the two is not recommended, as they counteract each other.

The "one frametime of blur" rule exists to eliminate stroboscopic effects completely. But one frametime of blur completely undoes the blur reduction of BFI!

You can't successfully do both simultaneously (eliminate the stroboscopic effect with nearly no blur penalty) unless you use stratospheric framerates at stratospheric refresh rates.
What if you used Tobii eye tracking to identify which elements are being tracked and exclude them from the GPU blur? (That way you target the blur specifically to the areas where stroboscopic stepping occurs.)
Monitor: Asus ROG Swift PG279Q

User avatar
Chief Blur Buster
Site Admin
Posts: 6485
Joined: 05 Dec 2013, 15:44

Re: Another software strobing implementation on a 3D engine

Post by Chief Blur Buster » 23 Jan 2018, 14:20

Haste wrote:What if you used Tobii eye tracking to identify which elements are being tracked and exclude them from the GPU blur? (That way you target the blur specifically to the areas where stroboscopic stepping occurs.)
It's simple (to my human brain) -- being the wizard of persistence also means understanding motion blur vectors.

It's much easier if you do it globally.

Eye-tracked GPU blur effects do help! But any eye-tracked GPU blur compensation must be global. The algorithm for eye-tracked GPU motion blur is to globally modify the blur vector (whole screen, not locally) with the eye-tracked vector between two refresh cycles.

This is very useful for modifying/enabling/disabling GPU blur effects to eliminate the stroboscopic effect of objects moving at speeds different from the eye-tracking speed.

But eye-tracked blur vector compensation must be done essentially laglessly (less than 10 milliseconds) to be perceptually lagless -- so that eye-track direction changes and flit-back-and-forths feel instant. Otherwise, at tens of milliseconds, the lagged blur effects become pukeworthy and nauseating, and you should not bother with eye-tracked motion blur compensation.

When this happens, tolerating phantom arrays is the lesser of evils at this stage.

Now, basically, if you want to do eye-track blur compensation globally for the whole frame, the formula becomes very simple:

1. The default GPU blur vector is the panning vector.
Slow panning gets less blur, fast panning gets more motion blur. Perfect counterbalance of the phantom array always requires motion blur from the old position (previous frame) to the new position (current frame).

2. Subtract the eye-tracked vector from the GPU blur vector
(position of the eye for the previous frame to position of the eye for the current frame).

3. Execute the final blur vector on the entire frame. Simple linear motion blur.

4. Optional: Add a little blur anti-aliasing with a 10% fade-in/fade-out at the ends of the linear motion blur. That way, the linear blur trail doesn't have a sudden start/stop. Basically, temporally anti-alias the motion blur trail between adjacent frames by blurring (gentle fade-ins/fade-outs) the beginnings/endings of one frametime's worth of linear motion blur.

This is mathematically very simple, for a global blur-compensation.
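Steps (1)-(3) can be sketched in a couple of lines of Python (an illustration only; in practice the resulting vector would drive a linear motion-blur shader over the whole frame):

```python
# Global eye-tracked blur compensation: start from the panning vector
# (step 1) and subtract the eye-movement vector between the previous and
# current frame (step 2); the result is applied to the entire frame (step 3).

def final_blur_vector(pan_vector, eye_vector):
    """Blur vector (dx, dy) in pixels per frame after eye compensation."""
    return (pan_vector[0] - eye_vector[0], pan_vector[1] - eye_vector[1])

# Eye tracking the pan exactly -> the tracked content gets zero blur:
assert final_blur_vector((12, 0), (12, 0)) == (0, 0)
# Fixed gaze while the scene pans -> full one-frametime blur on the frame:
assert final_blur_vector((12, 0), (0, 0)) == (12, 0)
```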

This fixes:
(A) The phantom array effect of moving objects during fixed gaze on static objects.
(B) No motion blur for the specific object you're eye-tracking (the frame-duration blur vector minus the eye-tracking vector results in zero blur for that particular moving object you're staring at).
(C) Works fine for multi-direction motion in the same frame. As soon as you eye-track a different object, its blur vector is zeroed successfully.

But does not fix:
(D) Strobed mode only: Phantom array effect of stationary background objects (and slower moving objects) while eye-tracking moving objects. Global blur vectors, used in the typical simple way, cannot reproduce the "sharp-ball-on-blurred-background" clear-moving-object blurred-background phenomenon during eye-tracking on strobed displays.

Fixing (D) simultaneously is much more complicated for a GPU. Imagine panning a camera on a moving object (a flying ball) -- you get a sharp ball and a blurred background. Try doing exactly the same thing on a GPU -- that's what you also need to do to fix situation (D) for a strobed display: multiple blur vectors (instead of a global blur vector), depending on the motion speed of each pixel relative to the eye tracking. Basically a per-pixel version of the (1)/(2)/(3) formula. It's possible, just much more difficult -- more processing. Start small before you go big.

I would love to know of current implementations of eye-tracked motion-blur-vector compensation algorithms. I have not experimented much with actual practice at the moment. Many GPU blur effects for games have been global-frame blur rather than per-pixel blur, so you can't reproduce the "sharp-ball-on-a-blurred-background" phenomenon of camera-tracking a ball flying through midair. But I wonder if some of them now do per-pixel GPU blur effects (different blur vector calculations depending on relative object movement speeds). If that's actually being done in practice in any video game, excellent! It makes things much simpler: you skip (1) and just focus on (2)/(3) in the formula -- basically a blur vector modifier from the eye-track vector between two frames.

By doing all of this, eye-tracked motion blur compensation allows you to get away with low refresh rates (via strobing) and be largely blur-free + phantom-array-free, whether your gaze is fixed or fast-tracking -- at least for single-person applications, if (1)/(2)/(3)/(4) are all done simultaneously (skip (4) if using ultra-high-Hz sample-and-hold; persistence makes the human eye do the background motion blurring for you). So you may be able to avoid the need for ultra-high-Hz (1000fps@1000Hz) for virtual reality applications and still get fairly close to passing the reality test. But only if the eye-tracked motion blur compensation is done essentially laglessly.

Otherwise, lagged blur effects become noticeable. Even an instantaneous momentary defocussing of text during a rapid start or stop of eye movement can still be noticeable. (A single-frame sudden momentary-defocussing experiment is visible at 144Hz -- 6.9ms!! -- so that puts some rather severe demands on passing the Holodeck Turing Test using eye-tracked motion-blur-vector compensation.)

Oculus found you ideally needed about 8ms. But if you're going 180-degree retina (e.g. future 16K wraparound VR) and trying to do eye-tracked motion blur compensation, my super-rough guesstimate is that you'll probably need on the order of 1ms latency for eye-tracked motion blur vector compensation to pass the Holodeck Turing Test. The exact latency number for retina VR isn't known yet, but it's more aggressive than for 1200p or 1440p VR. In my opinion, 1000fps @ 1000Hz (at, say, 8ms chain lag) is a much easier goal for humankind to achieve than 1ms full-chain input-to-photons latency (head-tracking/eye-tracking/etc.). Ideally you want it all simultaneously (zero latency, retina persistence, retina frame rates, retina resolutions)...

Don't quote me on this guessed number (1ms needed for perceptually lagless eye-tracked motion blur compensation algorithms on sub-1ms-MPRT retina VR displays such as 8K and up), but it boils down to the vicious-cycle effect: the higher the resolution and the wider the FOV, the more imperfections (versus real life) reveal themselves above the human detectability noisefloor via side effects like persistence, stroboscopic effects, motion blur, lag, etc.

The vanishing point of diminishing returns is pushed far further down the curve the closer you make a display to real-life fidelity (spatially, temporally, gamut, lag, etc.).

I'm super-glad there are multiple horses in this race, to solve all the VR problems (motion blur, stroboscopic effect, rendering power). And the eye-tracked-blur-compensating horse may very well win the (single-person) Holodeck Turing Test race. Obviously, it all falls apart if multiple people are eye-tracking the same display, but that's not an issue for VR headsets. ;)

One might be able to come up with a subdisplay-in-display technology (like Varjo does, with successful seam elimination between displays), with a smaller retina subdisplay physically moving in perfect lagless sync with eye tracking, combined with foveated rendering, combined with eye-tracked motion blur vector compensation (GPU blurring applied independently on both of the two displays, concurrently, depending on each subdisplay's panel motion relative to fovea focus). And do it on a processing chain with small enough lag. And fix the long-standing virtual-reality focal-depth issues (so you can actually comfortably focus/unfocus on virtual objects closer to/further from you). If all of this is done flawlessly simultaneously, it may theoretically eliminate the need for 16K VR and 10,000fps@10,000Hz by combining a lot of tricks/bandaids together all at once -- but it's going to be an insane engineering feat to combine all those solutions perceptually seamlessly and laglessly too.

Whatever happens, successfully passing the Holodeck Turing Test within my lifetime (by ~2050?) would be a display dream of many come true: not being able to tell apart transparent ski goggles from a VR headset in an A/B blind test. Guessing that the real world is VR, and VR is the real world -- incorrectly -- a statistical 50%-50% guess ratio -- because the VR becomes so good it's indistinguishable from real life. This may actually take until the end of the 21st century to universally achieve (except for stare-at-the-sun situations, which we never want -- ha!). But some tests may pass the A/B blind test (VR is reality, reality is VR) within a mere couple of decades. Just not for all graphics/worlds.

Eye-tracked motion-blur-vector compensation is definitely a useful item in the toolbox for battling the problem of being simultaneously blur-free + phantom-array-free on a low-Hertz strobed display.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

fuzun
Posts: 17
Joined: 20 Jan 2018, 21:18

Re: Another software strobing implementation on a 3D engine

Post by fuzun » 24 Jan 2018, 23:52

I have been trying to devise a mathematical function that expresses how bad the phase balance is, in terms of perceived brightness, when phase swapping is active.

Image

I am calling it "Badness" for now because it does not mean anything otherwise :)
  • n -> Percentage of ((+) Phase Normal - (+) Phase Black) out of the total (+) phase frame count. [Can be 0 to 100]
  • l -> Percentage of ((-) Phase Normal - (-) Phase Black) out of the total (-) phase frame count. [Can be 0 to 100]
Examples:
  • n = 0 , l = 0 -> Badness: -infinity. This is the best it can get.
  • n = 100 , l = 100 -> Badness: +infinity. This is the worst it can get.
  • n = 50 , l = 50 -> Badness: 0. Neither bad nor good.
  • n = 1 , l = 99 -> Badness: 0. I think it should not be 0 here, but I am not sure. It seems to me that n = 50 , l = 50 should score lower than this.
  • n = 2 , l = 98 -> Badness: 0. Same as above.
The function needs to be fixed and calibrated, but for now it shows something. I do not know if this is a coincidence.

n = 0 , l = (-) 100 means r_strobe 3, which means the frame sequence is Normal - Black - Black - Black.

When r_strobe = 3 and phase swapping is active (r_strobe_swapinterval 1), n and l become:

n = (-) 50 , l = (-) 50 at infinity.

Normal - Black - Black - Black means the actual brightness gets lowered to 1/4 -> 25%.

If a square-root function is used to estimate perceived brightness, one can see that perception is lowered to 50%.

Using the above function, the output will be 0.00, which points to the middle point: 50%.
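For what it's worth, one function consistent with all the bulleted examples above is a log-ratio expression. This is a guessed reconstruction on my part (the posted formula image may not render for everyone), restricted to n, l in [0, 100]; the signed cases above suggest the real function also handles negative n and l, which this sketch does not:

```python
import math

def badness(n, l):
    """Guessed Badness(n, l) for n, l in [0, 100]; matches the listed examples."""
    if n == 0 or l == 0:
        return -math.inf           # best case: a phase is fully balanced
    if n == 100 or l == 100:
        return math.inf            # worst case: a phase is fully unbalanced
    return math.log10((n * l) / ((100 - n) * (100 - l)))

# badness(50, 50) == 0.0, badness(1, 99) == 0.0, badness(2, 98) == 0.0,
# reproducing the (possibly undesirable) symmetry noted in the examples:
# the (+) surplus and (-) deficit cancel out in the ratio.
```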

---

Also I have tested the phase swapping algorithm more.
  • r_strobe 1 (Normal - Black - Normal - Black ...) + r_strobe_swapinterval 0 ;

    Image

    n will be 100% and l will be -100% at infinity. (They reach 95% within a few seconds.)
    Badness becomes +infinity (the worst it can get).
  • r_strobe 1 (Normal - Black - Normal - Black ...) + r_strobe_swapinterval 1 ;

    Image Image Image

    n and l will oscillate between -10% and 10%. They will be 0% at infinity, while they reach 1-2% after a few minutes.
    Badness oscillates between -3 and 3 (-infinity when n and l are both 0).
P.S. Sorry for the yellow cast on the images. I have been working for hours and it is 8 AM here now, so the f.lux app is doing its best :)

User avatar
Chief Blur Buster
Site Admin
Posts: 6485
Joined: 05 Dec 2013, 15:44

Re: Another software strobing implementation on a 3D engine

Post by Chief Blur Buster » 25 Jan 2018, 11:16

Fantastic! Personally I don't bother -- I just use a fixed-interval timer -- but a badness trigger is interesting!

You could even do it on a per-pixel basis. But that is waaaaay overkill.

Remember, for defeating the LCD voltage balancing (inversion) built into monitors: some monitors never burn in, others burn in slowly, and some burn in fast. So you need a configurable modifier for phase-switch frequency (e.g. 2x more frequent, 2x less frequent, etc.). Even in the worst-case scenario, you don't need to phase-swap more often than once a minute (so slow down your badness algorithm for those burn-in-resistant monitors). Because of this, I never bothered doing an advanced guess of phase-swap necessity. But it could help make phase swaps less frequent!

Also, clever algorithms (e.g. two 50%-opacity frames instead of two 100%-opacity frames) can eliminate the "bright" flicker of phase swaps done via the repeat-frame technique. You can also, for example, mathematically calculate opacities: if you need to flickerlessly convert frame-dark-dark into frame-dark-dark-dark (a 33% lengthening of the cycle), you instead do frame-33%frame-dark-dark. Basically a dark duplicate frame at one-third brightness.

The goal is to make sure the same number of photons hits the human eyeballs per BFI cycle, even when the BFI cycle varies (due to a phase swap). That reduces the flicker of random BFI shortenings/lengthenings.
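The photon-conserving opacity falls out of simple arithmetic. Assuming one full-brightness frame per BFI cycle, lengthening an N-frame cycle to N+1 frames calls for an inserted frame at opacity 1/N (a sketch of the math, not production code):

```python
# Keep the average light per refresh constant across a lengthened BFI cycle:
#   original cycle: 1 bright frame out of N    -> duty 1/N
#   lengthened:     (1 + a) bright out of N+1  -> duty (1 + a) / (N + 1)
# Setting the two duties equal gives a = 1/N.

def swap_frame_opacity(cycle_len):
    """Opacity of the inserted duplicate frame when a cycle grows by one frame."""
    return 1.0 / cycle_len

# frame-dark-dark (N = 3) lengthened to four frames: insert a ~33% frame.
# Check: (1 + 1/3) / 4 == 1/3, the same duty cycle as before the swap.
```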

In other words: instead of trying to reduce the frequency of phase swaps, why not simply make phase swaps invisible? Simpler math and easier for other code maintainers.

It maintains the same average number of photons hitting the eyeballs. This algorithm, IMHO, is more useful than complicated "badness" algorithms (useful as they are). Remember: keep it easy for other programmers.

Or keep both (badness algorithm AND phaseswap flicker reduction).

Also, try to detect missed frames (e.g. a double vsync) and stay in phase with the monitor, not the game frame. Otherwise you're out of sync, and burn-in occurs while you think it is not. This is done via extrapolation from a microsecond clock to guess the correct phase when resyncing after accidentally missed frames (double VSYNCs, system freezes, etc.). Basically: freeze interval divided by the known rolling-average vsync interval (remember: when updating a vsync-interval estimate, discard the outlier vsync intervals to avoid mucking up the estimate). Dividing the freeze interval (take a microsecond timestamp right after the next blocking pageflip call returns) by the known vsync interval yields a nearly integer number, giving you a very good guess of which voltage-inversion phase the monitor is currently in. Declare the number good if it is near an integer (within 0.1); ignore the result if it doesn't look like an integer. If good: an odd number means you are in sync; an even number means you must update your phase flag (you don't need to phase-swap at that instant). Other formulas may be better, but this is the easiest lowest-common-denominator way of guessing the monitor's inversion polarity after a long system freeze.
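That guess can be written as a small helper (a hypothetical sketch; freeze_us is the measured gap between pageflip returns, and vsync_us is the rolling-average vsync interval):

```python
# Resync the inversion-phase guess after missed frames or a system freeze:
# divide the freeze duration by the vsync interval; if the quotient is near
# an integer, an odd count means polarity is unchanged, and an even count
# means the phase flag must be flipped. Non-integer results are discarded.

def guess_phase_after_freeze(freeze_us, vsync_us, tolerance=0.1):
    """Return 1 (still in phase), -1 (flip phase flag), or 0 (inconclusive)."""
    ratio = freeze_us / vsync_us
    nearest = round(ratio)
    if nearest < 1 or abs(ratio - nearest) > tolerance:
        return 0  # doesn't look like a whole number of refreshes: ignore
    return 1 if nearest % 2 == 1 else -1

# At 144Hz (vsync interval ~6944 microseconds): a 3-vsync freeze keeps the
# phase, a 2-vsync freeze flips it, and an ambiguous duration is ignored.
```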

For the burn-in prevention "phase swaps", my personal opinion on the order of priority is:

- Phase swap on a fixed integer trigger (user configurable) - maybe even just a simple count of unbalanced frames
- Phase-swap flicker elimination algorithm - a simple opacity (alpha) formula
- Strong phase-resync logic after system freezes
- More advanced phase-swap algorithms (e.g. more complex badness factors)

I am very glad you have confirmed this solves the software BFI burn-in problem.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

Post Reply