ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Trip » 23 Apr 2014, 16:45

I have been browsing the forums a bit, and I see that the combination of LightBoost and G-SYNC is not possible at the same time. But what I find strange is that Mark says it would not be possible via software (http://forums.blurbusters.com/viewtopic.php?f=5&t=564). It seems rather odd, because I see applications made by Blur Busters that can control the strobe rate on BENQ monitors, and it is obvious that you know the frame render time on a computer. It seems to me that a combination of those two could easily create a strobe rate that changes dynamically with the frame rate. The formula for the required strobe length adjustment doesn't seem so difficult either:
(Max refresh / fps) * starting strobe time = new strobe time.
Where starting strobe time is the strobe time used when setting up brightness / blur reduction.
Also, when the frame render time exceeds a certain threshold, say 11 ms (~91 fps), it would simply stop strobing so it will not cause annoying flicker.
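Here's a rough sketch of what I mean (Python, purely for illustration; the function name, the 144 Hz maximum, and the 11 ms cutoff are just placeholders I made up, not anything a real monitor exposes):

Code:

# Rough sketch of the idea; all names and numbers are made up for illustration only.
MAX_REFRESH_HZ = 144.0      # panel's maximum refresh rate
BASE_STROBE_MS = 1.0        # strobe length tuned at max refresh (brightness / blur tradeoff)
FLICKER_CUTOFF_MS = 11.0    # ~91 fps; below this, stop strobing to avoid visible flicker

def strobe_length_ms(frame_time_ms):
    """Return the adjusted strobe length for this frame, or None to disable strobing."""
    if frame_time_ms > FLICKER_CUTOFF_MS:
        return None                        # frame rate too low: fall back to a steady backlight
    fps = 1000.0 / frame_time_ms
    # (max refresh / fps) * starting strobe time = new strobe time
    return (MAX_REFRESH_HZ / fps) * BASE_STROBE_MS

print(strobe_length_ms(1000.0 / 144))   # 1.0 ms at 144 fps
print(strobe_length_ms(1000.0 / 100))   # 1.44 ms at 100 fps
print(strobe_length_ms(12.0))           # None -> strobing disabled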
Is the software just not fast enough, or does it not have enough strobe rate intervals to do this dynamically? I do not own this monitor and therefore can't test it. But it seems NVIDIA can easily manipulate these timings in an efficient way, right? So a software solution seems really feasible, but I am no engineer. I would really like to tinker with this myself when the first OEM G-SYNC monitor goes on sale. Any thoughts on this?

Small edit to the formula for clarity.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Chief Blur Buster » 23 Apr 2014, 18:16

Good questions. That said, I observe there are two confusions at play here.
Trip wrote:I have been browsing the forums a bit, and I see that the combination of LightBoost and G-SYNC is not possible at the same time. But what I find strange is that Mark says it would not be possible via software (http://forums.blurbusters.com/viewtopic.php?f=5&t=564). It seems rather odd, because I see applications made by Blur Busters that can control the strobe rate on BENQ monitors, and it is obvious that you know the frame render time on a computer.
My statement about it not being possible in software does not mean it's not possible in hardware. Blur Busters Strobe Utility is a software utility that reconfigures the timings of a hardware-based strobe backlight. The BENQ monitor still controls the strobe backlight; the Utility just reconfigures some hardware registers on the monitor, much like changing picture settings. The new V2 firmware makes the hardware strobe backlight much more programmable and flexible, allowing external third-party utilities to tweak it.

Also, when I say "not possible", I really mean "not possible without visible issues". You could try it anyway, but you would not get seamless strobing (e.g. an annoying flicker issue).
Trip wrote:It seems to me that a combination of those two could easily create a strobe rate that changes dynamically with the frame rate. The formula for the required strobe length adjustment doesn't seem so difficult either.
Strobe length needs to be ultra precise, so having software control exactly when the light comes on and when it goes off is extremely hard. With 1 millisecond of persistence, a 10 microsecond error in the strobe length causes a 1% brightness variance (1/100th of a 1 millisecond flash equals 10 microseconds). So if the strobe length varies due to software inaccuracies, there is noticeable flickering (it actually happened in my testing). If half of the strobes are randomly 1% longer and half of the strobes are randomly 1% shorter, you've got a candlelight-flicker effect (1% increase/decrease in average brightness). Do you have the real-time programming knowledge to time a software-driven strobe backlight (driven via a logic or GPIO line directly from software) with less than 10 microseconds of error?
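To make the arithmetic concrete, here's the back-of-the-envelope version of the 1 ms / 10 µs example above (a tiny Python sketch, nothing more):

Code:

# Back-of-envelope: how much a timing error changes the brightness of one strobe.
persistence_us = 1000.0     # intended strobe length: 1 ms = 1000 microseconds
timing_error_us = 10.0      # software timing jitter of 10 microseconds

brightness_error = timing_error_us / persistence_us
print(f"{brightness_error:.1%}")   # 1.0% -> enough to see as candlelight-style flicker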
Trip wrote:(Max refresh / fps) * starting strobe time = new strobe time.
Where starting strobe time is the strobe time used when setting up brightness / blur reduction.
Also, when the frame render time exceeds a certain threshold, say 11 ms (~91 fps), it would simply stop strobing so it will not cause annoying flicker.
Are you aware of my document, Electronics Hacking: Creating a Strobe Backlight, and its existing section on variable refresh rates?
Trip wrote:Is the software just not fast enough
Timing precision. 10 microsecond variances create actual human-visible effects, based on tests. A 1% longer strobe length (10 microseconds added to a 1 millisecond flash) averages out to 1% more photons hitting the eyes over a time period (e.g. 1 second), meaning 1% brighter. Yes, it's hard to believe that microseconds lead to visual issues, but multiple strobes occur per second (100+) and all those microseconds add up. The math tells you that adding 1% to a 1 millisecond flash adds 10/1000ths of a millisecond, or 10 microseconds -- and that's 1% more photons being output. If you run a few strobes 10us longer, and then a few strobes 10us shorter, you'll see the brightness suddenly change. Continually do that randomly, and you see a candlelight-flicker effect in the backlight. Strobe length variation creates brightness undulation issues, so you must have extreme, ultra-high precision.

Only hardware can easily achieve such precision of strobe length control on the fly. Even so, that's not the chief issue. There are numerous issues at play that make it difficult to do in a human-vision-comfortable manner. The short answer is yes, it is possible with hardware. The key question is: "Is it possible to do variable-rate strobing WITHOUT human-visible effects?" The short answer is that it's difficult. There are human-visible effects from random-rate flicker even at high speeds, which needs creative on-the-fly ultrahigh-precision timing adjustments to keep the average number of photons hitting the eye relatively constant over a human's flicker fusion threshold. Even with hardware precision instead of software precision, there's still a high risk of human-visible effects. But I think it's possible, with a well-done hardware-based variable-rate strobe backlight operating in concert with GSYNC, to do it in a way that's practically seamless (at least to a majority of users, except the sensitive ones). But I don't see that being possible in software (at least in terms of letting software direct-drive the ON/OFF pulses).

We welcome Blur Busters tweakers to make mods of their monitors. There are some monitor modders in the Area 51 Forum, including cirthix, who might be quite interested in hacking a GSYNC monitor to do some strobing. But you'll likely need an external circuit to do so, or some monitor firmware / FPGA programming skills. And I'd bet NVIDIA has already tried to do variable-rate strobing, but found some quality issues (e.g. flicker).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

Re: ULMB (LightBoost) & G-SYNC?

Post by Trip » 24 Apr 2014, 11:37

Chief Blur Buster wrote: Strobe length needs to be ultra precise, so having software control exactly when the light comes on and when it goes off is extremely hard. With 1 millisecond of persistence, a 10 microsecond error in the strobe length causes a 1% brightness variance (1/100th of a 1 millisecond flash equals 10 microseconds). So if the strobe length varies due to software inaccuracies, there is noticeable flickering (it actually happened in my testing). If half of the strobes are randomly 1% longer and half of the strobes are randomly 1% shorter, you've got a candlelight-flicker effect (1% increase/decrease in average brightness). Do you have the real-time programming knowledge to time a software-driven strobe backlight (driven via a logic or GPIO line directly from software) with less than 10 microseconds of error?
I didn't think software would be that inaccurate. But now that you mention it, NVIDIA must have had a reason G-SYNC does not work with pre-Kepler cards, since the timing issue must have played a role there too. I really hope they can use that cue from G-SYNC for the strobing as well, so people do not have to buy a Maxwell card in case they ever get dynamic strobing working. But indeed, I don't think either NVIDIA or a very smart modder could fix the issue with software or monitor firmware; most likely the monitor will require some extra hardware to change the strobe rate in real time. And no, sadly I do not have any experience at all with low-level programming, though I really want to start with it some day, and this seems so interesting to do. I do have a few years of programming behind me now, but all of it was relatively high level and none of it was related to drivers or other low-level hardware.
Chief Blur Buster wrote: Are you aware of my document, Electronics Hacking: Creating a Strobe Backlight, and its existing section on variable refresh rates?
Yes, I did read it, but that involved a combination of a constant backlight or PWM together with a strobed backlight, right? What I meant was to use only the strobe, just with longer and longer strobes as the refresh rate goes down. Maybe that would worsen the situation even more, but I thought it would only become apparent below 85 Hz (hence the 11 ms limit I mentioned), since that was the threshold most people considered flicker-free for a CRT monitor. I assumed the brightness would stay constant, since the strobes get longer but less frequent, so the amount of light stays the same. Although your idea looks a lot better, it does involve using two different lighting techniques: both PWM and strobe, or both constant and strobe.
So I thought that would be a lot more complex, and maybe impossible on current monitors, and since I thought it was possible via software, I just came up with the solution mentioned above -- strobed lighting is at least available, as is either PWM or PWM-free lighting when strobing is switched off.
Chief Blur Buster wrote: We welcome Blur Busters tweakers to make mods of their monitors. There are some monitor modders in the Area 51 Forum, including cirthix, who might be quite interested in hacking a GSYNC monitor to do some strobing. But you'll likely need an external circuit to do so, or some monitor firmware / FPGA programming skills. And I'd bet NVIDIA has already tried to do variable-rate strobing, but found some quality issues (e.g. flicker).
Great, man. I will follow the scene, but not actively yet. Like I said, if an affordable (college student :P) monitor with G-SYNC and LightBoost comes along, I will jump on it. Great job you are doing with this forum/site. I think NVIDIA is really interested in finally solving the display issues people have had since the end of the CRT age (motion blur), and even improving on it with G-SYNC. I love this technology way more than the 3D hype that started a few years back.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: ULMB (LightBoost) & G-SYNC?

Post by Chief Blur Buster » 24 Apr 2014, 12:11

Trip wrote:Yes, I did read it, but that involved a combination of a constant backlight or PWM together with a strobed backlight, right? What I meant was to use only the strobe, just with longer and longer strobes as the refresh rate goes down.
This may be one of the techniques that works, but it's subject to exactly the same issues/rules. For example, you would need to lower the strobe height (amplitude) as you increase the strobe length, to maintain average brightness, because longer strobes lead to an average-brighter image. The goal is to maintain a constant average number of photons over a time period (e.g. 1/60 sec or 1/75 sec or 1/85 sec) that roughly represents a human's flicker fusion threshold. Once high-frequency random flicker is fast enough to allow constant averages over these time periods, the random flicker of random frame rates ceases to be noticeable. That's the key -- maintaining constant average light output over flicker-fusion-threshold timescales, regardless of how it's accomplished.
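As a rough illustration of that "constant pulse area" rule (a Python sketch only; "amplitude" here stands in for whatever LED current/voltage control the hardware actually provides, which varies per design):

Code:

# Keep (amplitude x strobe width) constant so the eye-averaged brightness doesn't change.
BASE_WIDTH_MS = 1.0      # strobe width tuned at the highest refresh rate
BASE_AMPLITUDE = 1.0     # normalized LED drive level at that width

def amplitude_for_width(width_ms):
    """Lower the strobe amplitude as the strobe gets wider, to hold average light output."""
    return BASE_AMPLITUDE * (BASE_WIDTH_MS / width_ms)

for width in (1.0, 1.5, 2.0):
    print(width, amplitude_for_width(width))   # 1.0, ~0.67, 0.5 -> same area per pulse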
Trip wrote:Maybe that would worsen the situation even more, but I thought it would only become apparent below 85 Hz (hence the 11 ms limit I mentioned).
It's not as simple as that. Suddenly stopping/starting the strobe would be worse than gradually blending between strobed and strobe-free operation as the frame rate ramps up/down. But changing strobe width on the fly could work, as long as there's ultra-precise control of voltage. The problem is that voltage control sometimes creates minor color differences, since LEDs emit a slightly different spectrum when undervolted versus overvolted.

Using the strobe-width method requires voltage control (e.g. lowering the strobe peak while widening the strobe, to maintain average light output), which has the above-mentioned problem; or you simply chop the pulse with a high-frequency PWM (HiPWM) algorithm, making sure the rise/fall times are fast enough not to distort average-light-output calculations.

The HiPWM method of blending (already diagrammed), assuming fast rise/falls, avoids this color-change-by-voltage-modulation problem, and is far simpler electronically. You can also use a simple accumulator to estimate how much light was output in the trailing 1/75th of a second, for example, and use that information (along with current frame timings) to calculate subsequent pulses in order to maintain constant light output.
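A very rough sketch of that trailing-window accumulator idea (Python, purely conceptual; real hardware would do this in fixed-point logic at microsecond granularity, and the class/variable names here are invented):

Code:

from collections import deque

WINDOW_MS = 1000.0 / 75.0      # trailing window roughly at a flicker fusion threshold
TARGET_LIGHT = 1.0             # desired total light output (pulse area) per window

class LightBudget:
    """Track light emitted in the trailing window and size the next pulse to stay on target."""
    def __init__(self):
        self.pulses = deque()          # (timestamp_ms, pulse_area)

    def next_pulse_area(self, now_ms):
        # Drop pulses that have fallen out of the trailing window.
        while self.pulses and now_ms - self.pulses[0][0] > WINDOW_MS:
            self.pulses.popleft()
        emitted = sum(area for _, area in self.pulses)
        area = max(0.0, TARGET_LIGHT - emitted)   # emit only what the budget still allows
        self.pulses.append((now_ms, area))
        return area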

To top it off, strobe-width manipulation is very problematic for strobe crosstalk (the double-image effect caused by incomplete GtG leaking between refreshes, which happens when the strobe width is too big to fit in the VBLANK between refreshes, leaving no time for GtG to finish). Rather than doing that, it's better to blend in continuous-output light, so you've got simple motion blur gradually returning, rather than blur-free motion transitioning to an ugly double-image effect (from longer strobes) at medium frame rates, then transitioning to PWM-free at low frame rates.

Strobe crosstalk becomes worse on the BENQ Z-Series in Blur Busters Strobe Utility at larger persistence settings, because the strobe flash is longer than the pause between refreshes (blanking interval), so a single backlight pulse illuminates two refreshes, creating a double-image effect, similar to: http://www.blurbusters.com/wp-content/u ... pdated.jpg
Real-world 1ms GtG creates strobe crosstalk about 1/8th screen height on an 8ms refresh cycle (1/120sec refresh)
Real-world 2ms GtG creates strobe crosstalk about 1/4th screen height on an 8ms refresh cycle (1/120sec refresh)
Real-world 4ms GtG creates strobe crosstalk about half screen height on an 8ms refresh cycle (1/120sec refresh)
Now, the most common method of eliminating strobe crosstalk on a strobe backlight is to time the strobe flash between refreshes. However, that requires (1) a long enough pause between refresh cycles, and (2) enough time left over to flash the backlight.
If you can't squeeze both GtG and the strobe pulse between refreshes, you get strobe crosstalk. That's why people use the large blanking interval tricks (e.g. Vertical Total 1350 or Vertical Total 1500) with the BENQ Z-Series to reduce strobe crosstalk.
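The rule of thumb above, written out as a quick estimate (Python; approximate, since the exact extent depends on the panel and overdrive tuning):

Code:

# Estimate of strobe crosstalk extent: roughly (real-world GtG) / (refresh cycle)
# of the screen height shows a double image when GtG can't hide inside the blanking interval.
def crosstalk_fraction(gtg_ms, refresh_hz):
    refresh_ms = 1000.0 / refresh_hz
    return min(1.0, gtg_ms / refresh_ms)

for gtg in (1, 2, 4):
    print(gtg, "ms GtG @ 120Hz ->", crosstalk_fraction(gtg, 120))   # ~1/8, ~1/4, ~1/2 of screen height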
LightBoost already does this automatically via an accelerated LCD panel scanout (e.g. refreshing the panel top-to-bottom in ~1/200 sec during 120 Hz operation), creating the long-enough pause between refreshes to let the pixel momentum of GtG transitions finish into a completely refreshed image (after the electronic refresh has finished) before finally flashing the backlight on a fully refreshed frame. You can see high speed videos of LCD refreshing, which are simply high speed videos of http://www.testufo.com/flicker, to study the GtG (pixel transition) behavior of an LCD.

What the above knowledge means is that gradual lengthening of strobe width is, in other words, a very bad idea from a visual-artifacts perspective. When the strobe flash becomes longer than the VBLANK (the pause between refreshes), you get strobe crosstalk (double-image effects that look like 30fps@60Hz or 60fps@120Hz), caused by the same backlight strobe illuminating the final part of the previous refresh and the first part of the next refresh cycle. So you must keep the motion-blur-eliminating strobe short enough to fit essentially entirely between refresh cycles, to eliminate strobe crosstalk artifacts. Instead, you really want to gradually blend from strobing to PWM-free.

Thus, for algorithms that combine GSYNC and strobing:
BAD - Strobewidth manipulation towards PWM-free: Crosstalk problems will occur at middle framerates
BAD - Sudden transition between PWM-free and strobing: Sudden changes in motion clarity as framerate fluctuates
OK - Strobing gradually blending to PWM-free: Clear motion gradually fading into motion blurred motion as framerate falls
OK - Strobing gradually blending to HiPWM: Clear motion gradually fading into motion blurred motion as framerate falls

Ownership of a BENQ Z-Series V2 monitor is very educational for researchers wanting to study the visual phenomena of a variable-persistence monitor. It lets you customize strobe timings (e.g. make the strobe earlier, later, bigger, smaller), and even lets you use ToastyX CRU or NVIDIA Custom Resolutions to create mammoth blanking intervals (e.g. a 1500-scanline refresh cycle for 1080p), producing a faster scan followed by a huge pause between refreshes, which is beneficial for letting GtG settle before strobing. At a Vertical Total of 1500 scanlines per refresh during 1920x1080 (including blanking interval), 1080/1500ths of a refresh cycle is spent refreshing, and 420/1500ths of a refresh cycle is spent pausing between refreshes. Some monitors, such as the Z-Series, scan out to the panel in real time from the input, so they work well with Vertical Total tweaks, while LightBoost does the fast scanout in hardware (it slightly buffers the refresh coming from the cable, then does a faster-than-cable scanout to the LCD).
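To put numbers on that Vertical Total example, here's the timing split as plain arithmetic (a Python sketch of the signal timing only; 1125 is used as a typical stock vertical total for 1080p, which is an assumption, not a measurement of any particular monitor):

Code:

# How a Vertical Total tweak splits each refresh cycle into "scanning" vs "pausing" time.
def scan_and_pause_ms(active_lines, vertical_total, refresh_hz):
    refresh_ms = 1000.0 / refresh_hz
    scan_ms = refresh_ms * active_lines / vertical_total     # time spent sweeping the panel
    pause_ms = refresh_ms - scan_ms                          # blanking: time for GtG to settle
    return scan_ms, pause_ms

print(scan_and_pause_ms(1080, 1125, 120))   # stock-ish timing: ~8.0 ms scan, ~0.33 ms pause
print(scan_and_pause_ms(1080, 1500, 120))   # VT1500: ~6.0 ms scan, ~2.33 ms pause for GtG + strobe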

This is why I left the strobe-width idea out of my document, because of my understanding of the visual artifacts of various strobe-related phenomena. Either way, there really are many methods that could be used to pull off variable-rate strobing successfully; however, it will require extremely high precision.
Trip wrote:I thought it was possible via software, so I just came up with the solution mentioned above.
Software can be used to guide the hardware in a general manner (e.g. setting thresholds, the preferred curve for blending between PWM-free and PWM, etc.). But the hardware embedded in the display needs to be responsible for the precision of the timing, the voltage, and the pulses of the backlight. External control of strobe ON / strobe OFF just isn't precise enough.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Trip » 24 Apr 2014, 13:08

Great stuff; I can really see you love this subject. And I also see how little I actually know about the technology, especially the VBLANK stuff and it not being long enough to allow these increasing strobe times. I actually had no clue about those color shifts; I assumed an LED would stay correctly within its spectrum at all times. Are there no LEDs that can do this, or are those just too expensive?
Chief Blur Buster wrote: Ownership of a BENQ Z-Series V2 monitor is very educational for researchers wanting to study the visual phenomena of a variable-persistence monitor. It lets you customize strobe timings (e.g. make the strobe earlier, later, bigger, smaller), and even lets you use ToastyX CRU or NVIDIA Custom Resolutions to create mammoth blanking intervals (e.g. a 1500-scanline refresh cycle for 1080p) to create the huge pauses beneficial for letting GtG settle between refreshes before strobing. Some monitors, such as the Z-Series, scan in real time, so they work well with Vertical Total tweaks, while LightBoost does the fast scanout in hardware (it slightly buffers the refresh coming from the cable, then does a faster-than-cable scanout to the LCD).
Does this also mean LCD response time is effectively improved, since all the pixels are activated earlier? Can a lot of monitors do this, by the way, or is it specific to just this monitor? I have tweaked these settings on my monitor and it would not let me do this without an out-of-bounds error.
It seems like a very interesting monitor, but I also want to try out G-SYNC, and since those are just around the corner I will probably wait a little longer. I hope NVIDIA will also allow the freedom BENQ gives you with these monitors, since it is their scaler board.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Chief Blur Buster » 24 Apr 2014, 13:42

Trip wrote:Does this also mean LCD response time is effectively improved, since all the pixels are activated earlier?
No difference to GtG.
By asking this question, it sounds like you are still a bit confused.
So, let me re-explain.

The large Vertical Totals are tweaks that simply help the "sweep refresh" finish quicker and pause longer between refreshes. See the sweep effect in the high speed video of an LCD refreshing below; that's the LCD being scanned.

[embedded video: high speed video of an LCD refreshing]


The goal in strobe backlight science is to let the sweep completely finish (in total darkness) before flashing the backlight and beginning the next refresh cycle. If you don't do the sweep fast enough (e.g. the slow refresh scan of a 2007 LCD), you begin refreshing the top edge of the next refresh cycle before the GtG momentum at the bottom edge of the previous refresh has finished.

5ms LCD at 60Hz, during regular 1/60sec scanspeed per individual refresh
Top edge of screen: refreshed at T+0ms. Pixels become fully visible at T+5ms
Bottom edge of screen: refreshed at T+1/60sec. Pixels become fully visible at T+(1/60sec)+5ms
See the problem? It takes longer than a refresh cycle for the pixels at the bottom edge to become visible. Your GtG transitions are perpetually ghosting between refreshes, like in the 2007 LCD high speed video.

5ms LCD at 60Hz, but at accelerated 1/120sec scanspeed per individual refresh, while still running at only 60Hz
Top edge of screen: refreshed at T+0ms. Pixels become fully visible at T+5ms
Bottom edge of screen: refreshed at T+1/120sec. Pixels become fully visible at T+(1/120sec)+5ms = less than 1/60sec total
Now you've finished your GtG transition in less than 1/60 sec. You've finally created enough time to do a clean strobe backlight flash, without creating strobe crosstalk caused by incomplete GtG.
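The same arithmetic as a small sketch (Python; GtG is treated as a single number here purely for illustration, since real transitions vary per color pair):

Code:

# When does the bottom edge of the screen finish its GtG, relative to the refresh period?
def bottom_edge_settled_ms(scanout_ms, gtg_ms):
    """The bottom row starts transitioning at the end of the sweep and needs gtg_ms more."""
    return scanout_ms + gtg_ms

refresh_ms = 1000.0 / 60
print(bottom_edge_settled_ms(1000.0 / 60, 5.0), "vs", refresh_ms)    # ~21.7 ms > 16.7 ms: never settles in time
print(bottom_edge_settled_ms(1000.0 / 120, 5.0), "vs", refresh_ms)   # ~13.3 ms < 16.7 ms: room left to strobe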

Now, to let the pixels settle and finish their GtG momentum (pixel transitions still occurring after the LCD panel is electronically refreshed), you want to finish refreshing the whole panel earlier in the refresh cycle, then strobe the backlight, before beginning the next refresh cycle.
Trip wrote:Can a lot of monitors do this, by the way, or is it specific to just this monitor? I have tweaked these settings on my monitor and it would not let me do this without an out-of-bounds error.
There is nearly no benefit to doing the Vertical Total tweak on most monitors, except on strobe-backlight monitors. It does not affect pixel response, only the timing of refreshing the top edge versus the bottom edge. When I say "faster", I mean a "faster sweep" or "faster scan" -- the individual pixels don't have faster GtG.

For those who don't understand vertical blanking intervals: that's the pause between refreshes, between finishing the bottom of the previous refresh and starting the top of the next refresh. It's also called "VSYNC", or the "vertical synchronization interval", and analog TV users will recognize it as the black bar during a rolling picture (the VHOLD adjustment on an old 1970s analog TV -- that black bar between images is the vertical blanking interval). It is still used today in video signals, even on DVI/DisplayPort.

LightBoost does it on the monitor-hardware side (it creates the fast scanout and large interval within the monitor) rather than via the dotclock on the GPU side (a faster scanout in the video signal). For information on the monitor-side fast-scanout reverse-engineering discovery, see StrobeMaster's Display Corner page (he's a researcher who modded a LightBoost display for research purposes). For information on the GPU-side fast-scanout tricks, read up on the large Vertical Total tricks (1350 or 1500) in the BENQ forum on Blur Busters. This information is really only useful for strobe backlights, and won't benefit non-strobe-backlight use; it simply affects the pixel refreshing sequence (since pixels are refreshed one at a time, top to bottom), speeding up the sequence while leaving GtG unaffected. This is all an academic matter, useful for understanding strobe backlight operation.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Trip » 24 Apr 2014, 14:19

I think I might have formulated it a bit wrongly by saying the response time will be improved. What I meant is that the pixel is activated earlier, meaning it has more time to transition between colors, right? But if this is the case, what about IPS displays, which in general have slow response times? By using this technique you can pretty much eliminate the problem, since it all happens on the dark side of the moon, right? The fully scanned-out frame is only shown after the strobe has fired, eliminating the problem of slow response times (GtG). I don't see any high-Hz displays which are also IPS; I guess the biggest reason is the slow pixels. Wouldn't this technique help to actually mask the slow pixels?
I can also see some potential in those overclockable 120 Hz 2560x1440 displays: since the sweep happens faster, the pixels have more time to transition properly before updating, resulting in properly showing their colors instead of just transitioning toward another subsequent color. These displays are running out of spec, and I can't imagine the pixels aren't affected.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Chief Blur Buster » 24 Apr 2014, 20:08

Trip wrote:I think I might have formulated it a bit wrongly by saying the response time will be improved. What I meant is that the pixel is activated earlier, meaning it has more time to transition between colors, right? But if this is the case, what about IPS displays, which in general have slow response times? By using this technique you can pretty much eliminate the problem, since it all happens on the dark side of the moon, right? The fully scanned-out frame is only shown after the strobe has fired, eliminating the problem of slow response times (GtG). I don't see any high-Hz displays which are also IPS; I guess the biggest reason is the slow pixels. Wouldn't this technique help to actually mask the slow pixels?
Yes, it will help. But the problem is that IPS displays are often rated 8-10ms+, which can be more like 40ms real-world if the IPS display has no response-time-acceleration algorithms added. Long real-world GtG creates multiple-refresh ghosting:

[image: multiple-refresh ghosting example]

When worst-case IPS transitions are much longer than a refresh cycle, strobing isn't going to help as noticeably; the strobing simply "chops up" continuous motion blur into an annoying chopped motion blur (a multiple-image effect). Strobe crosstalk can actually extend over more than 2 refresh cycles too (super-bad strobe crosstalk looks similar to 30fps @ 120Hz -- test http://www.testufo.com on a 100Hz+ LightBoost or 100Hz+ CRT display to understand how ugly 30fps can look compared to 100fps-120fps). The count of multiple images from a low frame rate is (refresh rate / frame rate), so 30fps@120Hz strobed creates a quadruple-image effect. For strobe crosstalk, the count of multiple images is approximately (average real-world GtG time divided by refresh cycle time), so (25ms real-world GtG / 8ms refresh cycle) = about a 3-image effect when you try to strobe through that, though the crosstalk from older refresh cycles is progressively fainter.
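The two image-count rules of thumb above as a quick calculation (Python; approximate, since crosstalk fades over successive refreshes rather than repeating at full strength):

Code:

# Multiple-image counts: low frame rates on a strobed display, and slow GtG strobed through.
def images_from_low_framerate(refresh_hz, framerate):
    return refresh_hz / framerate                 # e.g. 30 fps @ 120 Hz strobed -> 4 images

def images_from_slow_gtg(real_gtg_ms, refresh_hz):
    return real_gtg_ms / (1000.0 / refresh_hz)    # e.g. 25 ms GtG on an ~8.3 ms cycle -> ~3 images

print(images_from_low_framerate(120, 30))   # 4.0
print(images_from_slow_gtg(25, 120))        # ~3.0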

The strobe crosstalk effect of long GtG is easily observed on a very cold EIZO FG2421 strobed monitor (stored in a cold room, then suddenly turned on with Turbo240 enabled while looking at http://www.testufo.com/ghosting), because worst-case VA-panel pixel transitions are several times slower when cold. It has a nasty several-image ghosting effect during strobing with framerate-refreshrate synchronized motion, until it warms up and GtG progressively speeds up from 20ms -> 15ms -> 10ms -> 7ms -> 4ms as the LCD panel glass warms and pixel transitions get faster and faster. The count of multiple-image strobe crosstalk effects goes down and down, until pixel transitions are pretty much fast enough to squeeze between refresh cycles, and you only see a CRT-clarity image. You're probably familiar with cold LCDs refreshing more slowly -- ever forgotten your smartphone in a cold car in the middle of winter? Or an old Timex LCD watch in a car -- you've noticed that the digits fade slowly when cold. The same thing happens to computer monitors (e.g. a cold room in winter, then turned on); they ghost noticeably more in motion tests. It's not as noticeable with TN, but far more noticeable with VA. It is quite educational to watch strobe crosstalk behavior change as the GtG changes dynamically (speeding up as the panel warms up).

Also, the QNIX stops noticeably improving beyond 96Hz, meaning 96Hz and 120Hz motion look almost identical. This is because you've hit the GtG transition speed limit and the LCD is now simply ghosting over multiple refresh cycles. So driving an 8ms LCD (~10ms+ real-world) at 500Hz will not look better than driving it at 96Hz, because the perpetual ghosting creates a hard "guaranteed minimum persistence" effect. Once GtG goes faster than a refresh cycle, your motion blur is no longer bottlenecked by GtG, but rather by the persistence of the continuously shining backlight (just as 1ms vs 2ms GtG LCDs make virtually no human-visible difference, while 1ms versus 2ms of strobe persistence makes a bigger human-visible difference, a.k.a. LightBoost 10% vs 100%) -- as explained by the educational motion animations at http://www.testufo.com/eyetracking and http://www.testufo.com/blackframes
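For readers who want the persistence math spelled out, here's the usual rule of thumb (roughly 1 pixel of motion blur per 1 ms of persistence at 1000 pixels/sec of motion) as a quick sketch; the 1.4 ms LightBoost 10% figure is an approximation, not a spec:

Code:

# Motion blur from persistence: blur (pixels) ~= persistence (sec) * motion speed (pixels/sec).
def blur_px(persistence_ms, speed_px_per_sec):
    return (persistence_ms / 1000.0) * speed_px_per_sec

print(blur_px(8.3, 1000))   # non-strobed 120 Hz sample-and-hold: ~8 px of blur at 1000 px/s
print(blur_px(1.4, 1000))   # ~1.4 ms strobe persistence (roughly LightBoost 10%): ~1.4 px of blur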

An 8ms IPS could probably be strobed at reasonable quality at ~60-75Hz, if you design some REALLY good response-time acceleration and add a Y-axis component (to accelerate transitions more aggressively at the bottom edge of the screen than at the top edge). Pixels at the top edge have had more time to settle (finish their GtG momentum) than pixels at the bottom edge before the strobe, so it's hugely beneficial to run the response-time-acceleration algorithm differently for the bottom edge of the screen, like LightBoost does.
Trip wrote:I can also see some potential in those overclockable 120 Hz 2560x1440 displays: since the sweep happens faster, the pixels have more time to transition properly before updating, resulting in properly showing their colors instead of just transitioning toward another subsequent color. These displays are running out of spec, and I can't imagine the pixels aren't affected.
A faster sweep will generally degrade color, because you're essentially running the sweep at a higher refresh rate. A 1/120sec accelerated scanout at 60Hz has exactly the same color quality as a regular 1/120sec scanout at 120Hz: the scanout is exactly the same speed, so there's the same amount of time to inject voltage into each pixel. Faster scanouts mean less time to drive voltage into each pixel. So accelerating the scanout degrades colors, regardless of whether the next refresh cycle begins right away (e.g. 1/120sec scanout at 120Hz) or there's a pause before the next scanout (e.g. 1/120sec scanout done at 60Hz or 85Hz or 90Hz).

Slow 8ms IPS LCDs, with often 10-20ms+ real-world GtG, do not easily meet this criterion, except with highly optimized response-time acceleration and at extremely low refresh rates / strobe rates (uncomfortably flickery). You would have to hack an enhanced response-time-acceleration algorithm onto a QNIX LCD, and heavily overclock the scanout (e.g. a scanout speed equivalent to overclocking to 120-130Hz) while refreshing at only 60-75Hz, in order to get anything resembling reasonable-quality strobing on a hacked QNIX QX2710. You could program GPU-based, software-based response time acceleration (similar to "ATI Overdrive" in the past), or you could modify the monitor firmware. Then you'd electronically modify the backlight of the QNIX QX2710 to strobe in sync with the refresh cycles (with a bit of strobe length / strobe phase adjustment -- these could be analog potentiometers, easily screwdriver-calibrated while watching a motion test similar to the one used by Blur Busters Strobe Utility for the BENQ Z-Series). Then, and only then, would you get roughly acceptable (more strobe crosstalk, but possibly acceptable) near-zero-motion-blur on a home-brew modded IPS-based QNIX QX2710. You could try to forgo the custom response time acceleration, but that would greatly worsen strobe crosstalk for a lot of worst-case color pairs, and lower the Hz at which you can strobe with low crosstalk -- possibly to 60Hz or less. And when crosstalk-free strobing only occurs below 60Hz, you've pretty much got an unstrobeable LCD panel. Right now, IPS monitor panels are not even officially 120Hz yet (only unofficially / overclocked), and even then they blur roughly the same at 96Hz and at 120Hz, since they already hit their motion blur limits at approximately ~96Hz with greatly diminished returns (only a tiny single-digit-percentage improvement in motion blur from jumping 96Hz -> 120Hz). This is unlike TN panels, whose much faster GtG is a tiny fraction of a refresh cycle; motion blur on newer TNs is thus bottlenecked by persistence instead of GtG (which is why 1ms vs 2ms vs 3ms GtG makes no noticeable visual difference -- note that milliseconds of GtG are not the same as milliseconds of persistence).

Faster scanout = higher dotclock = less time to refresh each individual pixel = poorer colors
That's why running at 144Hz often has poorer quality than 120Hz. It's also partially why LightBoost 120Hz has poorer colors than non-LightBoost 120Hz: LightBoost monitors internally accelerate the scanout (partially buffering, then doing an accelerated scanout, as explained on a researcher's webpage, StrobeMaster's Display Corner). The BENQ Z-Series do it differently, by being tweakable from the GPU side (via putting large blanking intervals in the signal output); it's also observed that colors degrade slightly when using Custom Resolution Utility (NVIDIA or ToastyX) Vertical Total 1350 and Vertical Total 1500 tweaks, due to the higher dotclock and faster scanout, although not nearly as much as LightBoost's color degradation.
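To see why the Vertical Total tweaks raise the dotclock, here's the standard pixel-clock arithmetic (Python sketch; 2080 is just an assumed horizontal total for a 1920-wide reduced-blanking mode, and real modelines will differ):

Code:

# Pixel clock (dotclock) = horizontal total x vertical total x refresh rate.
def dotclock_mhz(h_total, v_total, refresh_hz):
    return h_total * v_total * refresh_hz / 1e6

print(dotclock_mhz(2080, 1125, 120))   # ~281 MHz: typical 1080p@120 timing
print(dotclock_mhz(2080, 1500, 120))   # ~374 MHz: VT1500 -> faster scanout, higher dotclock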

TL;DR: Color quality is dependent on scanout speed (pixel dotclock), not necessarily the actual refresh rate (number of scanouts per second).

TL;DR #2: Good strobe backlights require fast LCDs with real-world GtG fast enough to fit within the pause between refreshes (a.k.a. an enlarged "blanking interval" big enough to fit GtG transitions).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Trip
Posts: 157
Joined: 23 Apr 2014, 15:44

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by Trip » 25 Apr 2014, 10:28

Ah, I get it, I think: the only pixels that really benefit from these larger blanking intervals are the bottom pixels. So at 144Hz speed the scanout is about 7ms, which is likely the limit for the BENQ, I assume. The bottom pixels then have an advantage of 3ms compared to a 100Hz (10ms) scanout, and the blanking interval ends 3ms later, which is when the strobe fires and the pixels become visible. I read through quite a few articles from the Display Corner site, and it looks like EIZO is also using a similar technique: scanning at 240Hz speed, then pausing until just before the end of the blanking interval, and then strobing the display. I see the EIZO monitor likely uses buffering to achieve this, as otherwise it cannot scan this fast since the bandwidth would not be sufficient; this is also mentioned in the article about the EIZO. Now it makes me wonder, though this is totally unrelated to response time: does the buffer swap on the GPU take place earlier if you use this technique, since technically the VBLANK happens earlier? In the case of the EIZO monitor this would give the GPU 4ms more time, since the VBLANK happens that much earlier than if it just scanned at 120Hz speed with a shorter VBLANK. Does this result in more GPU performance, since it can start rendering the next frame earlier? Just wondering.

flood
Posts: 929
Joined: 21 Dec 2013, 01:25

Re: ULMB (LightBoost) & G-SYNC? [What are the technical challenges?]

Post by flood » 25 Apr 2014, 21:02

Is it possible to force ULMB + G-SYNC so that the strobe length stays constant?

This would not cause visual problems if the game's frame rate is exactly consistent. For example, CS:GO can easily be capped to exactly 120fps.
