Technical Limitations Preventing Ultra High Hz OLED screens


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by Chief Blur Buster » 16 Mar 2021, 19:38

theTDC wrote:
16 Mar 2021, 17:55
So basically if I'm understanding this correctly, the physics of sending the correct voltage down the wires is such that we can only do one pixel at a time for each "channel" we have, with some minor exceptions. However, the number of channels that we can have are theoretically arbitrary as long as we are fine with extremely wide bezels. Maybe some other (minor?) issues such as increased voltage/power consumption per second in setting the pixels.

Currently the scanout speed appears to be limited to a maximum of ~8ms for an entire 4k display when only using a single channel. Assuming this scales linearly, 2 channels would give us 240Hz, 4 channels 480Hz, and 16 channels could give us potentially 1,920 Hz. If that's all true, then what's holding back Ultra High Hz monitors, at least as a niche product, is not the scanout speed. For LCDs it's the GtG speed, and for OLEDs and LCDs it's the monitor cable, plus the arguable lack of content to justify such high Hz displays.
Also, gaming monitor LCDs are usually made on hand-me-down LCD fabs.

Like those old micrometer-league fabs still in use to fabricate simpler chips like microcontrollers, old LCD fabs that have spent their time in other industries are handed down to manufacture low-quantity LCD runs.

Unlike television models that sell millions of units, gaming monitors sell in the tens of thousands to hundreds of thousands of units for most models. Some short-run or one-off models only sell in the single-digit thousands (e.g. house brands such as the Computer Upgrade King gaming monitor, as well as group-buy style deals like the Massdrop Vast).

This makes it hard to spend the necessary R&D to create custom LCDs. We have to wait until technology filters down. Blur Busters is playing a very important role from a “Popular Science” style point of view for the gaming monitor industry — de-mystifying a lot of the confusing stuff and mythbusting 1000 Hz.

Five years ago, even many researchers were laughing boisterously about 1000 Hz, but Blur Busters has successfully stopped that laughter. The old greybeard engineers who grew up in the CRT days sometimes don’t understand sample-and-hold quite as well as Blur Busters does, nor how simple display motion blur physics can actually be when viewed from a new perspective — and how easy it is to experimentally verify.

Also, early high-Hz algorithms were very bad (e.g. Sony Motionflow 960), where interpolation and strobing were combined to emulate the motion clarity of a 960 fps 960 Hz display. But there were interpolation artifacts, soap opera effect defects (unnaturally smooth motion without retroactive fixes to source-based motion blur) AND strobe crosstalk. So a lot of engineers have been turned off by the high-Hz bleep. But today, LCD GtG and OLED performance is fast enough to make 240 Hz and 360 Hz work pretty well. Even though they push GtG limits — the benefits are pretty clear.

Blur Busters Law is very simple — 1ms of static pixel visibility time translates to 1 pixel of motion blur during 1000 pixels/sec panning motion. Anybody who owns a good 240 Hz display has already noticed how doubling frame rate halves motion blur — 60fps -> 120fps -> 240fps, and it is pretty easy to extrapolate continued benefits. And they are indeed there.
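To make that law concrete in code, here is a minimal sketch (the function name is mine; the relationship is exactly the one stated above): blur width in pixels is simply persistence multiplied by panning speed.

```python
def motion_blur_px(persistence_ms: float, panning_px_per_sec: float) -> float:
    """Blur Busters Law sketch: blur (px) = persistence (sec) * panning speed (px/sec).

    persistence_ms: how long each frame's pixels stay statically visible (MPRT);
                    roughly 1000/Hz on a non-strobed sample-and-hold display.
    """
    return (persistence_ms / 1000.0) * panning_px_per_sec

# Sample-and-hold examples at 1000 pixels/sec panning:
for hz in (60, 120, 240, 1000):
    print(f"{hz:>4} Hz -> {motion_blur_px(1000 / hz, 1000):.1f} px of motion blur")
# 60 Hz ~16.7 px, 120 Hz ~8.3 px, 240 Hz ~4.2 px, 1000 Hz 1.0 px
```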

Now, there needs to be a business case for a higher-performing display that can sell by the millions, to keep the cost at three figures instead of four. That’s a big challenge.

Change is afoot, however. VR researchers and others are finally publishing more papers showing that the point of diminishing returns doesn’t arrive for a while (i.e. retina refresh rates don’t occur until well beyond kilohertz refresh rates!), as you already read in Blur Busters Law: The Amazing Journey To Future 1000 Hz Displays — the very article that convinced ASUS of their road to 1000 Hz, too (one insider confirmed to me that I had a major role in this advocacy). Also, VR has boomed in recent months, with Quest 2 currently selling far faster than many game consoles did (Nintendo GameCube, Sega DreamCast, Nintendo Wii U, etc), with Facebook now earning almost $1B per quarter just from VR, and the science of VR headsets is helping advertise to other engineers the need for retina refresh rates (eventually).

Regardless of a person’s stance on Facebook, the recent work on the VR LCD is superlative in its motion performance. Quest 2 is the first really-good VR headset that is easier than an iPad to set up, can be used by any nursing home or non-computer-user, has far more comfortable 3D than polarized cinema glasses, and offers many non-dizzying apps such as simply sitting on a virtual beach — something that is wonderful during a pandemic, in a way that’s far better than a Viewmaster or Cardboard VR toy. The superlative, big-money-engineered performance put into certain VR displays will eventually filter down to larger displays. The Quest 2’s strobe-crosstalk-free LCD performs far better than the ASUS VG259QN 360 Hz LCD in terms of its perfect 100% GtG-hide with zero strobe crosstalk. Understandably, it only runs at 72 Hz or 90 Hz by default, but it is well-strobed at 0.3ms MPRT, which would require a 3333 fps at 3333 Hz sample-and-hold display to match.

Nonetheless, it is taking time. But Blur Busters is doing its goddamndest to raise high-Hz advocacy and education to speed up the refresh rate race. This is part of why TestUFO was built — a microphone drop one click away. I create new TestUFO tests to prove high-Hz doubters wrong.

Blur Busters’ website layout has been redesigned with a “Research” button, to make all this high-Hz education much easier to access worldwide. Interestingly, TestUFO now gets more traffic from the Asia region than from North America, with very clear Monday-to-Friday, 9am-to-5pm (their time zone) traffic peaks in the Analytics graphs. This suggests many companies are using TestUFO to test their displays. Rumor has it that Huawei may beat North American companies to the first 240Hz HDR local-dimming display. I suspect young, freshly trained scientists and engineers go straight to the math and facts rather than being skeptical about high-Hz progress; the skeptics grew up with old CRT-era behaviors where high Hz didn’t benefit as much.

The free information on Blur Busters, accessible worldwide, works to lift all boats, yet some countries seem to be more open-minded about better display science — much like newer Einstein thinking versus classical Newtonian thinking.

Fortunately, many Korean and Taiwanese companies have followed suit in understanding this science — like the Samsung television researchers who cited me in their research paper last year.

Give it ten years. 1000 Hz displays will become a reality earlier than some researchers thought. The 2030s are a long time to wait, but many thought it would never happen in their lifetimes. ASUS claims ~2025, but to be more conservative, "by the end of this decade" is what I tell people about 1000 Hz gaming monitors.

Currently, I believe 1000 Hz will hit LCDs before it hits OLEDs.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by theTDC » 16 Mar 2021, 22:34

I certainly am quite happy with your enthusiasm. As I've said before, a blurbusters gif/example is in many ways worth 1,000 words. In many cases, worth infinite words.

But if I'm reading between the lines here, what you're saying really is: "there are business reasons, some good, some bad, why we don't already have 1,000 Hz displays." My understanding is that the technology already exists for OLED displays to reach 1,000 Hz refresh rates, just by adding multiple channels to the scanout.

I guess what I'm saying is, if some multi-billion dollar company made it a priority, we could have had these years ago, right?

EDIT: Also, why LCDs before OLEDs? It seems easier to me for the OLEDs, so is this just economic reasons? GtG times really do appear to be a harsh limiter for LCD refresh speeds.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by Chief Blur Buster » 16 Mar 2021, 23:57

theTDC wrote:
16 Mar 2021, 22:34
As I've said before, a blurbusters gif/example is in many ways worth 1,000 words. In many cases, worth infinite words
TestUFO is also used by researchers, thanks to its relatively-trusted VSYNC accuracy -- it will immediately warn about stutter. (There are no HTML5 APIs to detect frame drops, so I've resorted to heuristics that work well in most browser engines.) This allows people to trust results from pursuit camera tests (www.testufo.com/ghosting), overclock tests (www.testufo.com/frameskipping), and other framepacing-critical tests.

It's easy to create a motion test. But it's very hard to create a trusted motion test.
theTDC wrote:
16 Mar 2021, 22:34
But if I'm reading the lines here what you're saying really is: "there are business reasons, some good, some bad, that are why we don't already have 1,000 Hz displays." My understanding is that the technology already exists for OLED displays to get 1,000 Hz refresh rates, just by adding multiple channels to the scanout.
Unfortunately, if you know a thing about circuit board fabrication, you know you have to add more layers to panel fabrication to allow sufficient wire-overs. You know how circuit board traces have to hop over each other without short circuiting?

Circuit boards and chips do that with additional lithography passes. So do panel screens. Screens aren't as easy to fabricate as a 128-layer SSD chip; you start having to throw away panels due to lithography failures (dead pixels, short-circuited pixel rows, etc). Even if the extra layers are only needed for the screen edges, they still mean additional lithography passes. Your computer's motherboard is often an 8-layer circuit board, so circuit traces can go above/under each other without short circuiting. For a lot of interleaving, you need a lot of circuit-trace-over-circuit-trace, and beyond a specific number of channels, that forces a horrendously complex increase in the number of lithography layers....

Now imagine you have to do this with a giant glass circuit board that's etched like one gigantic lithographed silicon chip (that's what an LCD or OLED screen is). Increasingly, a printing process can be used instead, since inkjetting an OLED is now fine enough to create transistorized flexible screens on plastic substrates, but most LCD/OLED screens (including gaming monitors) are still made using traditional lithography, the same process as computer chips. Yes, your computer monitor is essentially one massive glass computer chip -- metaphorically. It's a very complex glass sandwich, after all...

[Image: diagram of the LCD glass-sandwich layers / TFT backplane]

Now look at the TFT backplane. The backplane itself is typically a multi-layer transparent integrated circuit, created by lithographically layering semiconductors onto a large piece of glass. Screens are the biggest integrated circuits (aka chips) being manufactured today -- the closest parallel is that an LCD/OLED screen is one gigantic transparent DRAM chip. Albeit a write-only DRAM chip, using liquid crystals as its "memory storage" medium, with analog-like amounts of bit-flip (the amount of rotation of the LCD molecules for a specific subpixel).

It's borderline miraculous we're already doing 1080p 240Hz for less than a million dollars per unit. There are several orders of magnitude more computing power built into a 240 Hz desktop monitor (WITHOUT a computer connected) than in a 1980s Cray supercomputer. It is refreshing 1.5 billion subpixels per second, with complex math calculations per pixel (including precisely-calculated variable refresh rate overdrive algebra in a G-SYNC chip, and/or multilayer 4D or 5D lookup tables that are interpolated into each other using a formula that takes into account multiple variables such as original color, destination color, time interval between the current and last refresh cycle, time interval between the last refresh cycle and the one before it, etc), using FPGAs.... I easily estimate about 1 trillion math calculations per second being done in your 240 Hz NVIDIA G-SYNC Native display.
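For the curious, the back-of-envelope arithmetic behind those figures looks like this (a sketch; the ops-per-subpixel count is an assumed order-of-magnitude figure, not a measurement):

```python
# Back-of-envelope subpixel throughput for a 1080p 240 Hz panel (illustrative only).
width, height, hz = 1920, 1080, 240
subpixels_per_frame = width * height * 3            # R, G, B per pixel
subpixels_per_sec = subpixels_per_frame * hz         # ~1.49 billion per second
print(f"{subpixels_per_sec / 1e9:.2f} billion subpixel updates per second")

# If each subpixel needs on the order of hundreds of math operations (overdrive
# LUT lookups/interpolation, VRR timing terms, etc.), the total lands around a
# trillion operations per second.  The per-subpixel count below is an assumed
# order-of-magnitude figure, not a measurement.
ops_per_subpixel = 700
print(f"~{subpixels_per_sec * ops_per_subpixel / 1e12:.1f} trillion ops/sec")
```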

Now, I have a beef. 17x17 OD LUTs are too common.

Too many displays skimp with 17x17 OD LUTs instead of proper 256x256 OD LUTs. Researchers are stuck on old-fashioned research papers advocating 17x17 OD LUTs, when it's already been proven in the Blur Busters Laboratory that 256x256 OD LUTs are vastly superior for strobed operation and for panels with very strong localized GtG hotspots (e.g. VA panels). Most OD LUTs are also skimped to one layer deep (current and previous refresh), when a multi-refresh OD LUT is superior (the current refresh plus the previous few refresh cycles). Some more complex OD LUTs greatly improve this, and I've come up with some new overdrive formulas that can do a superior job. Also, some LCDs are already smart and use 256x256 (or better) OD LUTs, such as the Oculus Quest VR LCD. Facebook's Oculus VR department got with the program and abandoned old-fashioned interpolated 17x17 LUTs, which are limiting in the 1000 Hz future. AOC / MSTAR / TPV / etc need to upgrade their scaler chips to easily do 256x256 so I can do a better Blur Busters Approved strobe tuning job sometime.

Proper 256x256 OD LUTs + proper multi-refresh overdrive + 1024-level OD Gain + possibly Y-axis OD (stronger OD for the bottom of the screen, due to less time before the strobe flash) can practically double the Hz that I can strobe crosstalk-free at. But this is a hell of a lot of software engineering time -- the last 10% of overdrive improvement is 90% of the cost. Because I understand this, I can do all of it more cheaply than some vendors can do simple overdrive. So some screen manufacturers consult me on software solutions to improving LCD GtG... The low-hanging fruit isn't being picked in China yet because they don't (yet) understand the stuff I do, or are trained on 10-year-old research papers...
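To make the 17x17-versus-256x256 point concrete, here is a minimal sketch of what runtime interpolation of a sparse OD LUT involves (the table values are toy numbers, not any vendor's firmware); a full 256x256 table avoids this interpolation error entirely because every (previous, target) pair gets its own tuned entry:

```python
import numpy as np

# Hypothetical 17x17 overdrive LUT: rows = previous gray level, columns = target
# gray level (sampled every 16 codes from 0..256).  Entries are the overdriven
# code actually sent to the panel.  These values are toy numbers; real tables
# are tuned per panel.
grid = np.linspace(0, 256, 17)
lut_17 = np.add.outer(-0.2 * grid, 1.2 * grid).clip(0, 255)

def overdrive(prev: float, target: float, lut: np.ndarray) -> float:
    """Bilinearly interpolate the sparse 17x17 LUT at (prev, target)."""
    p = min(prev / 16.0, 15.999)    # fractional index into the 17-point grid
    t = min(target / 16.0, 15.999)
    p0, t0 = int(p), int(t)
    fp, ft = p - p0, t - t0
    return ((1 - fp) * (1 - ft) * lut[p0, t0] + fp * (1 - ft) * lut[p0 + 1, t0]
            + (1 - fp) * ft * lut[p0, t0 + 1] + fp * ft * lut[p0 + 1, t0 + 1])

# A full 256x256 LUT skips this step: every (prev, target) pair gets its own
# measured overdrive value instead of a blend of four neighbouring grid points.
print(overdrive(48, 200, lut_17))
```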

But it is still cheaper than additional lithography passes. Do we want the panel maker to spend the money, or do we want the scaler/TCON vendor to spend the money? They both should; there are gains to be milked on the road to the 1000 Hz LCD.

Now, you want to add more channels? You may horrendously increase the complexity of the edge electronics, which now needs additional lithography layers -- imagine the complexity difference between 8-layer flash versus 16-layer, then 32-layer, then 64-layer.... It can scale geometrically as you add channels.

Now imagine that for screens. Ouch. Additional lithography passes = more defect risk. Imagine throwing away 50%+ of your yield (they already do today), now think about throwing away 90% of your yield. 10 scrapped screens for every 1 good screen.... ouch.

The alternative that avoids this is to subdivide the screen into multiple zones (a concurrent multiscan architecture), as seen in Concurrent OLED Scans.

[Image: concurrent multiscan diagram from the Concurrent OLED Scans thread]

(Can be used for LCD)

You treat the screen as if it were separate subdivided screens, and add ribbon connectors at the left edge too (a bit of a bezel increase, but that might be an acceptable "cost" for extra Hz). You're refreshing different sections of the screen independently of each other (without interleaving), so you don't need additional lithography passes. But you dramatically increase the number of screen-edge ribbon connectors, which adds bulk/cost -- though that may still be cheaper than additional lithography passes (which can go defective on expensive LCD glass stock).

One big problem is that this is latency-incompatible with current scanout systems, since you kinda need to transmit 8 refresh cycles concurrently, even though the GPU can only render the 8 refresh cycles sequentially. At 1000Hz, 8ms of input lag isn't much (far less than plasma and DLP), but it's still a human-noticeable downgrade from the classic top-to-bottom synchronized cable:panel scanout.
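Rough arithmetic behind that latency point, as a sketch (the 8-deep and 1000 Hz figures are the ones from the paragraph above):

```python
# Rough latency comparison (a sketch: the GPU renders frames sequentially, but
# zigzag-free concurrent multiscan wants N consecutive refresh cycles in flight).
hz = 1000
n_concurrent = 8
frame_time_ms = 1000 / hz                     # 1 ms per refresh cycle

# Classic synchronized cable:panel scanout shows a pixel row roughly as soon as
# it arrives, so the scanout-related lag is about one frame time.
classic_lag_ms = frame_time_ms

# With N separate frames being swept concurrently, the display needs N
# consecutive frames available at once, so the oldest is ~N frame times old.
multiscan_lag_ms = n_concurrent * frame_time_ms
print(classic_lag_ms, multiscan_lag_ms)       # 1.0 ms vs 8.0 ms
```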

Let's ignore panel performance and cables for now.

There are a lot of pick-poison engineering decisions that interact with each other:
- Software solutions vs hardware solutions (including low-hanging fruit not yet plucked)
- Bezel size
- Lithography layers
- Cost of supporting chips
- Cost/bulk of additional ribbon connections and supporting electronics (e.g. LVDS ribbon interfaces from the monitor motherboard (scaler/TCON) to the glass panel edge-circuits)
- Latency cost of successfully avoiding extra lithography layers or thick bezels
- Reliability of a fab cheap enough to produce the desired screen
- Etc.
theTDC wrote:
16 Mar 2021, 22:34
EDIT: Also, why LCDs before OLEDs? It seems easier to me for the OLEDs, so is this just economic reasons? GtG times really do appear to be a harsh limiter for LCD refresh speeds.
It's quicker (for a fixed amount of money) to speed up LCD GtG than to lower the cost of the many OLED processes at the 24" screen-size scale, and then bring either tech up to 1000 Hz.

However, it is possible OLED may beat LCD to 1000 Hz. But right now, behind the scenes, it's looking like LCD may cross the 1000 Hz finish line first.

We only need to speed up GtG by 3x (not impossible) to get 1000Hz with similar GtG error margins as today's 360 Hz LCDs. I've seen 50ms LCDs (500ms real-world) turn into 1ms LCDs (10ms real-world) in an incredible GtG speedup of lore. I see 0.1ms GtG90% TN/IPS by 2030 -- no problem. Some manufacturers have already achieved that for a cherry-picked color on some TN LCDs (with some side effects such as overdrive overshoot). It's not going to be perfect GtG100%, but it will produce amazingly clear 240Hz and 480Hz, with very usable 1000Hz (with minor GtG streaking between refresh cycles). Remember that 0.1ms GtG90% is 1-2ms real-world GtG100%, which means crosstalk-free strobing at ~500Hz-ish in theory (the GtG-hiding trick in the dark periods).
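As a rule-of-thumb sketch of that strobing arithmetic (the GtG100% and flash-length numbers below are the assumptions from this paragraph, not measurements):

```python
# Rule-of-thumb sketch: to strobe with zero crosstalk, each refresh period must
# hide the full GtG100% transition in the dark phase, then flash the backlight.
def max_crosstalk_free_strobe_hz(gtg100_ms: float, flash_ms: float) -> float:
    period_ms = gtg100_ms + flash_ms            # dark GtG-hiding time + visible flash
    return 1000.0 / period_ms

print(max_crosstalk_free_strobe_hz(gtg100_ms=1.5, flash_ms=0.5))   # ~500 Hz
print(max_crosstalk_free_strobe_hz(gtg100_ms=1.5, flash_ms=0.3))   # ~555 Hz at 0.3 ms MPRT
```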

Don't forget there are new panel technologies (other than TN, VA, IPS) that are capable of microsecond-GtG response times, too.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by theTDC » 17 Mar 2021, 01:35

Clicked on that link and read it to the best of my abilities. Trying to type this out quickly before bedtime. The rolling OLED scanout technique you mentioned made it pretty clear that this was for multiple different frames being scanned out at the same time, top to bottom, where you divide the screen into 8 different chunks for 8 simultaneous scans.

What I want is the SAME frame being scanned out at the same time. So if we have a, to make the math very easy, 1024x1024 monitor, with 16 scanning channels, we just divide the screen into 1024x64 chunks, and simultaneously scan out the frame for all those chunks at around 1ms. Is this not possible?

I don’t think that having to buffer a frame is much of a big deal when we are using this to get ~1,000 Hz.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by Chief Blur Buster » 17 Mar 2021, 02:18

theTDC wrote:
17 Mar 2021, 01:35
What I want is the SAME frame being scanned out at the same time. So if we have a, to make the math very easy, 1024x1024 monitor, with 16 scanning channels, we just divide the screen into 1024x64 chunks, and simultaneously scan out the frame for all those chunks at around 1ms. Is this not possible?
This Generates Tearing Artifacts

Yes -- but it generates tearing artifacts, unfortunately. Each scanout sweep needs to be assigned its own unique contiguous/continuous frame. So if you do multiple concurrent disconnected scanouts, you must have a unique frame assigned to each scanout position to avoid zigzag sawtooth artifacts.

Clearly, you missed this important post in that thread (in your rush read...heh)

You cannot concurrent-multiscan the same frame without zigzag artifacts:

[Image: zigzag tearing artifact from concurrent multiscan of the same frame]

This is a well known artifact on many LED marquee signs that did concurrent multiscan (like what you described). It's already in many old research papers including this one.

Want to see more proof, in a see-for-yourself fashion? This is easier to show-and-tell with a display capable of intentional simultaneous multiscanning (like an LED marquee that can double as a computer monitor), but you can see the laws of physics and deduce/extrapolate the artifacts. Here's a good TestUFO of scan skew: www.testufo.com/scanskew -- please view this on a true-60Hz-only display such as a DELL 60 Hz, HP 60 Hz, or Apple 60 Hz monitor. An iPad works too (but test both horizontal and vertical orientations -- some of those screens scan sideways along the long axis rather than the short axis like most LCDs). Observe how the line tilts?

Now if you're refreshing two halves of a display concurrently, you will see two separate tilts with a stationary tearline in the middle. This was confirmed on a display with simultaneous concurrent multiscan.

Now if you're refreshing 8 segments of a display concurrently, you will see eight separate tilts with stationary tearlines between them. (This also happens to some JumboTrons and LED signs due to their multirow matrix refresh techniques -- so zigzag artifacts from concurrent multiscan are unfortunately a well-known science to me.)

There are also motion-quality reasons why we keep returning to a single scanout raster sweep -- keeping the pixel refresh timing differences between adjacent pixels as small as possible is easiest with a single raster sweep (the only side effect is the tilt artifact).

TL;DR: For artifact-free multiscan, each scanout sweep must be assigned its own unique frame permanently through its entire sweep. That means you have N frames being refreshed simultaneously for N concurrent, separated, image-generating scanout sweeps.

TL;DR2: There's a reason why a single truecolor scanout sweep is the only way to do a mostly-artifactless (except the simple tilt, www.testufo.com/scanskew) non-global refresh. There's a reason why DLP and plasma generated artifacts from their non-contiguous, non-truecolor output. Any temporal flaws (out-of-sequence refreshing, whether for temporal dithering or for concurrent scanouts of the same frame) will create noticeable image decoupling into various kinds of artifacts (DLP graininess, DLP rainbows, plasma Christmas-tree noise, plasma contouring artifacts, multiscan tearing, etc), diverging further away from analog real-life motion.

For motion quality, it is best to keep temporal differences between adjacent pixels of a specific frame as small as possible, to minimize human-visible time-divergence artifacts of various kinds.
  • Color sequential (i.e. single chip DLP) = rainbow artifacts / color banding in motion blur
  • Multiscan from same frame = zigzag tearing artifacts during horizontal pan
  • Temporal dithering techniques (especially single chip DLP, plasma, etc) = noise, contouring, etc
  • Continuous top-to-bottom sweep of the same frame = only a minor tilt, thanks to the extremely tiny temporal difference between adjacent pixel rows for every single pixel row on screen.
What we've found is that we can eliminate multiscan tearing artifacts, but only if we assign a unique frame to its own unique sweep. For a concurrent multiscan that has no zigzag tearing artifacts, each screen slice "hands over" its sweep to the next screen slice, to emulate one continuous sweep as if it were a single screen. So you've got 8 continuous sweeps going on for an 8-channel concurrent multiscan.
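Here is a minimal simulation sketch of that handover rule (the function name and numbers are illustrative: a 1080-row panel whose single sweep takes 8 ms, refreshed by 8 staggered sweeps). At any instant, 8 consecutive frames are each being swept at a different vertical position, and no two sweeps ever touch the same frame:

```python
def rows_being_refreshed(t_ms, total_rows=1080, sweep_ms=8.0, n_sweeps=8):
    """Zigzag-free concurrent multiscan sketch: a new frame's sweep starts every
    sweep_ms / n_sweeps, and each sweep owns exactly one frame for its entire
    top-to-bottom pass, so n_sweeps consecutive frames are in flight at once."""
    frame_interval = sweep_ms / n_sweeps          # 1 ms here => 1000 new frames/sec
    newest_frame = int(t_ms // frame_interval)    # most recently started frame
    in_flight = []
    for f in range(newest_frame, newest_frame - n_sweeps, -1):
        progress = (t_ms - f * frame_interval) / sweep_ms   # 0..1 down the screen
        if 0.0 <= progress < 1.0:
            in_flight.append((f, int(progress * total_rows)))
    return in_flight                              # list of (frame, current pixel row)

print(rows_being_refreshed(12.3))   # frames 5 through 12, each at a different row
```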

Anyway, there are many ways to subdivide refreshing, each with their pros/cons, but the immutable constant remains: for any non-global refresh (where the last pixel is refreshed much later than the first pixel), the single sweep is always the least-motion-artifact method of non-global refreshing. You could do a spiral sweep or an outwards sweep (starting from the center and refreshing two pixel rows) too, but those produce different motion artifacts (like a tunnel-stretch effect, or a bidirectional tilt effect). So, full circle, a simple tilt from a single unidirectional sweep, one edge to the other, is better. It's the best of a pick-poison choice for a non-global refresh. Just watch www.testufo.com/scanskew on a 60Hz DELL or HP monitor or an Apple iPad (try both rotations).

The same thing happens during window dragging (drag a window left/right medium-fast on a 60 Hz slow-scan office monitor) while simultaneously eye-tracking the window -- e.g. a white Notepad window on a dark Windows desktop wallpaper. You'll see parallelograms instead of rectangles as the shape of the window during fast horizontal dragging (at about 1000-3000 pixels/sec, about the edge of your eye-tracking speed) -- that's the scanskew effect you just saw, applied to window dragging. At 120Hz the tilt is halved, and at 240Hz the tilt is quartered (now almost imperceptible, but you can still notice 240 Hz scanskew if you stand 10 feet away from TestUFO).
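The tilt itself is easy to quantify with a sketch (assuming the scanout sweep takes roughly the whole refresh period): the horizontal shear equals panning speed times scanout time, which is why it halves every time the refresh rate doubles.

```python
# Scanskew sketch: during a full-height scanout taking T seconds, an object
# panning horizontally at v px/sec is drawn v*T pixels further over at the
# bottom of the screen than at the top, so rectangles shear into parallelograms.
def scanskew_px(pan_px_per_sec: float, scanout_time_sec: float) -> float:
    return pan_px_per_sec * scanout_time_sec

for hz in (60, 120, 240):
    print(f"{hz} Hz: ~{scanskew_px(2000, 1 / hz):.0f} px of tilt at 2000 px/sec panning")
# ~33 px at 60 Hz, ~17 px at 120 Hz, ~8 px at 240 Hz: the tilt halves as Hz doubles
```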

Now, imagine seeing tearing during window dragging -- that's the problem of concurrent multiscan from the same frame, since the bottom edge of each multiscan area will have a large refresh-time difference from the top edge of the next screen slice. This happens to some displays already (e.g. certain older LED matrix signs and certain older LED JumboTrons that utilize multiscanned refresh).

Many of them subsequently fixed this by framebuffering per module and switching to 600 Hz (10 refreshes per frame) through 1920 Hz (32 refreshes per frame) -- a fast scanout velocity per LED matrix module (e.g. 32x32) that more closely mimics global refresh and avoids zigzag tearing. This theoretically makes Jumbotrons an easy test case for retina refresh rates (in theory anyway), if you gave the modules unique image data every refresh and kept the "one frame per contiguous, continuous, full-screen edge-to-edge scanout sweep" algorithm (which may mean you need to execute 50 simultaneous different refresh cycles of 50 frames, for a 50-module-tall JumboTron). I describe the potential retina-refresh-rate Jumbotron in the other thread, as they're probably the easiest technology to turn into retina refresh rates, with only minor module modifications.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by theTDC » 17 Mar 2021, 13:09

I did wonder if that would produce human visible tearing artifacts. Shame.

But now I'm sort of back to square one. Instead of having 8 channels updating 1/8th of the screen at the same time, have the circuitry update the top 8 rows of pixels, then the next 8, and so on. I suspect this might also cause some very minor artifacts, since people can now see bands of pixels updating at the same time, but if I had to guess I would say it's fine.

However, I don't understand the actual physical hardware enough to know the limitations of this, and you seem to be saying that updating multiple contiguous rows has some very serious hardware/manufacturing complications that make this quite difficult.

If so, that's quite the shame, since we could quite easily take extant OLED monitors and increase their refresh rates arbitrarily.

EDIT: Then again, update enough pixels in a chunk at once and we are going to get some weird block artifacts in our jello effect. So, to use your excellent TestUFO example with the vertical lines and the single-line refresh: as it is now, the pixels blend into each other. If you updated too many at a time, you'd get the equivalent of screen tearing, as the top, say, 16 lines of pixels are updated and now don't match the next 16 lines of pixels.

However, I think that effect would probably be vastly preferable to low frame rate single line refresh. Screen tearing would be less of a big deal at 1,000 Hz, especially if it's only for 16 lines of pixels. I also think that we could get this speedup for free when using retina displays and 2 channels, because the pixels are so small in the first place.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by theTDC » 17 Mar 2021, 14:49

Actually, this would be perfectly fine for a strobing LCD screen, right? If we turn off the strobe backlight, then we can just update the screen with however many channels we want: 8, 16, 32, whatever. We don't have to worry about the jigsaw effect, or whatever it's called, because the backlight is off.

Now, the only thing that limits us is the GtG time. You said that there are some technologies where the GtG times on LCDs could potentially be sub-millisecond. If that's the worst-case scenario, that's fantastic. Even if we have something like a 1ms worst-case GtG switch time, we could fairly easily make something approaching a 500 Hz monitor without ghosting.

0.5 ms backlight on. 0.5 ms scanout (possible due to ~16-32 channels). 1ms pixel switching time. 500 Hz refresh rate.
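As a quick sketch of that timing budget (using exactly the numbers above, and assuming the backlight stays dark through both the scanout and the GtG settling):

```python
# Strobed-LCD timing budget sketch (numbers are the assumptions listed above).
scanout_ms = 0.5      # fast multi-channel scanout while the backlight is off
gtg_ms = 1.0          # worst-case pixel switching time, also hidden in the dark
flash_ms = 0.5        # visible backlight pulse: this is the MPRT the eye sees
refresh_period_ms = scanout_ms + gtg_ms + flash_ms
print(f"{1000 / refresh_period_ms:.0f} Hz refresh rate, {flash_ms} ms MPRT")
# -> 500 Hz refresh rate, 0.5 ms MPRT
```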

Is this why you said that LCDs are more likely to get to ultra high refresh rates before OLEDs?


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by Chief Blur Buster » 17 Mar 2021, 15:37

theTDC wrote:
17 Mar 2021, 13:09
But now I'm sort of back to square one. Instead of having 8 channels updating 1/8th of the screen at the same time, have the circuitry to have the top 8 rows of pixels updating, then the next 8, and so on. I suspect this might also cause some very minor artifacts, since people can now see bands of pixels updating at the same time, but if I had to guess I would say it's fine.
I already wrote that screens already do this -- and the artifacts from it aren't noticeable.
theTDC wrote:
17 Mar 2021, 13:09
However, I don't understand the actual physical hardware enough to know the limitations of this, and you seem to be saying that updating multiple contiguous rows has some very serious hardware/manufacturing complications that make this quite difficult.
It requires a lot of circuit-over-circuit routing. Trying to interleave 8 or 16 or 32 channels can force additional lithography layers (the backplane of an LCD screen is just like a big computer chip / DRAM chip -- the whole screen surface is one big circuit): wires have to cross over/under each other, interleave with each other, and you can't add extra parallel wires without thicker bezels, etc.

It scales very messily -- 2 and 4 channels are easy enough to do, but it becomes more and more spaghetti as you keep doubling channels....
theTDC wrote:
17 Mar 2021, 13:09
EDIT:Then again, update enough pixels in a chunk at once and we are going to get some weird block artifacts in our jello effect. So to use your excellent Test UFO example with the vertical lines and the single line refresh, as it is now the pixels blend into each other. If you updated too many at a time, you'd get the equivalent of screen tearing for the screen, as the top, say, 16 lines of pixels are updated and now don't match the next 16 lines of pixels.
It will be very faint at contemporary motion speeds, so going several more channels is acceptable -- but there will come a point where artifacts from block-updating become an issue... At 4-channel and 8-channel refresh (even with OLED GtG speeds), it generally isn't an issue. That said, even 8-channel block update may be an issue for retina 180-degree VR screens (untested), given the Vicious Cycle Effect has a way of amplifying the visibility of ever-tinier artifacts.

At that point, it becomes better to update with multiscan (with the anti-zigzag algorithm of assigning each continuous scan sweep its own unique refresh), which creates interesting frame-pipelining issues from the GPU to keep lag low. One could theoretically do frame rate amplification concurrently on 8 separate time-shifted real-world renders, and scan-line-interleave the frame delivery from the GPU to the display (deliver 8 frames / refresh cycles simultaneously in a stacked roof-shingle fashion -- each frame offset by 1/960sec, each frame taking 1/120sec to complete delivery, with the cable delivering 8 frames simultaneously). It would parallelize well to 8-card SLI too, if you wanted to architect the frame-rendering workflow that way. You'd have 1/120sec rendering latency for 960Hz, but it would be consistent and you'd avoid frame-buffering latency (unless individual frames rendered faster than 1/120sec). There are ways to creatively deliver frames via custom modifications of a video signal: eliminate the vertical blanking interval (or make the VBI vestigial -- say, a minimum of only 3 pixels, to keep backwards compatibility with DisplayPort/HDMI specs) and put binary data inside the overscan (porch areas) indicating which refresh cycle each pixel row belongs to. It'd work fine over commodity HDMI and DisplayPort without spec modification, but would require custom mods at the GPU end (like pre-interleaving the frame buffers) and the display end (like de-interleaving the frame buffers), in a continuous rolling-window fashion.

I've seen some vendors framepack video signals unusually (like the Viewpixx 1440 Hz DLP projector -- they pull it off by framepacking multiple low-bit-depth refresh cycles into one transmitted frame, which executes as multiple refresh cycles on the DLP projector), so HDMI/DisplayPort is already being used as a simple bitpacked data transport. So there's no reason why temporally-shingled 8-concurrent-refresh delivery over the cable can't happen (it's simply time-division multiplexing of pixel rows!) to deliver 8 refresh cycles simultaneously (temporally shifted 1/8th of a refresh period apart) over the cable, to solve the frame-buffering-latency problem of zigzag-artifact-free concurrent multiscan.
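A minimal sketch of that shingled delivery timing (the 120 Hz base rate and 8 overlapping deliveries are the figures from the paragraphs above; the packing itself is illustrative, not an existing spec):

```python
# Sketch of temporally-shingled frame delivery: each frame still takes a full
# base-rate scanout to deliver, but 8 deliveries overlap on the cable, so a new
# frame completes every 1/960 sec (plain time-division multiplexing of rows).
base_hz, n_frames = 120, 8
delivery_sec = 1.0 / base_hz                  # 1/120 sec per frame on the wire
start_interval = delivery_sec / n_frames      # a new delivery starts every 1/960 sec

for f in range(10):
    start = f * start_interval
    end = start + delivery_sec
    print(f"frame {f:2d}: delivery {start * 1000:6.3f} ms -> {end * 1000:6.3f} ms")
# At any instant 8 deliveries overlap; per-frame delivery latency stays a fixed
# 1/120 sec, which is the consistent rendering latency mentioned above.
```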
theTDC wrote:
17 Mar 2021, 13:09
However, I think that effect would probably be vastly preferable to low frame rate single line refresh. Screen tearing would be less of a big deal at 1,000 Hz, especially if it's only for 16 lines of pixels.
Yes, this is true. 8 and 16 channels should be OK for higher Hz. You can generally double the number of channels to get double the Hz. So if you're doing 4 channels for 240Hz, going to 16 channels for 1000Hz won't yield worse artifacts, because the time differential between refresh blocks stays essentially identical. The scanskew halves at double the Hz, so this buys you headroom to double channels without anything becoming noticeable. Currently, 4-channel artifacts aren't noticeable at all at 240Hz, so 16-channel artifacts won't be noticeable at 960Hz. So it scales well visually (if only the electronics fabbing were simple for extra channels...)
theTDC wrote:
17 Mar 2021, 13:09
I also think that we could get this speedup for free when using retina displays and 2 channels, because the pixels are so small in the first place.
The problem with retina-resolution displays is that there are more pixels to refresh, so the resolution kind of pulls down the Hz you can reach before quality degrades.

Metaphorically, you can't kick soccer balls properly without a running start; hurriedly kicking many balls too quickly gives only weak kicks.

GtG becomes crap when you don't take the time to give each pixel a running start (a long enough voltage surge) to kick its GtG fast enough: not enough time is spent loitering on a specific pixel, pulsing that pixel's active-matrix transistor gate, because the electronics is rushing to refresh too many pixels too briefly. You get the washed-out colors, bright grey blacks, and faded-ghosting look of a massively overclocked LCD, where GtG becomes 2x to 10x worse due to weak, rushed kicks per pixel.

Active Matrix Transistors Made Things Easier Here, But...

The invention of the thin-film-transistor active matrix screen (LCD, OLED) made the GtG kick a lot easier by letting the transistor gate latch the GtG kick much longer and harder from a briefer original voltage pulse, but it still runs into the laws of physics -- you still need enough of a voltage kick along a long microwire to quickly get the transistor gate to its new state (new color), or to reset it back from its fade-to-equilibrium. We're using the transistor gate as an analog control rather than a digital one -- treating the LCD/OLED pixel's active-matrix transistor like an analog house chandelier dimmer (you know, a rotating-knob dimmer) rather than a binary ON/OFF transistor. Precise analog transistor-gate control is extremely hard to do perfectly across millions of pixels.

Even transistors fade back to equilibrium eventually, and a heavily overclocked active matrix LCD begins to revert to the bad smearing, low-contrast, bright-blacks appearance of an old passive matrix screen (like an old 1980s laptop screen).

An old 1024x768 LCD had 50x more pixel refresh time (to do 50ms GtG) than a 4K 120Hz screen has (to do 1ms GtG), so it's borderline miraculous that we're getting longer, tinier wires to kick tinier transistors faster with much briefer voltage pulses.
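As a deliberately simplified sketch of the "running start" problem (a plain RC charging model with an assumed time constant, ignoring the active-matrix latching tricks described above), you can see how the per-row address time collapses as resolution and Hz climb:

```python
import math

# Deliberately simplified "running start" model: each pixel's storage element
# charges toward its target voltage only while its row is addressed.  Shrink
# the row time and the pixel is left undercharged -- the washed-out look of an
# over-overclocked panel.  The RC constant below is assumed, for illustration.
def charge_fraction(row_time_s: float, rc_s: float) -> float:
    """Fraction of the target voltage reached in one row-address period."""
    return 1.0 - math.exp(-row_time_s / rc_s)

def row_time_s(rows: int, hz: float) -> float:
    """Time available per pixel row during a full-period scanout."""
    return 1.0 / (hz * rows)

rc = 2e-6   # assumed pixel charging time constant (illustrative only)
for rows, hz in [(768, 60), (1080, 240), (2160, 1000)]:
    t = row_time_s(rows, hz)
    print(f"{rows:>4} rows @ {hz:>4} Hz: {t * 1e6:5.2f} us/row, "
          f"{charge_fraction(t, rc) * 100:5.1f}% of target voltage reached")
```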

Many engineering tricks are used (which are far beyond the scope of this message, and which I cannot detail because of patents and all), but they are generic row-column addressing science, combined with Ohm's Law applied to tinier wires and the science of transistors, all conspiring to control pixels (whatever they may be, LCD- or OLED-based). But you can see that a lot of laws of physics are being pushed and worked around to pull off miracles today -- 1080p 60Hz is already metaphorically an engineering work of art, but we want more.

Contrast ratio may fall to 250:1 or 500:1 instead of the normal 1000:1 (IPS). This is because some colors require particularly strong GtG kicks. LCD molecules have spring-back forces: the liquid crystal molecules want to rotate away from their electrostatically-forced position. That's why a powered-off LCD usually fades to white, or fades to black (within a second or a few), depending on panel tech. (It's also one of the major causes of VRR flicker during sudden framerate changes, 30fps -> 240fps -> 30fps, so you kinda need different gamma curves for different frame-interval times to compensate for the screen's fade-to-equilibrium behavior between GtG kicks.) It's essentially a tilted soccer/football field, where you're trying to kick soccer balls uphill too! Weak kicks reduce the dynamic range / contrast ratio as a result, because you're fighting against the LCD's desire to go back to its unpowered equilibrium.

Even OLED has artifacts/problems caused by too-brief pixel refreshes, worse at dim picture settings where analog control of the AMOLED pixel transistors is a bit too difficult to do at ultrafine granularity; some have had to resort to PWM for dimmer pictures to avoid a worse pick-poison artifact (like worse streaky blacks or noisy colors). Samsung/iPhone OLEDs often resort to PWM at their dimmer settings to keep color quality high -- adjacent transistors don't make perfectly identical dimmers; they behave slightly differently from each other at these tiny scales.

Solutions for high resolution plus high refresh are successfully being achieved, but it's essentially an engineering problem that scales as a cube when it comes to the amount of time you have to fire a pixel's GtG kick. Double the Hz, double the X resolution, and double the Y resolution, and you have 1/8th the time to execute a running start (voltage kick) per pixel.

Some areas have hit a cliff (like a screen-tech equivalent of CPU GHz speeds no longer going up), and other workarounds are being used instead, such as adding channels, multiple transistors per subpixel, screen subdivision, and more. There's still a lot more left to milk out of LCD with an increasing number of workarounds (quantum-dot full-array MicroLED / MiniLED backlights FTW!)

Nonetheless, it is a major cube-scaling problem...


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by theTDC » 25 Apr 2021, 19:14

Great stuff, and it’s made me realize some of my earlier concerns were wrong. To address my incorrect concern about updating multiple sequential channels simultaneously causing weird block-update artifacts, we can use a few numbers.

1) Assume a display with a resolution of 1,000 pixels vertically.
2) Running at 1,000 Hz
3) 100 us OLED GtG time.
4) Therefore a 1 us line scanout time

If this monitor had just 1 line updating at a time, each line update would need to run in 1 microsecond (1 us). What that means is that, perhaps incorrectly assuming a linear GtG response, when the next line starts its update, the first line is just 1% of the way through its GtG. This is probably humanly visible, but I doubt it’s much of a concern.

However, it is extremely ambitious to expect such a blazingly fast scanout. Instead we could make a different assumption.

5) 10 scanout channels (ignore power of 2 because it makes the math harder)
6) Therefore each scanout can take 10 us, because we do 10 simultaneously to fit in our 1 ms total screen refresh rate window.

In that case, again assuming a linear GtG response, the 10-line “block” above is 10% finished by the time the 10-line block below starts. 10% of a GtG response is almost certainly humanly visible, but does it matter?

I guess this raises the interesting question: what is the maximum percentage of a GtG response that can be finished before the next scan block starts, before humans start rejecting the results? Furthermore, considering that we’d be running at 1,000 Hz instead of 100 Hz, the difference between successive images would be smaller, which should help cover up these problems anyway.
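Here’s the same arithmetic as a small sketch (linear GtG assumed, as stated above; the function name is just for illustration):

```python
# The numbers above: 1000-row panel, 1000 Hz, 100 us GtG, linear GtG assumed.
rows, hz, gtg_us = 1000, 1000, 100
frame_us = 1_000_000 / hz                      # 1000 us to scan the whole frame

def gtg_progress_at_next_block(channels: int) -> float:
    """With `channels` adjacent rows written simultaneously per step, how far
    (as a fraction) the previous block's GtG has progressed when the next
    block starts updating."""
    blocks = rows // channels
    step_us = frame_us / blocks                # time spent per block of rows
    return min(step_us / gtg_us, 1.0)

for ch in (1, 10, 100):
    print(f"{ch:>3} rows per step: previous block is "
          f"{gtg_progress_at_next_block(ch) * 100:.0f}% through its GtG at handoff")
# 1 -> 1%, 10 -> 10%, 100 -> 100% (fully visible block updates)
```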

Of course, the manufacturing difficulties might render this all completely moot regardless.

One thing which you’ve shot down, and which sounds potentially quite bad, is the non-sequential scanout, where instead of scanning out a block of, say, 16 lines, we split the screen up into chunks of 16, and do them independently. I know you’d get the nasty billboard/tearing effect, but I would like to see what this looks like in person, especially if you could potentially get much higher refresh rates to lessen the effect.


Re: Technical Limitations Preventing Ultra High Hz OLED screens

Post by Chief Blur Buster » 28 Apr 2021, 13:42

theTDC wrote:
25 Apr 2021, 19:14
Great stuff, and it’s made me realize some of my earlier concerns were wrong. To address my incorrect concern with updating multiple sequential channels simultaneously causing some weird block update artifacts, we can use a few numbers.
Did you click the link I gave you? ;)

You must have missed my earlier link to my algorithm that completely fixes this problem:
Chief Blur Buster wrote:Oh, by the way, there's an important caveat when multiscanning. If you do it wrong, it will sawtooth:

[Image: sawtooth artifact caused by multi-scanning of the SAME refresh cycle -- more info in this thread]

To prevent sawtooth artifact problems caused by old-style multiscanning: (A) each concurrent scanout pass must be a complete top-to-bottom sweep, and (B) each individual scanout must correspond to a complete, separate, consecutive refresh cycle (frame). To avoid the dreaded multiscan sawtooth, you must avoid multiscanning the same frame (refreshing the SAME refresh cycle in separate scanouts at the same time on different parts of the screen). Each scanout must be of a separate frame. That way, there can safely be multiple scanouts running concurrently on the same screen, with NO sawtooth artifacts at all.

Slower scanouts will have more skewing (e.g. see your computer monitor's scan skew here: http://www.testufo.com/scanskew ...) but this is mostly insignificant at 1/120sec scanout velocities. And we all know that a minor skew is not nearly as noticeable as sawtooth.

Then you can 8-way multiscan the same 120Hz OLED, to achieve 1000fps@1000Hz with absolutely ZERO sawtoothing.
Essentially you're "handing over" the scanout sweep from the previous segmented display above to the NEXT segmented display below, for its NEXT refresh cycle.

By keeping each (of the multiple) scanout sweeps its own assigned contiguous refresh cycle framebuffer, you eliminate the zig-zag artifact.

So emulating a single display scanout sweep despite using multiple concurrently-multiscanned sub-displays / segmented-displays.

Another related application of sub-displays is Jumbotron modules (32x32 or 64x64 RGB LED modules, with internal refresh rates of 600-1920Hz as repeat-refreshes of 60Hz frames -- but which could theoretically be modified to show unique frames per refresh).
Chief Blur Buster wrote:Possible Application: Jumbotrons & MicroLED Displays

LED Jumbotron modules are already 600 Hz to 1920 Hz
Those are the screens you see in stadiums today (as of 2019, 2020, pre-COVID). Many LED Jumbotron modules are 32x32 or 64x64 pixels in square-shaped modules. They run at a frame frequency of 60 Hz but refresh each LED 10 times, so there's an LED refresh frequency of 600 Hz. Some minor modifications to the modules would allow them to run at a frame frequency matching the refresh frequency = retina-refresh-rate JumboTrons!

MicroLED panels are a similar architecture
The MicroLED panels found on LED cinema screens such as "The Wall" and other MicroLED displays are also similarly modular -- they are essentially miniaturized, higher-density versions of giant stadium LED Jumbotrons. During this COVID pandemic, a boom of LED Jumbotrons is appearing. Now we have UltraHFR developments -- www.blurbusters.com/ultrahfr -- which could become popular by the 2030s.

The multiscanning algorithms are perfect for this
The custom OLED rolling-scan algorithms I've posted in this thread are perfect for these modular LED approaches, including a fix for the zigzag-artifact problem of concurrent multiscanning. That said, the engineering goal is that you need one unique refresh per module scanout, and each row of LED panels would need to use the algorithm that prevents sawtooth tearing artifacts from multiscanning.

That said, if there are many rows, you're going to have to stack the scanouts at a fixed velocity controlled by an LED module's individual scanout velocity. That's the unchangeable constant (if you don't want scan artifacts). For example, if the module scanout is 1/1200sec (1200Hz capability), and your screen is about 32 modules tall (32 rows of LED modules), you will need to spend 32/1200sec sweeping the full screen height, while following this algorithm, to prevent sawtooth / tearing / zigzag / combing motion artifacts. You'd have 32 concurrent scanouts, each assigned its own unique frame/refresh sweep, handing over between modules as each frame's sweep reaches its end. Your refresh rate will still be the module refresh rate, regardless of screen height. You won't have zigzag artifact problems with www.testufo.com/scanskew -- just a bit of line-tilting, which is not a major problem assuming a reasonably fast global sweep, as this has existed on almost all 60Hz screens due to their slow scan.

The important thing is that the zigzag multiscan artifact is fixed with this algorithm, and the technology becomes scalable (from Jumbotrons to MicroLEDs). If your screen requires many modules in height, you may want to speed up the individual-module scanout velocity, so that your total screen height is quickly scanned in one unidirectional sweep (as each module's scanout cascades to the next LED module).

Flexibility
The bottom line is that this provides a potential modular engineering path towards:
- Very scalable
- This thread is perfect for modular LED
- Retina refresh rates in modular LED (jumbotrons, MicroLED, Wall displays, etc)
- Modular LED is very friendly to this algorithm I've written in this thread
- Modular LED screens that can be a variable number of modules tall
- Could be compatible with future projectorless cinema UltraHFR screens, a perfect market for UltraHFR at www.blurbusters.com/ultrahfr

Mathematically, your scanout sweep time will be [module strip count] / [module scanout cycles per second], so 24 rows of 1000Hz Jumbotron modules would require 24/1000ths of a second for a global top-to-bottom sweep, and you would need 24 concurrent scanouts for 1000fps at 1000Hz. Or 12 concurrent scanouts (the active scanout pixel rows would be separated two modules apart vertically) for 500fps at 500Hz, all sweeping in the same direction (typically from the top edge to the bottom edge).
This applies to any display that is treated as multiple stacked sub-displays -- whether slice displays like the 960 Hz OLED concept I suggested, or modular block displays like a typical LED JumboTron.
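As a small sketch of that modular-LED arithmetic (the function and parameter names are illustrative):

```python
def jumbotron_sweep(module_rows: int, module_scan_hz: float, concurrent_sweeps: int):
    """Modular-LED sketch: for a screen `module_rows` modules tall, where each
    module internally scans at `module_scan_hz`, a zigzag-free global sweep
    takes module_rows / module_scan_hz seconds; frame rate is then set by how
    many staggered concurrent sweeps (each owning one unique frame) are run."""
    global_sweep_sec = module_rows / module_scan_hz
    frame_rate = concurrent_sweeps / global_sweep_sec
    return global_sweep_sec, frame_rate

print(jumbotron_sweep(24, 1000, 24))   # (0.024 s global sweep, 1000.0 fps)
print(jumbotron_sweep(24, 1000, 12))   # (0.024 s global sweep, 500.0 fps)
```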
