CRT-like Emulation on Low Persistence display?

Everything about displays and monitors. 120Hz, 144Hz, 240Hz, 4K, 1440p, input lag, display shopping, monitor purchase decisions, compare, versus, debate, and more. Questions? Just ask!
User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: CRT-like Emulation on Low Persistence display?

Post by Chief Blur Buster » 17 May 2014, 00:30

skrueck wrote:We were almost exactly on the same page and then I think we drifted apart again. Probably my fault for poor explanation and improper terminology used. If so, my apologies. I'll try one last time if you will indulge me.
No worries! Discussions can get quite complex on Blur Busters.
skrueck wrote:In other words, "stream" temporally accurate image fragments (rows, for example) into the eye instead of trying to flash entire images with much longer black intervals in between as we do now. Do this in order to greatly reduce overall latency of rendering, transmitting and displaying pixels.
This circles back to http://www.testufo.com/interlace -- notice that the top UFO is temporally accurate, since new positional information arrives even during subsequent interlace passes. But this gets temporally messed up (by eye-tracking motion) if you don't run at a frame rate matching the refresh rate (the number of interlace passes per second).
skrueck wrote:Could an OLED display currently handle 500 Hz? I think it probably could. If not, I think it could be made to if light output is already good enough that persistence is possible at 2 ms.
At 350cd/m2 during full persistence (16.7ms) for a typical computer monitor, you can still get ~50cd/m2 at 2ms persistence. A bit dim, but usable at night in a dark room. You could also use boost voltage pulses to compensate (as done in some strobe backlights), but that may not be safe for OLEDs. LightBoost hits almost 100cd/m2 while Turbo240 hits about 250cd/m2.
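The duty-cycle arithmetic behind that estimate can be sketched in a few lines of Python. This is a simplified linear model (real panels aren't perfectly linear, and `strobed_brightness` is just a hypothetical helper name); it lands in the 40-50 cd/m2 ballpark quoted above.

```python
# Rough duty-cycle brightness estimate: perceived luminance scales
# roughly linearly with the fraction of the refresh period the pixel
# is actually lit. (Simplified model; ignores overdrive and panel
# nonlinearities.)

def strobed_brightness(full_persistence_nits, frame_period_ms, persistence_ms):
    """Estimate brightness when pixels are lit only persistence_ms
    out of every frame_period_ms."""
    duty_cycle = persistence_ms / frame_period_ms
    return full_persistence_nits * duty_cycle

# 350 cd/m2 panel, 60 Hz frame (16.7 ms), 2 ms persistence:
nits = strobed_brightness(350, 16.7, 2.0)
print(f"{nits:.0f} cd/m2")  # roughly 40-50 cd/m2
```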
skrueck wrote:Can a current PC or transmission technique (HDMI cable, for example) handle this?
Yes, you could do 4-way interlacing at 480Hz over HDMI, or 2-way interlacing at 480Hz over DisplayPort. There's enough bandwidth. You don't even need to stoop as low as 6-way interlacing during 450Hz.
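A back-of-envelope check of those bandwidth claims, assuming 1080p at 8 bpc and ignoring blanking overhead. The link budgets (`HDMI_14_MPX`, `DP_12_MPX`) are approximate era-typical figures, not exact spec values:

```python
# Back-of-envelope pixel-rate check for interlaced 1080p at high refresh.
# Link budgets are approximations (8 bpc, blanking overhead ignored):
HDMI_14_MPX = 340e6   # ~340 Mpixels/s (HDMI 1.4 ballpark, assumption)
DP_12_MPX = 720e6     # ~720 Mpixels/s (DisplayPort 1.2 ballpark, assumption)

def pixel_rate(width, height, refresh_hz, interlace_ways):
    """Active pixels per second when only 1/interlace_ways of the
    rows are transmitted on each refresh pass."""
    return width * height * refresh_hz / interlace_ways

four_way = pixel_rate(1920, 1080, 480, 4)   # ~249 Mpx/s -> fits the HDMI budget
two_way = pixel_rate(1920, 1080, 480, 2)    # ~498 Mpx/s -> fits the DP budget
```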
skrueck wrote:At 1/450 seconds: render, transmit and display only rows 1, 7, 13, ...
At 2/450 seconds: render, transmit and display only rows 2, 8, 14, ...
At 3/450 seconds: render, transmit and display only rows 3, 9, 15, ...
At 4/450 seconds: render, transmit and display only rows 4, 10, 16, ...
At 5/450 seconds: render, transmit and display only rows 5, 11, 17, ...
At 6/450 seconds: render, transmit and display only rows 6, 12, 18, ...
At 7/450 seconds: render, transmit and display only rows 1, 7, 13, ...
Yes, that's what I call 6-way interlacing.
A slow motion software-based equivalent of this would be http://www.testufo.com/interlace#interleave=6
which does exactly this, but at a slower refresh rate than 450Hz.
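The row schedule quoted above can be generated with a short sketch (`interlace_pass_rows` is a hypothetical helper name):

```python
def interlace_pass_rows(pass_index, total_rows, ways=6):
    """Rows (1-indexed) drawn on a given interlace pass of an
    N-way interlace. Pass 0 draws rows 1, 7, 13, ...; pass 1 draws
    rows 2, 8, 14, ...; and the pattern repeats every `ways` passes."""
    start = (pass_index % ways) + 1
    return list(range(start, total_rows + 1, ways))

print(interlace_pass_rows(0, 18))  # [1, 7, 13]
print(interlace_pass_rows(6, 18))  # [1, 7, 13] -- pattern repeats
```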


(If it doesn't look correct, make sure you use a supported web browser)

The 450Hz equivalent would be 7.5 times more rapid than this running at 60Hz, or 3.75 times more rapid than this running at 120Hz. It certainly would produce usable high-framerate, low-latency, low-persistence motion.
skrueck wrote:Concern was also expressed with vertical motion if using rows of pixels. I'm not sure how bad that would be at 450 Hz rows compared with 75 Hz full frames? But I would assume there might be some ways to alleviate this if it were much of a problem.
When you try to do the same thing for vertical motion, you get some resolution-loss artifacts when vertical motion goes close to the same speed as the interlacing. Imagine motion going 450 pixels/second vertically; you're going to have times where you never display certain rows of pixels.
The slow motion equivalent (60 pixels/second at 60Hz or 120 pixels/sec at 120Hz) would be the following:
www.testufo.com/#interleave=6&test=interlace&direction=vert&pps=60

So you still get the vertical-motion interlace artifact problem when the vertical motion speed in pixels per second is near a multiple of the refresh rate. In this situation, certain parts of objects never get rendered, and thus never become visible.
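A toy simulation of this failure mode, assuming a 6-way row interlace and an object scrolling exactly 1 pixel per pass (i.e. 450 px/sec at 450 Hz); `visible_object_rows` is a hypothetical helper:

```python
# Toy simulation of the vertical-motion artifact: with 6-way row
# interlacing, an object scrolling exactly 1 pixel per pass lands so
# that only some of its rows ever coincide with the rows being drawn.
WAYS = 6

def visible_object_rows(speed_px_per_pass, passes=60):
    """Which object-relative rows (0..WAYS-1) are ever drawn while
    the object scrolls at the given speed."""
    visible = set()
    for k in range(passes):
        for offset in range(WAYS):
            screen_row = offset + speed_px_per_pass * k
            if screen_row % WAYS == k % WAYS:  # this row is drawn on pass k
                visible.add(offset)
    return sorted(visible)

print(visible_object_rows(1))  # [0] -- only 1 of 6 object rows ever drawn
print(visible_object_rows(0))  # [0, 1, 2, 3, 4, 5] -- static object fully drawn
```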
skrueck wrote:Again, apologies if I was not clear in my explanation or if I used any terms improperly. And thank you Chief for your time and to anyone else who reads this or comments.
You're welcome!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


mdzapeer
Posts: 73
Joined: 14 Feb 2014, 03:22

Re: CRT-like Emulation on Low Persistence display?

Post by mdzapeer » 17 May 2014, 03:13

Interlacing is the devil's work, son! (Just kidding, not being rude.)

You make good points, skrueck, and it's the exact same reason that led to interlacing being used in TV: lack of bandwidth. No reason it can't be used again for the same purpose.

BUT

you have to live with the trade-offs the Chief already mentioned in his detailed post.

Although interlacing starts to look better at VERY high resolutions (4K+), bandwidth is why you still have standards like 1080i (roughly the bandwidth/detail of 720p). As long as there are bandwidth limitations, interlacing remains a viable option.

Personally, I think you lose detail per frame, and when you deinterlace the original image to try to recover the lost detail, you run into its own set of problems: http://en.wikipedia.org/wiki/Deinterlacing

skrueck
Posts: 5
Joined: 14 May 2014, 17:36

Re: CRT-like Emulation on Low Persistence display?

Post by skrueck » 18 May 2014, 10:12

It certainly would produce usable high-framerate, low-latency, low-persistence motion.
Ok, now we're on the same page. :)
The 450Hz equivalent would be 7.5 times more rapid than this running at 60Hz, or 3.75 times more rapid than this running at 120Hz.
This "6-way interlace" that you show in your incredibly useful TestUFO utility seems to demonstrate the feasibility of this approach. In the 60 Hz simulation, each pixel is only being turned on 10 times per second. The difference between that and 450 Hz (7.5 times, as you pointed out) would be tremendous: the same magnitude of difference as playing a game at 20 FPS versus 150 FPS.

So if rendering and transmission time can be cut by a factor of six in this example, this would likely be preferable to many people in many applications. In applications such as VR, where motion latency can cause nausea and headaches, some graphical artifacts would seem preferable if that means dropping overall display latency from 30 ms to 5 ms.

And of course, the graphical artifacts demonstrated are near-worst-case scenarios shown continuously. Real-life scenes rarely pan vertically at a constant speed for extended periods with bright objects on a black background (other than in, say, space flight sims, where this would obviously have more potential to occur).

But in order to mitigate these artifacts, why limit this approach to rows displayed in sequence? Or rows at all?

So instead of rows being displayed in sequence:

1,7,13...
2,8,14...
3,9,15...
4,10,16...
5,11,17...
6,12,18...
1,7,13...

Use instead:

1,7,13...
4,10,16...
2,8,14...
5,11,17...
3,9,15...
6,12,18...
1,7,13...

or

1,7,13...
6,12,18...
2,8,14...
5,11,17...
3,9,15...
4,10,16...
1,7,13...

Or use a 2-way checkerboard method:
(showing a 4x4 segment of the display with numbers representing when pixels refresh)

1212
2121
1212
2121

3434
4343
3434
4343

Or a 4-way block:

1212
4343
1212
4343

5656
8787
5656
8787
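The 2-way checkerboard pattern above can be generated programmatically. A sketch (`checkerboard_phases` is a hypothetical name; the follow-up frame simply reuses the same grid with phases 3/4):

```python
def checkerboard_phases(rows, cols, ways=2):
    """Phase grid for a 2-way checkerboard: pixel (x, y) refreshes
    on passes where pass % ways == (x + y) % ways. Phases are
    numbered from 1 to match the diagrams above."""
    return [[(x + y) % ways + 1 for x in range(cols)] for y in range(rows)]

grid = checkerboard_phases(4, 4)
for row in grid:
    print("".join(str(p) for p in row))
# 1212
# 2121
# 1212
# 2121
```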

In the case of 3D, where each eye can receive different information, maybe use rows for one eye and columns for the other. Or use different sets for each eye. Or for the 2-way checkerboard example for 2 eyes:

1212 2121
2121 1212
1212 2121
2121 1212

Or 4-way block:

1212 3434
4343 2121
1212 3434
4343 2121

And although any of these techniques will introduce some type of artifacts, at very high Hz they may not be too much of a problem. And the increased speed might also decrease certain currently-existing problems like flickering.
Yes, you could do 4-way interlacing at 480Hz over HDMI, or 2-way interlacing at 480Hz over DisplayPort. There's enough bandwidth. You don't even need to stoop as low as 6-way interlacing during 450Hz.
From an artifact perspective, it might be seen as "stooping". But from a latency-reducing perspective, it could instead be seen as "aspiring".

And you are talking about needing to produce far more pixels per second than I was referring to. The example I started with assumed a computer capable of sustaining 1920x1080 @ 75 FPS. From a practical cost perspective, this seems a good current target. The PS4 and Xbox One can't even always sustain 60 FPS, let alone 75. Of course there are much faster GPUs, and GPUs will continue to get faster. But once resolution is upped to 4K or 8K or whatever, this entire conversation simply scales up to bigger numbers.

Thanks again Chief for the excellent comments and TestUFO utility, and to anyone else reading or commenting.

iopq
Posts: 48
Joined: 23 May 2020, 10:06

Re: CRT-like Emulation on Low Persistence display?

Post by iopq » 23 Jun 2021, 10:24

Chief Blur Buster wrote:
14 May 2014, 20:28
Alas, there are nasty temporal motion artifacts caused by interlacing, including 3-way interlacing and 4-way interlacing. I added some undocumented adjustments (&eastergg=1) to the www.testufo.com -> Tests -> Interlace animation to demonstrate this, from a motion artifacts perspective. The temporal offsetting of each refresh creates "venetian blinds" style artifacts.

TestUFO Demos of Interlacing
- Regular interlacing Animation
- 3-Way Interlacing Animation
- 4-Way Interlacing Animation
- 6-Way Interlacing Animation

For convenience, I've embedded the regular 2-way interlacing animation:



So, this is why you _really_ want progressive scan, even on a CRT.
The funniest thing is that at 280 Hz it looks AMAZING, like BFI, as long as FreeSync is on. I would totally go for more FPS and interlacing if a game had this setting, as long as it's not deinterlaced but actually just shows alternating lines.

iopq
Posts: 48
Joined: 23 May 2020, 10:06

Re: CRT-like Emulation on Low Persistence display?

Post by iopq » 27 Jun 2021, 01:18

skrueck wrote:
18 May 2014, 10:12
[...]
But in order to mitigate these artifacts, why limit this approach to rows displayed in sequence? Or rows at all?
[...]
Or use a 2-way checkerboard method:
(showing a 4x4 segment of the display with numbers representing when pixels refresh)
[...]
And although any of these techniques will introduce some type of artifacts, at very high Hz they may not be too much of a problem. And the increased speed might also decrease certain currently-existing problems like flickering.
So basically checkerboard rendering, but instead of interpolating the missing pixel data, you show a black pixel.

Instead of playing at 100 FPS, you could be getting 170 FPS (or something similar, depending on CPU load) with better MPRT, lower input lag, etc., at the cost of lower brightness (unless the manufacturer compensates for it in this mode).
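A minimal sketch of that idea: keep only the pixels on the current frame's checkerboard phase and paint the rest black rather than interpolating them (`checkerboard_black_fill` is a hypothetical helper; a real implementation would do this in the GPU pipeline, not per-pixel in Python):

```python
# Checkerboard-with-black-fill: on each frame, keep only the pixels
# whose checkerboard phase matches the frame parity and set the rest
# to black (0), instead of interpolating the missing pixel data.

def checkerboard_black_fill(frame, frame_index):
    """frame: 2D list of pixel values. Returns a copy where pixels
    on the off-phase of the 2-way checkerboard are black."""
    phase = frame_index % 2
    return [
        [px if (x + y) % 2 == phase else 0 for x, px in enumerate(row)]
        for y, row in enumerate(frame)
    ]

frame = [[9] * 4 for _ in range(4)]
out = checkerboard_black_fill(frame, 0)
# [[9, 0, 9, 0], [0, 9, 0, 9], [9, 0, 9, 0], [0, 9, 0, 9]]
```

Averaged over two consecutive frames, every pixel is lit half the time, which is where the halved persistence (and the brightness penalty) comes from.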

I would like a checkerboard-rendered UFO, lol
