Would a universal ML based CRT shader be preferable to something like DLSS?

Talk to software developers and aspiring geeks. Programming tips. Improve motion fluidity. Reduce input lag. Come Present() yourself!
blurfreeCRTGimp
Posts: 42
Joined: 28 May 2020, 20:36

Would a universal ML based CRT shader be preferable to something like DLSS?

Post by blurfreeCRTGimp » 23 Mar 2021, 16:50

I have seen a lot of positive impressions of Nvidia technology like DLSS or AMD's CAS, and I just had a thought.

I use CRT shaders quite often when emulating, and I even watch low-resolution YouTube content through RetroArch so I can use a CRT shader to get vastly improved output quality.

Wouldn't it be possible, and maybe even beneficial, to use the machine learning hardware in the latest GPUs to create a very fast, low-performance-impact CRT shader that could be used in a content-agnostic way, without dev-specific implementation?

You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.

Something like what you can do in ReShade, but without the high performance cost?

Am I being naive here somehow, or would this be worthwhile?

MCLV
Posts: 43
Joined: 04 Mar 2021, 15:04

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by MCLV » 23 Mar 2021, 18:05

blurfreeCRTGimp wrote:
23 Mar 2021, 16:50
You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.
You can't kill motion blur by applying a shader on a display with high persistence. CRTs exhibit low persistence because they flicker and the bright phase is very short. So if you want to replicate the CRT look and motion clarity, you have to replicate this aspect. You can do that via techniques like backlight strobing and black-frame insertion. BFI can be done in software as well, but current LCD displays are not fast enough to emulate CRTs this way. And the software approach is very straightforward, so you wouldn't need machine learning for it anyway.
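For what it's worth, here is a minimal sketch of that software BFI pacing, assuming 60fps content on a 240Hz display whose driver actually honors vsync. pygame is used purely as a convenient stand-in for whatever presentation API you already have, and render_game_frame is a hypothetical placeholder:

Code: Select all

# Minimal software BFI sketch: 60 fps content on a 240 Hz display.
# One visible refresh followed by three black refreshes per content frame,
# i.e. 25% persistence (4x less sample-and-hold blur), at a brightness cost.
import pygame

pygame.init()
# vsync=1 (with the SCALED flag) asks the driver to pace flip() to the refresh.
screen = pygame.display.set_mode((1280, 720), pygame.SCALED, vsync=1)

BLACK_REFRESHES_PER_FRAME = 3  # 240 Hz / 60 fps - 1

def render_game_frame(surface):
    """Hypothetical placeholder for the emulator/game renderer."""
    surface.fill((40, 40, 120))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    render_game_frame(screen)
    pygame.display.flip()              # the one visible refresh (~1/240 s of light)

    for _ in range(BLACK_REFRESHES_PER_FRAME):
        screen.fill((0, 0, 0))
        pygame.display.flip()          # dark refreshes emulate CRT phosphor decay

pygame.quit()

The catch is exactly the point above: even at 240Hz this only gets persistence down to about 4ms per frame, still far from a CRT's sub-millisecond flash, and the darkened image needs a lot of brightness headroom.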

blurfreeCRTGimp
Posts: 42
Joined: 28 May 2020, 20:36

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by blurfreeCRTGimp » 23 Mar 2021, 18:29

I think I wasn't clear. I know we would still need BFI and backlight strobing. I was curious about this idea being used to solve UPSCALE blur, i.e. non-native-resolution content looking bad. When I said this could be used "in one go", I meant an ML CRT shader to solve upscaling alongside the BFI and backlight strobing that already exist.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 23 Mar 2021, 20:25

blurfreeCRTGimp wrote:
23 Mar 2021, 16:50
You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.
It is too much cart-before-horse:

Eliminating motion blur is a well-known science in my head, and you don't need artificial intelligence to eliminate motion blur.

Theoretically AI could be a good autoconfigurator for the pixel structure and persistence ("Hello AI, please duplicate this specific CRT's characteristics, to the best of the limited abilities of this particular digital display. Thank you!"). It could essentially just generate the configuration data, with the most faithful parameters possible.

But the underlying science is just abstract configuration, both spatially (like MAME HLSL) and temporally (temporal extensions to HLSL). Before we apply AI, we need to nail it by formula first.
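To make "abstract configuration" concrete, here is a rough sketch of the kind of parameter set I mean, spatial plus temporal. Every field name here is invented for illustration and is not MAME HLSL's actual option naming:

Code: Select all

from dataclasses import dataclass

@dataclass
class SpatialCRTParams:
    # Spatial look of the tube (what MAME HLSL-style filters already expose).
    mask_type: str = "aperture_grille"   # or "shadow_mask", "slot_mask"
    dot_pitch_mm: float = 0.25
    beam_spot_sigma: float = 0.6         # Gaussian spread of the simulated beam spot
    curvature: float = 0.02              # bow/pincushion amount

@dataclass
class TemporalCRTParams:
    # Temporal behaviour of the tube (the "temporal extensions" idea).
    refresh_hz: float = 60.0
    phosphor_decay_ms: float = 1.5       # persistence of the simulated phosphor
    rolling_scan: bool = True            # emulate top-to-bottom scanout
    dark_fraction: float = 0.75          # fraction of each refresh held dark (BFI)

# An AI "autoconfigurator" would only need to fill in these numbers for a given
# CRT + panel combination; the rendering math underneath stays plain formula work.
crt_profile = (SpatialCRTParams(), TemporalCRTParams())
print(crt_profile)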

I know how to program a temporal HLSL, via abstract math (no AI needed), as persistence-improving extensions to MAME HLSL, but it doesn’t put food on the table like helping manufacturers improve hardware motion blur reduction backlights.

I have GitHub items filed for temporal extensions to spatial CRT filters: one for RetroArch and one for MAME HLSL improvements.

Another way AI could help: we will have AI programming assistants that can help us write custom software/functions that are a bit difficult to write. But auto-completing your skill gaps is not exactly the same thing as applying AI to the shader itself...

Right AI Tool for the Right AI Job (Medium Term)
Now that being said, there's another area for AI: frame rate amplification of retro content. Perfect lagless interpolation of Sonic the Hedgehog to 240fps 240Hz, or even to 1000fps 1000Hz, with pixel-perfect faithfulness in blurless sample-and-hold. Now that is probably the right AI tool for the right AI job. It's another form of lowering persistence, but via frame rate amplification rather than via BFI. The trick is doing it arcade-faithfully and laglessly, with pixel-perfect intermediate frames. Mathematically, it's possible for many 8-bit games: a human can Photoshop perfect intermediate frames, and if that can be done, then an AI can in theory do it instantaneously. The AI would play the game, learn the parallax reveal effects, and pretrain its frame rate amplification.

Training sets would be downloadable on a per-game basis, running the original ROM. Then the AI would frame rate amplify the 60fps output to an unlimited frame rate of your choice, pixel-perfectly. The technology is here today; the RTX 3080 has enough computing power to frame rate amplify 60fps 8-bit retro games to 240fps pixel-perfectly, with the right AI training sets.

The holy grail of flawless "interpolation"/"extrapolation"/"reprojection"/whatever you call it: the name doesn't matter if it's done laglessly and pixel-perfectly. You could even turn a ghost moving 2 pixels per 60fps frame into 0.5 pixels per 240fps frame, by double-pixelling the whole screen grid and generating those intermediate pixel positions, even floating-point pixel coordinates, even for objects moving over 8-bit pixelated backgrounds.
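As a toy illustration of the arithmetic only (not of the AI, which would have to handle occlusion and parallax reveals), generating those floating-point intermediate positions for a 60fps-to-240fps amplification looks like this; the sprite coordinates are made up:

Code: Select all

# Toy arithmetic: a sprite moving 2 px per 60 fps frame becomes 0.5 px per
# 240 fps frame, using floating-point (sub-pixel) intermediate positions.
AMPLIFY = 4  # 240 fps / 60 fps

def intermediate_positions(x0, y0, x1, y1, steps=AMPLIFY):
    """Positions for the generated frames between two real source frames."""
    return [(x0 + (x1 - x0) * i / steps,
             y0 + (y1 - y0) * i / steps) for i in range(1, steps + 1)]

# A ghost at x=100 in source frame N and x=102 in source frame N+1:
print(intermediate_positions(100, 0, 102, 0))
# [(100.5, 0.0), (101.0, 0.0), (101.5, 0.0), (102.0, 0.0)]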

CRTs don’t have digital pixel positions, so analog pixel positions are a perfect fit for CRT emulation too. You could even do it hybrid, if you hate flicker: spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).

Frame rate amplification algorithms are a great fit for ML. It can't be done pixel-perfectly for complex material (3D graphics...), but it should be possible for a lot of 8-bit and 16-bit retro 2D material, from Pac-Man through Sonic the Hedgehog lore.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Kamen Rider Blade
Posts: 61
Joined: 19 Feb 2021, 22:56

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Kamen Rider Blade » 25 Mar 2021, 14:48

Chief Blur Buster wrote:
23 Mar 2021, 20:25
CRTs don’t have digital pixel positions, so analog pixel positions are a perfect fit for CRT emulation too. You could even do it hybrid, if you hate flicker: spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).
Is the hexagonal RGB subpixel arrangement of PC CRTs the reason that old PC CRTs allowed multiple resolutions to look good, compared to traditional scalers on LCDs?

https://en.wikipedia.org/wiki/Pixel#Subpixels

Is there any way a modern LCD panel would be able to re-create the CRT level of multi-resolution scaling while maintaining the sharpness of modern LCD panels? Or would it require a brand-new subpixel arrangement?

If it needs a new subpixel arrangement, would a hexagonal pattern where each hexagon handles RGB completely be useful, subdividing each hexagon into 6x isosceles triangles that handle RGB twice, or RGB + CYM, or some other subpixel color combo?

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 25 Mar 2021, 15:05

Kamen Rider Blade wrote:
25 Mar 2021, 14:48
Chief Blur Buster wrote:
23 Mar 2021, 20:25
CRTs don’t have digital pixel positions, so analog pixel positions are a perfect fit for CRT emulation too. You could even do it hybrid, if you hate flicker: spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).
Is the hexagonal RGB subpixel arrangement of PC CRTs the reason that old PC CRTs allowed multiple resolutions to look good, compared to traditional scalers on LCDs?

https://en.wikipedia.org/wiki/Pixel#Subpixels

Is there any way a modern LCD panel would be able to re-create the CRT level of multi-resolution scaling while maintaining the sharpness of modern LCD panels? Or would it require a brand-new subpixel arrangement?

If it needs a new subpixel arrangement, would a hexagonal pattern where each hexagon handles RGB completely be useful, subdividing each hexagon into 6x isosceles triangles that handle RGB twice, or RGB + CYM, or some other subpixel color combo?
It's more the analog position of the electron beam than the hexagonal shadowmask phosphor layout. Any pixel mask (shadowmask, aperture grille, a grid like LCD, an offset grid like arcade tubes, PenTile, even randomized) can be analog-pixel-positioned with fuzzy beam-spot overlaps. Resolved resolution is simply a function of the density of the phosphors.

Subpixel-aware graphics scaling is already being done on many panels, such as OLED VR headsets. The original OLED Oculus Rift did the equivalent of CRT-style multi-resolution scaling. CRT-style subpixel scaling works wonderfully well at improving PenTile displays, but it was born out of necessity, because of PenTile's low DPI.

You can program your own subpixel-aware scaler; the math is pretty simple, but most programmers don't really understand how to create a properly good CRT filter or a subpixel-aware scaling algorithm (it's merely an image version of ClearType).

MAME HLSL is capable of doing this too: you can create custom CRT masks and get analog-like scaling on LCDs/OLEDs. The mask doesn't have to be hexagonal. It looks better on HDR OLEDs, where the simulated phosphors can be made brighter to make up for the dimming caused by the black gaps between emulated phosphor dots.

It's not the best way to get retina resolution, but it gives you analog-like scaling. VR headsets have to do this out of necessity: you can tilt your head in any direction and the scenery inside has to counter-tilt to compensate, and that requires clever scaling algorithms to prevent aliasing artifacts, especially on blown-up smartphone-style panels turned into IMAX-sized screens. So CRT-style subpixel scaling algorithms are already being used there, out of necessity.

A digital version of CRT scaling can be done via:
  • Overkill resolution, enough to emulate phosphor dots
  • A phosphor dotmask (alpha channel) reformatted to be subpixel-aware for the destination display
  • A scanlines mask (alpha channel) reformatted to be subpixel-aware for the destination display
  • Take an image of any resolution and high-quality scale it separately, ideally with subpixel-aware scaling
  • Mask the image with the subpixel-aware scanlines mask; then
  • Mask the image with the subpixel-aware dotmask
Subpixel-aware alpha channels can be done as 3 separate images on the R, G, B channels of the original image.
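Here is a deliberately crude numpy sketch of that recipe (upscale, then multiply by a scanline mask and a dot mask per channel). It only does the source-side phosphor mask; the destination-subpixel-aware (ClearType-style) step and the geometry are omitted, and the helper functions are simplified stand-ins, not anybody's actual shader:

Code: Select all

import numpy as np
from PIL import Image

OVERSAMPLE = 6  # within the ~4-to-8x source:destination range mentioned below

def scanline_mask(h, w, period=OVERSAMPLE, duty=0.7):
    """Alpha mask with dark gaps between simulated scanlines."""
    rows = (np.arange(h) % period) < int(period * duty)
    return np.repeat(rows.astype(np.float32)[:, None], w, axis=1)

def dot_mask(h, w, triad=3):
    """Crude RGB triad mask: each oversampled column favours one primary."""
    mask = np.zeros((h, w, 3), dtype=np.float32)
    cols = np.arange(w) % triad
    for c in range(3):
        mask[:, cols == c, c] = 1.0
    return mask

def crt_composite(src):
    """src: HxWx3 uint8 frame at the emulated console's native resolution."""
    up = Image.fromarray(src).resize(
        (src.shape[1] * OVERSAMPLE, src.shape[0] * OVERSAMPLE), Image.LANCZOS)
    big = np.asarray(up, dtype=np.float32) / 255.0
    h, w, _ = big.shape
    big *= scanline_mask(h, w)[..., None]   # apply scanline alpha mask
    big *= dot_mask(h, w)                   # apply phosphor dotmask
    # Crude gain to compensate for brightness lost to the black gaps;
    # on a real HDR OLED this is where the extra headroom gets spent.
    return (np.clip(big * 2.5, 0.0, 1.0) * 255).astype(np.uint8)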

This is pretty much resolution-independent CRT scaling emulation in a nutshell (without geometry like bow/pincushion/astigmatism, for simplicity). There is some low-hanging fruit that's easy to do with bitmap math operations (add/subtract/blend/xor/and/or two images together); the hard part of getting the best quality is being both source subpixel-aware (dotmask) and destination subpixel-aware (like ClearType logic, but on images). PenTile displays have long been using subpixel-aware scaling algorithms, so MAME HLSL output to a PenTile display probably looks reasonably "analog-decent", as long as you've got an approximately 4-to-8x oversampling factor (source:destination resolution ratio).

Many CRT filters blur the dots significantly to avoid the dimming effect, since the black gaps between simulated phosphor dots sap so much brightness that they are quite difficult to emulate accurately.

It's all stuff already being done today if you cherry-pick the hardware/software technology. It's just not done on traditional things like the Windows desktop or PC games on desktop 2D panels, so it's probably unknown to some readers.

The main key is overkill resolution, sufficient brightness, and great color. Thus, bright retina 4K OLEDs make a great starting point for emulation of analog CRT scaling algorithms.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by thatoneguy » 27 May 2021, 11:04

If you have enough resolution to render the phosphor mask of, say, an FW900, then I suppose you could do that. I'm not sure how much would be needed for that.
For a standard consumer CRT TV, for example, it can take above-8K resolution to render every phosphor.

As Chief puts it, however, overkill resolution is the brute-force solution to mimicking a CRT's variable-resolution behaviour.
But with overkill resolution you could probably even emulate various dot pitches, I'm guessing. For example, if you want a coarse dot pitch like those found in many arcade cabinets for old low-res games, that could be done in the future.

You could probably even program some kind of shader that changes TVLines based on the resolution of the content. E.g. in a game like Symphony of the Night on the PSX, you might want to simulate 240 TVLines for the 240p gameplay and 480 TVLines for the 480i menu, and you could do that if you program a shader like that.
You could even render all sorts of pixel shapes, including perfectly circular pixels like those in LED matrices, and make them as big or as small as you want (within the limits of your own resolution).

With CRTs you take what you get, whereas with shaders there's a lot of untapped potential to do all sorts of things that CRTs cannot.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 01 Jun 2021, 16:07

thatoneguy wrote:
27 May 2021, 11:04
But with overkill resolution you could probably even emulate various dot pitches
Yep. You can already configure this in MAME HLSL successfully and get pretty accurate variable dot pitch of many VGA CRTs with just a 4K display.
thatoneguy wrote:
27 May 2021, 11:04
You could probably even program some kind of shader that changes TVLines based on the resolution of the content. E.g. in a game like Symphony of the Night on the PSX, you might want to simulate 240 TVLines for the 240p gameplay and 480 TVLines for the 480i menu, and you could do that if you program a shader like that.
Things will look worse that way. You need an oversampling factor of at least 2x to compensate for Nyquist scaling factors, to prevent it from looking odd (non-CRT-esque).

TVLines is a somewhat arbitrary horizontal resolution measurement, taken from a test pattern of vertical lines. TVLines has nothing to do with scanlines or vertical resolution! So 320 pixels wide at only 240 TVLines will look godawful. TVLines is simply a human-read number from an analog test pattern, and it would be a non sequitur to emulate it exactly. TVLines is never read by an electronic device; it's a test pattern card put in front of a camera (or an analog test pattern generator). It has no basis in anything pixel-related (digital pixels). TVLines is an analog continuum.

What is actually 240TVL is often visible at 250TVL (with 10% blurring) and visible at 230TVL (as crisply as 240TVL). It's an analog continuum. So what is human-measured as "240TVL" may very well actually be 242TVL (clearly visible lines), with analog fadeoff (blend to solidness) to about 284TVL. There are no exact step points to TVLines (which was a physical test pattern card held up in front of a camera, of vertical lines measuring horizontal resolution, unlike scan lines, which are horizontal lines creating vertical resolution).
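A quick back-of-the-envelope for the oversampling point, assuming the usual conventions that TVL counts resolvable vertical lines per picture height and that the picture is 4:3, with a 4x oversampling factor (2x is the bare Nyquist minimum; 4-to-8x was the range mentioned earlier). These are assumptions for illustration, not measurements of any particular tube:

Code: Select all

# Destination pixels needed across the screen width to simulate a given TVL
# figure, assuming TVL = lines per picture height on a 4:3 picture.
def dest_pixels_across(tvl, aspect=4 / 3, oversample=4):
    lines_across_full_width = tvl * aspect
    return round(lines_across_full_width * oversample)

for tvl in (240, 320, 640, 800):
    print(f"{tvl} TVL -> ~{dest_pixels_across(tvl)} px wide at 4x oversampling")
# 240 TVL -> ~1280, 320 TVL -> ~1707, 640 TVL -> ~3413, 800 TVL -> ~4267

So a 3840-pixel-wide 4K panel comfortably covers a ~640TVL simulation at 4x, which lines up with the VGA-CRT-on-4K comment above, while higher TVL figures start to want 8K.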

BTW, a good related article is Making Of: Why Are TestUFO Display Motion Tests 960 Pixels Per Second?, which disses old-fashioned test patterns and analog-era-invented measurement methods!

So, get it out of your mind that TVLines makes any sense here. ;)
thatoneguy wrote:
27 May 2021, 11:04
You could even render all sorts of pixel shapes, including perfectly circular pixels like those in LED matrices, and make them as big or as small as you want (within the limits of your own resolution).
Yes, with overkill resolution.

The holy grail CRT for console emulation is often a Sony PVM. Those are coveted on the used market, with people picking up old broadcast monitors for use with consoles.

One can emulate an NTSC PVM CRT to roughly retina levels on a desktop 4K OLED panel with sufficient HDR headroom to compensate for the brightness lost to the black gaps between phosphor dots and to illuminating primary colors only on separate OLED pixels.

Now, the FW900... that probably will indeed require an 8K HDR OLED to retina-emulate correctly. However, one can do a very good approximate facsimile of an FW900 with some creative adjustments on a desktop or laptop 4K OLED, by permitting some slight subpixel defocussing to hide the resolution limitation.

For example, MAME HLSL contains a huge number of options that can look really kick-ass on a 4K OLED such as an LG HDTV: brightening the image and hiding resolution limits by defocussing the subpixels slightly so they blend into each other, utilizing more of the panel's light-emission headroom.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by thatoneguy » 24 Jun 2021, 04:40

^Well, technically, the higher the horizontal resolution (TVL), the more pronounced the scanlines look in 240p games (think of your typical Sony BVM/PVM CRTs with chunky scanlines, which look like the scanlines on emulators).

Anyway, I meant that you could do lower TVL for those who want that consumer-CRT look with no noticeable scanlines (a lot of those were even lower, like 150TVL), but I suppose you may be right that something like 320TVL would make more sense (and probably 640 for 480i menus). The point I was trying to make is that you could do variable things (like the moment 480i content appears on screen, the shader flicks to simulating a 640TVL screen, and during gameplay it does 320) that you wouldn't be able to do with a CRT (or at least you'd be a lot more limited, even on the most advanced CRT monitors).
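That kind of resolution-triggered preset switching is trivial to express on the shader-host side; a hypothetical dispatcher (preset names invented for illustration) could be as simple as:

Code: Select all

# Hypothetical preset dispatcher: pick a simulated-TVL mask preset from the
# vertical resolution of the frame the emulator core just produced.
PRESETS = {
    240: "consumer_320tvl",   # 240p gameplay -> coarser consumer-TV style mask
    480: "consumer_640tvl",   # 480i menus    -> finer mask
}

def pick_mask_preset(frame_height, fallback="consumer_320tvl"):
    return PRESETS.get(frame_height, fallback)

print(pick_mask_preset(240))  # consumer_320tvl
print(pick_mask_preset(480))  # consumer_640tvl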

That said, I do think old lower-res games look smoother with coarser dot pitches, and I do think there's such a thing as an optimal horizontal resolution for a particular source resolution. When you play a 240p or 480p game console on, say, a high-end computer CRT monitor, it looks too sharp to me (especially 240p), like the games weren't made with those kinds of dot pitches/horizontal resolutions in mind.

The TVLines thing is mostly for emulators of old games; it's kinda irrelevant for everything else, of course.

Actually, I'm not sure about the FW900 needing only 8K. This standard Sony Trinitron consumer CRT would require more than 8K to emulate the mask, according to this:
https://twitter.com/ruuupu1/status/1356955170551156739
It seems the horizontal resolution of 8K is more than enough, but you need about ~5405 pixels vertically, which the 4320 lines of 8K falls short of.
Now, I don't know whether something like the FW900 has an even more detailed mask or uses a similar one, but if it does have a more detailed mask, then we're looking at possibly an even bigger resolution to emulate it. Time will tell.

But for my money, it's not even worth it to emulate the high-end CRT monitors (unless you really want that multisync ability for some 720p-locked games stuck on PS3/360, and maybe some rare PS4-only game that's stuck at 1080p or something), since modern displays are just outright better at high resolutions.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 25 Jun 2021, 15:52

thatoneguy wrote:
24 Jun 2021, 04:40
Actually, I'm not sure about the FW900 needing only 8K. This standard Sony Trinitron consumer CRT would require more than 8K to emulate the mask, according to this:
https://twitter.com/ruuupu1/status/1356955170551156739
It seems the horizontal resolution of 8K is more than enough, but you need about ~5405 pixels vertically, which the 4320 lines of 8K falls short of.
Now, I don't know whether something like the FW900 has an even more detailed mask or uses a similar one, but if it does have a more detailed mask, then we're looking at possibly an even bigger resolution to emulate it. Time will tell.
Possibly true -- but it's borderline, because you can simply blend the limitation into retina resolution instead, i.e. soften to the point where it's still retina resolution, but no further. It's like softening a 16K image to 8K -- it's still very sharp, much like downconverting 4K to 1080p. And you can use subpixel-aware algorithms / CRT filters as an additional resolution enhancer during the downconversion.

For a 22" screen at 8K, it's already approximately twice an average human's angular retina resolution at arm's length viewing distance. Even with nyquist sampling issues, things still look good with a downconversion halving.

In the same vein, 8K video versus 8K video downconverted to 4K looks very similar to most humans because of camera/source limitations. It's not native 8K anymore, but "native 8K downconverted to 4K" often looks far better than footage from a native 4K camera.

Likewise, the original CRT-targeted material being emulated (240p-1080i), viewed through the high-resolution CRT filter, has the additional advantage of being the dominant underlying resolution seen by the human eye; so a 16K CRT filter smartly downconverted (with subpixel-aware scaling) to 8K will probably look indistinguishable, to the majority of human eyes, from a true 16K 22" display.

I know this effect because I've tested CRT filters on a 4K OLED laptop display (some top-of-the-line laptops have 4K OLED screens), and it's almost retina already despite "not being enough", thanks to the pixels already being small and the downconversion-indistinguishability factor: material beyond retina resolution is downconverted to something still at or above retina resolution (e.g. from 2x beyond retina down to 1x retina). In theory, one only needs to display the scaled result at up to your vision's maximum angular resolution (aka "retina resolution") and it will still serve the purpose of looking the same.

Obviously if you lean close or use a magnifying glass...
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter
