Would a universal ML based CRT shader be preferable to something like DLSS?

Talk to software developers and aspiring geeks. Programming tips. Improve motion fluidity. Reduce input lag. Come Present() yourself!
blurfreeCRTGimp
Posts: 26
Joined: 28 May 2020, 20:36

Would a universal ML based CRT shader be preferable to something like DLSS?

Post by blurfreeCRTGimp » 23 Mar 2021, 16:50

I have seen a lot of positive impressions of Nvidia technology like DLSS or AMD's CAS, and I just had a thought.

I use CRT shaders quite often when using emulation, and I even watch low-resolution YouTube content through RetroArch so I can use the CRT shader to get a vastly improved quality output.

Wouldn't it be possible, and maybe even beneficial, to use machine learning hardware in the latest GPUs to create a very fast, low-performance-impact CRT shader that could be used in a content-agnostic way without dev-specific implementation?

You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.

Something like what you can do in ReShade, but without the high performance cost?

Am I being naive here somehow, or would this be worthwhile?

MCLV
Posts: 38
Joined: 04 Mar 2021, 15:04

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by MCLV » 23 Mar 2021, 18:05

blurfreeCRTGimp wrote:
23 Mar 2021, 16:50
You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.
You can't kill motion blur by applying a shader on a display with high persistence. CRTs exhibit low persistence because they flicker and the bright phase is very short. So if you want to replicate the CRT look and motion clarity, you have to replicate this aspect. You can do that via techniques like backlight strobing and black frame insertion (BFI). BFI can be done in software as well, but current LCD displays are not fast enough to emulate CRTs this way. And the software approach is straightforward enough that you wouldn't need machine learning for it anyway.
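To illustrate how straightforward the software side is, here is a toy Python sketch (illustrative only, not taken from any real emulator) that maps display refreshes to visible frames versus inserted black frames:

```python
def bfi_schedule(display_hz: int, content_fps: int, visible_refreshes: int = 1):
    """Map each display refresh within one content frame to 'frame' or 'black'.

    With 60 fps content on a 120 Hz display and visible_refreshes=1, every
    frame is shown for one refresh and followed by one black refresh,
    roughly halving persistence (a ~50% duty cycle).
    """
    if display_hz % content_fps != 0:
        raise ValueError("display refresh rate must be a multiple of content fps")
    refreshes_per_frame = display_hz // content_fps
    if not (1 <= visible_refreshes <= refreshes_per_frame):
        raise ValueError("visible_refreshes out of range")
    # First N refresh slots show the frame, the rest show black.
    return ["frame" if slot < visible_refreshes else "black"
            for slot in range(refreshes_per_frame)]

# 120 Hz display, 60 fps content, 1 visible refresh per frame:
print(bfi_schedule(120, 60))   # ['frame', 'black']
# 240 Hz display, 60 fps content -> 25% duty cycle:
print(bfi_schedule(240, 60))   # ['frame', 'black', 'black', 'black']
```

As the post notes, the limiting factor is not this logic but how fast and cleanly the LCD can actually transition to and from black each refresh.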

blurfreeCRTGimp
Posts: 26
Joined: 28 May 2020, 20:36

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by blurfreeCRTGimp » 23 Mar 2021, 18:29

I think I wasn't clear. I know we would still need BFI and backlight strobing. I was curious whether this idea could solve UPSCALE blur, i.e. non-native-resolution content looking bad. When I said "in one go", I meant an ML CRT shader solving upscaling alongside the BFI and backlight strobing that already exist.

Chief Blur Buster
Site Admin
Posts: 9189
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 23 Mar 2021, 20:25

blurfreeCRTGimp wrote:
23 Mar 2021, 16:50
You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.
It's too much cart-before-the-horse:

Eliminating motion blur is well-understood science, and you don't need artificial intelligence to eliminate motion blur.

Theoretically, AI could be a good autoconfigurator for the pixel structure and persistence (“Hello AI, please duplicate this specific CRT’s characteristics, to the best of the limited abilities of this particular digital display. Thank you!”). It would essentially just generate configuration data with the most faithful parameters possible.

But the underlying science is just abstract configuration, both spatially (like MAME HLSL) and temporally (temporal extensions to HLSL). Before we apply AI, we should first nail it by formula.

I know how to program a temporal HLSL, via abstract math (no AI needed), as persistence-improving extensions to MAME HLSL, but it doesn’t put food on the table like helping manufacturers improve hardware motion blur reduction backlights.

I have GitHub items for temporal extensions to spatial CRT filters, one for RetroArch and one for MAME HLSL improvements.

Another way AI could help: we will have AI programmer assistants that can help us write custom software/functions that are a bit difficult to write. But auto-completing your skill gaps is not exactly the same thing...

Right AI Tool for the Right AI Job (Medium Term)
Now that being said, there's another area of AI: frame rate amplification of retro content. Perfect lagless interpolation of Sonic the Hedgehog to 240fps 240Hz, or even to 1000fps 1000Hz, with pixel-perfect faithfulness in blurless sample-and-hold. Now that is probably the right AI tool for the right AI job. It's another form of lowering persistence, but by frame rate amplification rather than by BFI. The trick is doing it arcade-faithfully, laglessly, and with pixel-perfect intermediate frames. Mathematically, it's possible for many 8-bit games, since a human can Photoshop perfect intermediate frames, and if a human can do that, then an AI can in theory do it instantaneously. The AI would play the game, learn the parallax reveal effects, and pretrain its frame rate amplification.

Training sets would be downloadable on a per-game basis, running the original ROM. Then the AI would frame-rate-amplify the 60fps to any frame rate of your choice, pixel-perfectly. The technology is here today: the RTX 3080 has enough computing power to frame-rate-amplify 60fps 8-bit retro games to 240fps pixel-perfectly, with the right AI training sets.

The holy grail of flawless “interpolation”/“extrapolation”/“reprojection”/whatever you call it — doesn’t matter if it’s done laglessly & pixel perfect. You could even move a ghost moving 2 pixels/60fps frame into 0.5 pixels/240fps frame, by double-pixelling the whole screen grid and generating those intermediate pixel positions, even floating-point pixel co-ordinates, even for objects going over 8-bit pixelated backgrounds.
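As a toy illustration of those floating-point intermediate positions, here is a hypothetical Python sketch (not any real interpolator) computing where a sprite would sit on each amplified refresh:

```python
def amplified_positions(x0: float, x1: float, factor: int):
    """Positions of a sprite across the intermediate refreshes when one
    60 fps frame step is amplified by `factor` (e.g. 4 for 240 Hz).

    A sprite moving 2 px per 60 fps frame moves 0.5 px per 240 Hz refresh;
    positions may be fractional (floating-point coordinates), to be resolved
    at render time by subpixel-aware sampling.
    """
    if factor < 1:
        raise ValueError("factor must be >= 1")
    step = (x1 - x0) / factor
    # One position per amplified refresh, ending exactly on the next frame.
    return [x0 + i * step for i in range(1, factor + 1)]

# Ghost at x=10 moving to x=12 over one 60 fps frame, amplified 4x for 240 Hz:
print(amplified_positions(10.0, 12.0, 4))  # [10.5, 11.0, 11.5, 12.0]
```

The hard part is of course not this arithmetic but generating the pixel-perfect intermediate image for each of those positions, which is where the ML would come in.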

CRTs don’t have digital pixel positions so analog pixel positions are a perfect fit for CRT too — so you could even do it hybrid too, if you hate flicker. Spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).

Frame rate amplification algorithms are a great fit for ML. It can't be done pixel-perfectly for complex material (3D graphics...), but it should be possible for a lot of 8-bit and 16-bit retro 2D material, from Pac-Man through Sonic the Hedgehog.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

       To support Blur Busters:
       • Official List of Best Gaming Monitors
       • List of G-SYNC Monitors
       • List of FreeSync Monitors
       • List of Ultrawide Monitors

Kamen Rider Blade
Posts: 38
Joined: 19 Feb 2021, 22:56

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Kamen Rider Blade » 25 Mar 2021, 14:48

Chief Blur Buster wrote:
23 Mar 2021, 20:25
CRTs don’t have digital pixel positions so analog pixel positions are a perfect fit for CRT too — so you could even do it hybrid too, if you hate flicker. Spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).
Is the hexagonal RGB subpixel arrangement on PC CRTs the reason that old PC CRTs allowed multiple resolutions to look good, compared to traditional scalers on LCDs?

https://en.wikipedia.org/wiki/Pixel#Subpixels

Would there be any way for a modern LCD panel to re-create the CRT level of multi-resolution scaling while maintaining the sharpness of modern LCD panels? Or would it require a brand-new subpixel arrangement?

If it needs a new subpixel arrangement, would a hexagonal pattern be useful, where each hexagon handles RGB completely by being subdivided into 6 isosceles triangles that handle RGB twice, or RGB + CMY, or some other subpixel color combo?

Chief Blur Buster
Site Admin
Posts: 9189
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 25 Mar 2021, 15:05

Kamen Rider Blade wrote:
25 Mar 2021, 14:48
Chief Blur Buster wrote:
23 Mar 2021, 20:25
CRTs don’t have digital pixel positions so analog pixel positions are a perfect fit for CRT too — so you could even do it hybrid too, if you hate flicker. Spatial could be kept as a CRT filter, and temporal could be frame rate amplification (instead of CRT flicker).
Is the hexagonal RGB subpixel arrangement on PC CRTs the reason that old PC CRTs allowed multiple resolutions to look good, compared to traditional scalers on LCDs?

https://en.wikipedia.org/wiki/Pixel#Subpixels

Would there be any way for a modern LCD panel to re-create the CRT level of multi-resolution scaling while maintaining the sharpness of modern LCD panels? Or would it require a brand-new subpixel arrangement?

If it needs a new subpixel arrangement, would a hexagonal pattern be useful, where each hexagon handles RGB completely by being subdivided into 6 isosceles triangles that handle RGB twice, or RGB + CMY, or some other subpixel color combo?
It's more the analog position of the electron beam than the hexagonal shadow-mask phosphor layout. Any pixel mask (shadow mask, aperture grille, a grid like LCD, an offset grid like arcade tubes, PenTile, even randomized) can support analog pixel positions with fuzzy beam-spot overlaps. Resolved resolution is simply a function of the density of the phosphor.

Subpixel-aware graphics scaling is already being done on many panels, such as OLED VR headsets. The original OLED Oculus Rift did the equivalent of CRT-style multi-resolution scaling. CRT-style subpixel scaling works wonderfully well at improving PenTile displays, but that arose out of necessity because of PenTile's low DPI.

You can program your own subpixel-aware scaler; the math is fairly simple. But most programmers don't really understand how to create a proper CRT filter or a subpixel-aware scaling algorithm (it's essentially an image version of ClearType).
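As a rough illustration of the ClearType-for-images idea, here is a hypothetical Python sketch of one row of a subpixel-aware scaler for an RGB-stripe panel (toy code, single intensity channel, no production-grade filtering):

```python
def sample_linear(row, x):
    """Linearly sample a 1-D row of intensities at fractional x (clamped)."""
    x = max(0.0, min(x, len(row) - 1.0))
    i = int(x)
    frac = x - i
    j = min(i + 1, len(row) - 1)
    return row[i] * (1 - frac) + row[j] * frac

def subpixel_scale_row(src_row, dst_width):
    """Scale one row to dst_width RGB pixels on an RGB-stripe panel.

    Instead of sampling the source once per destination pixel, sample it
    three times, at the physical center of each subpixel (R sits 1/3 pixel
    left of center, B 1/3 pixel right) -- the image analogue of ClearType's
    per-subpixel text rendering.
    """
    scale = len(src_row) / dst_width
    out = []
    for px in range(dst_width):
        center = (px + 0.5) * scale
        r = sample_linear(src_row, center - scale / 3)  # red stripe, left
        g = sample_linear(src_row, center)              # green stripe, middle
        b = sample_linear(src_row, center + scale / 3)  # blue stripe, right
        out.append((r, g, b))
    return out
```

Real implementations filter far more carefully and handle full-color sources per channel; this only shows the per-subpixel sample-position idea.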

MAME HLSL can do this too: you can create custom CRT masks and get analog-like scaling on LCDs/OLEDs. It doesn't have to be hexagonal. It looks better on HDR OLEDs, where the simulated phosphors can be made brighter to make up for the dimming caused by the black gaps between emulated phosphor dots.

It's not the best way to get retina resolution, but it gives you analog-like scaling. VR headsets have to do this out of necessity: you can tilt your head in any direction and the scenery inside has to counter-tilt to compensate, which requires clever scaling algorithms to prevent aliasing artifacts, especially on blown-up smartphone-style panels turned into IMAX-sized screens. So CRT-style subpixel scaling algorithms are already being used there.

Digital version of CRT scaling can be done via:
  • Overkill resolution, enough to emulate phosphor dots
  • Phosphor dotmask (alpha channel) reformatted to subpixel-aware for destination display
  • Scanlines mask (alpha channel) reformatted to subpixel-aware for destination display
  • Use any resolution image, and high-quality scale it separately, ideally with subpixel-aware scaling.
  • Mask the image with subpixel-aware scanlines mask; then
  • Mask the image with subpixel-aware dotmask
Subpixel-aware alpha channels can be done as 3 separate images on the R, G, B channels of the original image.
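A toy single-channel Python sketch of the masking steps above (real implementations such as MAME HLSL or RetroArch shaders do this per color channel on the GPU; all names here are illustrative):

```python
def apply_crt_masks(image, scanline_mask, dot_mask):
    """Multiply a pre-scaled grayscale image (list of rows, values 0-1) by a
    scanline mask (per-row gain) and a phosphor dot mask (per-pixel gain),
    tiling both masks across the image. The masks play the role of the
    alpha-channel masks above; a subpixel-aware version would keep one
    such mask per R/G/B channel of the destination display."""
    out = []
    for y, src_row in enumerate(image):
        row = []
        for x, v in enumerate(src_row):
            v *= scanline_mask[y % len(scanline_mask)]              # dark gaps between scanlines
            v *= dot_mask[y % len(dot_mask)][x % len(dot_mask[0])]  # phosphor dot grid
            row.append(v)
        out.append(row)
    return out

# 4x4 white image, alternating-line scanline mask, toy 2x2 phosphor pattern:
img = [[1.0] * 4 for _ in range(4)]
scan = [1.0, 0.25]               # every second line dimmed
dots = [[1.0, 0.5], [0.5, 1.0]]  # toy dot layout
result = apply_crt_masks(img, scan, dots)
```

Note how much light the masks throw away even in this toy case, which is exactly why the HDR headroom discussed below matters.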

This is pretty much simple resolution-independent CRT scaling emulation in a nutshell (without geometry like bow/pincushion/astigmatism, for simplicity). There is some low-hanging fruit that's easy to do with bitmap math operations (add/subtract/blend/xor/and/or two images together); the hard part, for best quality, is being subpixel-aware at both the source (dotmask) and the destination (ClearType-style logic, but on images). PenTile displays have long used subpixel-aware scaling algorithms, so MAME HLSL output to a PenTile display probably looks reasonably "analog-decent", as long as you've got an approximately 4x-to-8x oversampling factor (source:destination resolution ratio).

Many CRT filters blur the dots significantly to avoid the dimming effect, since the black gaps between simulated phosphor dots dim the image considerably and make it difficult to emulate accurately.

It's all stuff already being done today if you cherry-pick the hardware/software technology, just not on traditional things like the Windows desktop or PC games on desktop 2D panels. So it may be unfamiliar to some readers.

The main key is overkill resolution, sufficient brightness, and great color. Thus, bright retina 4K OLEDs make a great starting point for emulation of analog CRT scaling algorithms.

thatoneguy
Posts: 89
Joined: 06 Aug 2015, 17:16

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by thatoneguy » 27 May 2021, 11:04

If you have enough resolution to render the phosphor mask of, say, an FW900, then I suppose you could do that. I'm not sure how much resolution would be needed.
For a standard consumer CRT TV, for example, it can take above-8K resolution to render every phosphor.

As Chief puts it, however, overkill resolution is the brute-force solution to mimicking a CRT's variable-resolution behavior.
But with overkill resolution you could probably even emulate various dot pitches, I'm guessing. For example, a coarse dot pitch like those found in many arcade cabinets for old low-res games could be done in the future.

You could probably even program some kind of shader that changes TVLines based on the resolution of the content. E.g. in a game like Symphony of the Night for the PSX, you might want to simulate 240 TVLines for the 240p gameplay and 480 TVLines for the 480i menu, and you could do that if you program a shader that way.
You could even render all sorts of pixel shapes, including perfectly circular pixels like in LED matrices, and make them as big or as small as you want (within the limits of your own resolution).

With CRTs you take what you get whereas with shaders there's a lot of untapped potential to do all sorts of things that CRTs cannot.

Chief Blur Buster
Site Admin
Posts: 9189
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Would a universal ML based CRT shader be preferable to something like DLSS?

Post by Chief Blur Buster » 01 Jun 2021, 16:07

thatoneguy wrote:
27 May 2021, 11:04
But with overkill resolution you could probably even emulate various dot pitches
Yep. You can already configure this in MAME HLSL successfully and get pretty accurate variable dot pitch of many VGA CRTs with just a 4K display.
thatoneguy wrote:
27 May 2021, 11:04
You could probably even program some kind of shader that even changes TVLines based on the resolution of the content. E.G in a game like Symphony of the Night for the PSX you might want to simulate 240 TVLines for the 240p gameplay and 480 TVLines for the 480i Menu for example and you could do that if you program a shader like that.
Things will look worse that way. You need an oversampling factor of at least 2x to compensate for Nyquist scaling factors, to prevent it from looking odd (non-CRT-esque).

TVLines is an arbitrary horizontal-resolution measurement taken from a test pattern of vertical lines. TVLines has nothing to do with scanlines or vertical resolution! So 320 pixels wide at only 240 TVLines would look godawful. TVLines is simply a human-read number from an analog test pattern, and it would be a non sequitur to emulate it exactly. TVLines is never read by an electronic device; it comes from a test pattern card put in front of a camera (or from an analog test-pattern generator). It has no bearing on anything pixel-related (digital pixels). TVLines is an analog continuum.

What is measured as 240TVL is often still visible at 250TVL (with about 10% blurring) and visible at 230TVL (as crisply as at 240TVL). It's an analog continuum. So what a human measures as "240TVL" may very well be 242TVL (clearly visible lines), with an analog fade-off (blending to solid) up to about 284TVL. There are no exact step points in TVLines, which was a physical test-pattern card held up in front of a camera: vertical lines measuring horizontal resolution, unlike scan lines, which are horizontal lines creating vertical resolution.

BTW, a good related article, Making Of: Why Are TestUFO Display Motion Tests 960 Pixels Per Second?, disses old-fashioned test patterns and analog-era measurement methods!

So, get it out of your mind that TVLines makes any sense here. ;)
thatoneguy wrote:
27 May 2021, 11:04
Could even render all sorts of pixel shapes including perfectly circular pixels like in LED Matrices and make them as big or as little as you want(within the limits of your own resolution).
Yes, with overkill resolution.

The holy-grail CRT for console emulation is often a Sony PVM. Those are coveted on the used market, with people picking up old broadcast monitors for use with consoles.

One can emulate an NTSC PVM CRT to roughly retina levels with a desktop 4K OLED panel that has sufficient HDR headroom to compensate for the brightness loss caused by the black gaps between phosphor dots and by illuminating only primary colors on separate OLED pixels.
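As a back-of-the-envelope illustration of that HDR-headroom requirement (simplified arithmetic; real phosphor masks and panel brightness limiting behave less simply):

```python
def hdr_headroom_needed(mask_coverage: float, target_nits: float) -> float:
    """Peak brightness the lit subpixels must sustain so that a CRT-mask
    shader, which lights only `mask_coverage` fraction of the screen area,
    still averages `target_nits` across the whole picture.
    Illustrative only; ignores ABL, gamma, and mask softness."""
    if not 0 < mask_coverage <= 1:
        raise ValueError("coverage must be in (0, 1]")
    return target_nits / mask_coverage

# If the simulated phosphor dots cover 30% of the area and we want a
# 100-nit average picture, the lit dots must reach roughly 333 nits:
print(round(hdr_headroom_needed(0.30, 100.0)))  # 333
```

This is why the black gaps between emulated phosphor dots demand so much extra brightness, and why HDR OLEDs are the natural fit.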

Now, the FW900... that probably will indeed require an 8K HDR OLED to retina-emulate correctly. However, one can achieve a very good approximate facsimile of an FW900, with some creative adjustments, on a desktop or laptop 4K OLED, by permitting some slight subpixel defocusing to hide the resolution limitation.

For example, MAME HLSL contains a huge number of options that can look really kick-ass on a 4K OLED such as LG's HDTVs: brightening the image and hiding resolution limitations a bit by defocusing the subpixels to blend into each other, utilizing more of the panel's light-emission headroom, and masking some of the resolution limitation.
