blurfreeCRTGimp wrote: ↑23 Mar 2021, 16:50
You could train an AI to create a shader optimized to a specific panel's pixel structure, and kill both persistence blur and upscale blur in one go without a dev needing to specifically program their game.
That's putting the cart before the horse:
Eliminating motion blur is well-understood science to me, and you don't need artificial intelligence to eliminate motion blur.
Theoretically, AI could be a good autoconfigurator for the pixel structure and persistence (“Hello AI, please duplicate this specific CRT’s characteristics, to the best of the limited abilities of this particular digital display. Thank you!”). It could essentially just reprogram the configuration data with the most faithful parameters possible.
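Concretely, "autoconfiguration" here would just be parameter fitting. Purely as an illustration (made-up parameter names and a toy stand-in filter, not real MAME HLSL or RetroArch knobs), here's roughly what that boils down to in Python:

```python
# Sketch of an "AI autoconfigurator" as plain parameter optimization.
# Hypothetical parameter names; a real CRT shader exposes many more knobs.
import numpy as np

rng = np.random.default_rng(0)

def render_crt_filter(frame, persistence, mask_strength, bloom):
    """Toy stand-in for a spatial+temporal CRT filter (not real MAME HLSL)."""
    masked = frame * (1.0 - mask_strength * 0.5)
    glow = bloom * np.roll(frame, 1, axis=0)
    return (masked + glow) * persistence

# Pretend this came from a photograph/measurement of the target CRT.
source = rng.random((64, 64))
target = render_crt_filter(source, persistence=0.8, mask_strength=0.3, bloom=0.1)

best_err, best_params = np.inf, None
for _ in range(5000):                      # brute-force random search
    p = rng.random(3)                      # persistence, mask_strength, bloom
    out = render_crt_filter(source, *p)
    err = np.mean((out - target) ** 2)
    if err < best_err:
        best_err, best_params = err, p

print("best parameters found:", best_params, "error:", best_err)
```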
But the underlying science is just abstract configuration, both spatially (like MAME HLSL) and temporally (temporal extensions to HLSL). Before we apply AI, we should nail it by formula first.
I know how to program a temporal HLSL via abstract math (no AI needed), as persistence-improving extensions to MAME HLSL, but it doesn’t put food on the table the way helping manufacturers improve hardware motion-blur-reduction backlights does.
I have GitHub items for temporal extensions to spatial CRT filters, one for RetroArch and one for MAME HLSL improvements.
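To show what I mean by the temporal side being "just configuration", here's a rough back-of-napkin sketch (my own illustration in Python, not the actual shader code from those GitHub items) of expanding one 60Hz frame into rolling-scan subframes with phosphor-style decay. Persistence becomes just another number you tune:

```python
# Illustrative sketch only: turn one 60 Hz frame into N subframes for a
# 240 Hz display, emulating a rolling scan with exponential phosphor decay.
# Parameter names are made up; real shaders do this per-pixel on the GPU.
import numpy as np

def rolling_scan_subframes(frame, subframes=4, persistence=0.3):
    """Yield `subframes` images; each lights one horizontal band and lets
    previously lit scanlines decay like phosphor (persistence in 0..1)."""
    h = frame.shape[0]
    band = h // subframes
    accumulated = np.zeros_like(frame, dtype=float)
    for i in range(subframes):
        accumulated *= persistence            # decay what was lit before
        accumulated[i * band:(i + 1) * band] = frame[i * band:(i + 1) * band]
        yield accumulated.copy()

frame = np.random.default_rng(1).random((240, 320))
for i, sub in enumerate(rolling_scan_subframes(frame, subframes=4)):
    print(f"subframe {i}: mean brightness {sub.mean():.3f}")
```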
Another way AI could help: we will have AI programming assistants that can help us write custom software/functions that are a bit difficult to write. But auto-completing your skill gaps is not exactly the same thing as applying AI to the problem itself...
Right AI Tool for the Right AI Job (Medium Term)
Now, that being said, there’s another area of AI: frame rate amplification of retro content. Perfect lagless interpolation of Sonic the Hedgehog to 240fps 240Hz, or even to 1000fps 1000Hz, with pixel-perfect faithfulness in blurless sample-and-hold. Now that is probably the right AI tool for the right AI job. It’s another form of lowering persistence, but via frame rate amplification rather than via BFI. The trick is doing it arcade-faithfully and laglessly, with pixel-perfect intermediate frames. Mathematically, it’s possible for many 8-bit games, since a human can Photoshop perfect intermediate frames, and if a human can do that, then an AI can in theory do it instantaneously. The AI would play the game, learn the parallax reveal effects, and pretrain its frame rate amplification.
Training sets would be downloadable on a per-game basis, running the original ROM. Then the AI would frame-rate-amplify the 60fps output to an unlimited frame rate of your choice, pixel-perfectly. The technology is here today: the RTX 3080 has enough computing power to frame-rate-amplify 60fps 8-bit retro games to 240fps pixel-perfectly, with the right AI training sets.
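Purely as a toy illustration of why pixel-perfect amplification is plausible for 2D scrollers (a sketch that assumes the per-layer scroll speeds are already known or learned, not an actual shipped algorithm): once you know the layer velocities, intermediate frames are just the same tiles and sprites re-composited at interpolated offsets:

```python
# Toy sketch: amplify a 60fps horizontally scrolling 2D scene to 240fps by
# re-compositing layers at interpolated offsets.  Scroll speeds are assumed
# known -- that is the part an AI or the emulator itself would supply.
import numpy as np

def composite(background, sprite, bg_offset, sprite_x, sprite_y):
    """Scroll the background by bg_offset pixels and paste the sprite."""
    out = np.roll(background, -int(round(bg_offset)), axis=1)
    h, w = sprite.shape
    out[sprite_y:sprite_y + h, sprite_x:sprite_x + w] = sprite
    return out

def amplify(background, sprite, factor=4, bg_px_per_frame=2.0, spr_px_per_frame=1.0):
    """Generate `factor` subframes per original 60fps frame.
    Fractional offsets are rounded to whole pixels here; sub-pixel placement
    is sketched separately below."""
    frames = []
    for i in range(factor):
        t = i / factor                       # 0.0, 0.25, 0.5, 0.75
        frames.append(composite(background, sprite,
                                bg_offset=bg_px_per_frame * t,
                                sprite_x=10 + int(round(spr_px_per_frame * t)),
                                sprite_y=20))
    return frames

bg = np.random.default_rng(2).random((64, 128))
spr = np.ones((8, 8))
print(len(amplify(bg, spr)), "subframes generated per original frame")
```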
The holy grail of flawless “interpolation”/“extrapolation”/“reprojection”/whatever you call it: the name doesn’t matter, as long as it’s done laglessly and pixel-perfectly. You could even turn a ghost moving 2 pixels per frame at 60fps into 0.5 pixels per frame at 240fps, by double-pixelling the whole screen grid and generating those intermediate pixel positions, even floating-point pixel coordinates, even for objects moving over 8-bit pixelated backgrounds.
CRTs don’t have digital pixel positions, so analog (fractional) pixel positions are a perfect fit for CRT emulation too. You could even do it as a hybrid, if you hate flicker: keep the spatial side as a CRT filter, and handle the temporal side with frame rate amplification instead of CRT flicker.
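And a minimal sketch of the floating-point-coordinate idea (illustrative only, not from any existing emulator): a fractional x position can be approximated by splitting the sprite across two adjacent pixel columns with weights, which is exactly the kind of intermediate position a 0.5 pixel/frame move at 240Hz needs:

```python
# Minimal sketch of sub-pixel (floating-point coordinate) sprite placement:
# a fractional x offset is approximated by weighting two integer positions.
import numpy as np

def paste_subpixel(canvas, sprite, x, y):
    """Add `sprite` onto `canvas` at floating-point column position x."""
    x0 = int(np.floor(x))
    frac = x - x0                                  # e.g. 0.5 at 240 Hz steps
    h, w = sprite.shape
    canvas[y:y + h, x0:x0 + w] += sprite * (1.0 - frac)
    canvas[y:y + h, x0 + 1:x0 + 1 + w] += sprite * frac
    return canvas

ghost = np.ones((4, 4))
# a ghost moving 2 px per 60fps frame moves 0.5 px per 240fps subframe:
for step in range(4):
    frame = paste_subpixel(np.zeros((32, 32)), ghost, x=10 + 0.5 * step, y=8)
print("fractional positions rendered:", [10 + 0.5 * s for s in range(4)])
```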
Frame rate amplification algorithms are a great fit for ML. It can’t be done pixel-perfectly for complex material (3D graphics...), but it should be possible for a lot of 8-bit and 16-bit retro 2D material, from Pac-Man through Sonic the Hedgehog lore.