BurzumStride wrote:
> As it turns out, in some better optimised and less-demanding games it is ALREADY possible to hit not only 1000FPS, but well over 2000FPS.

I'm already able to run single-player Half-Life 2 at over 1000fps on my overclocked 1080 Ti in my overclocked 5 GHz system. But that's too low-detail. Instead, we want to use a "frame rate amplification technology" on Ultra-detail scenery, if possible.
BurzumStride wrote:
> However, I would hope that the incremental performance increases following each microarchitecture's release (with some optimisation from the game developers) would allow us to see something closer to 1000FPS (at low graphical settings) by the end of 2019's Ice Lake release.

With good frame rate amplification technologies, detail levels shouldn't need to be reduced.
A very good metaphor is video compression, and its progression from H.120 in 1984 all the way to today's H.265 -- the format now used in newer smartphones and for 4K Netflix.
Back in the 1970s-1980s, scientists weren't sure we could transmit TV-quality video through a 1.5 Mbps pipe (roughly the bandwidth then believed theoretically possible over a wideband phone line -- long before DSL modems were developed!). Compression technology was very rudimentary, and many scientists didn't believe we could get video below an average of 1 bit per pixel.
Uncompressed 4K video at 60 frames per second (24 bits per pixel, no blanking intervals) requires a bandwidth of 12 gigabits per second (1.5 gigabytes per second). Yet we now deliver it at barely above 12 megabits per second -- a 1000:1 compression ratio, and less than 0.03 bits per pixel -- which would be shockingly amazing to the video compression scientists of the early 1980s. The chips that decode H.265 video today are far more powerful than 1980s supercomputers, a testament to the sheer brute force we now carry in our pockets (smartphones).
Even lightly compressed video is still amazingly well compressed: 4K H.265 at a gentler 100:1 ratio (120 megabits per second -- often a bigger bitrate than some of the best movie theaters use) is still less than 0.3 bits per pixel, and becomes indistinguishable to the human eye from an uncompressed 4K stream. While not 1000 times smaller like Netflix, it's still 100 times smaller than uncompressed, yet looks uncompressed.
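The bandwidth and bits-per-pixel numbers above are easy to verify with a few lines of arithmetic (assuming 4K means 3840x2160 -- a sketch of the math, not part of any codec):

```python
# Worked numbers for the bandwidth claims above (4K = 3840x2160, 24-bit, 60 fps).
width, height, fps = 3840, 2160, 60
bits_per_pixel_raw = 24

pixels_per_frame = width * height                      # 8,294,400 pixels
raw_bps = pixels_per_frame * bits_per_pixel_raw * fps  # ~11.94 Gbit/s uncompressed

for compressed_mbps in (12, 120):                      # streaming vs light compression
    compressed_bps = compressed_mbps * 1_000_000
    ratio = raw_bps / compressed_bps                   # ~995:1 and ~99.5:1
    bpp = compressed_bps / (pixels_per_frame * fps)    # ~0.024 and ~0.24 bits/pixel
    print(f"{compressed_mbps} Mbit/s -> {ratio:.0f}:1 ratio, {bpp:.3f} bits per pixel")
```

Both results land under the 0.03 and 0.3 bits-per-pixel figures quoted above.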
How is this possible? Video compression standards use only a few full frames (I-Frames) and interpolate between them (P-Frames and B-Frames). H.264 video may contain only one I-Frame per second, filling in the frames in between with P-Frames and B-Frames. The intermediate frames simply "translate" (shift around, modify, etc.) the data from the fully rendered frames (the I-Frames).
This is done so amazingly seamlessly that we don't even see the interpolation artifacts of this H.264 technique. All we see is perfectly smooth 24fps movies, even if only one frame per second is a fully coded frame! Whether you're watching House of Cards (on Netflix) or Star Trek Discovery (on CBS All-Access), the video compression is delivering very few fully-encoded frames per second -- often just one -- and the decode process visually flawlessly fills in the rest, thanks to the magic of advanced predictive frames ("P-Frames" and "B-Frames").
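As a toy illustration of the ratio described above -- one fully coded frame per second, everything else predicted -- here is a made-up group-of-pictures (GOP) cadence; the "IBBP" pattern is illustrative only, not any specific encoder's output:

```python
# A toy GOP pattern: at 24 fps with one I-frame per second, only 1 of every
# 24 delivered frames is fully coded; the other 23 are predicted (P/B).
fps = 24
gop = ["I"] + (["B", "B", "P"] * 8)[: fps - 1]   # one I-frame, then B/B/P repeating
print("".join(gop))                               # IBBPBBP... (24 letters)

full_frames = gop.count("I")
predicted = len(gop) - full_frames
print(f"{full_frames} fully coded frame(s) per second, {predicted} predicted")
```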
Tomorrow -- with rapid progress over the coming decade -- this principle will move into three dimensions on the GPU, to help add extra full-Ultra-detail frames in between the real rendered frames. This is the breakthrough towards blurless sample-and-hold (strobeless ULMB) and 3D motion that looks more like real life: achieving low persistence via ultra-high frame rates without unobtainium GPUs. 1000fps@1000Hz in less than one human generation is coming!
H.265 migrates to coding tree units instead of macroblocks, but the principle is similar -- there are full frames at low frequency, and compression tricks interpolate between them. (This is different from realtime video-processing interpolation like Motionflow, which produces more artifacts; video compression uses motion-vector-aware techniques to do artifact-free "interpolation" strictly for the purposes of compression.) Obviously, too much compression creates blockiness, but even lightly compressed video (where you can't see compression artifacts) at the electronic cinema is still often 100+ times smaller than the uncompressed video.
Metaphorically, GPU/3D frame rate amplification technologies (translation, timewarping, etc.) have equivalents among the 2D video compression tricks (the I/B/P-Frame methodology):
- Instead of macroblocks (H.264) or coding tree units (H.265), you're working with 3D object units: textures, polygons and geometry.
- The GPU equivalent of video I-Frames (independent full frames) would be the fully rendered frames
Today: 45fps out of 90fps (and lots of artifacts)
Tomorrow: 100fps out of 1000fps (and no parallax artifacts!)
- The GPU equivalent of video P-Frames (predicted frames) would be the interpolated/translated frames (e.g. the frames between fully rendered frames).
The GPU won't do full polygonal rendering for most frames. But the game/drivers can still stream 6dof (six-degrees-of-freedom) data from the engine at 1000Hz to the GPU -- the intermediate frames may require less than 100 kilobytes of partial 6dof data -- in order for flawless/lagless interpolation to occur (frame rate amplification technology). Basically, real-time translate the geometry of the existing previous 3D scene accordingly at all depths/all planes/all layers, with full parallax compensation. This solves the obscure-and-reveal artifacts of the early timewarping/reprojection technologies appearing only now. Within a decade, we will likely have virtually artifact-free 3D interpolation, because you're (virtually flawlessly) interpolating in three dimensions using various forms of future advanced translation/timewarping technologies. Sufficient render overlap would allow enough translation room to eliminate translation artifacts -- pushing interpolation artifacts below the human-visibility floor, much like today's video compression does (when not overcompressed).
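The render/reproject split described above can be sketched in a few lines. This is a minimal skeleton under assumed rates (100Hz render, 1000Hz display); the render and warp functions are placeholders -- real implementations warp per-pixel using depth, which this sketch deliberately omits:

```python
# Sketch of "frame rate amplification" via 6dof pose streaming: fully render
# at 100 Hz, reproject the last rendered frame at 1000 Hz with fresh poses.
RENDER_HZ, DISPLAY_HZ = 100, 1000
AMPLIFICATION = DISPLAY_HZ // RENDER_HZ           # 10 displayed frames per render

def render_full_frame(pose):
    """Expensive full polygonal render (placeholder)."""
    return {"pose": pose, "color": "...", "depth": "..."}

def reproject(frame, new_pose):
    """Cheap depth/parallax-aware warp to a new pose (placeholder)."""
    return {"pose": new_pose, "warped_from": frame["pose"]}

def sample_pose(t):
    """6dof pose streamed from the engine at display rate (position only here)."""
    return (t * 1.0, 0.0, 0.0)                    # camera panning along x

displayed = []
for i in range(DISPLAY_HZ):                       # one second of output
    t = i / DISPLAY_HZ
    if i % AMPLIFICATION == 0:
        frame = render_full_frame(sample_pose(t))         # 100 real renders
        displayed.append(frame)
    else:
        displayed.append(reproject(frame, sample_pose(t)))  # 900 cheap warps

full = sum(1 for f in displayed if "color" in f)
print(f"{full} fully rendered, {len(displayed) - full} reprojected")
```

The point of the sketch: 90% of displayed frames cost only a warp, which is where the "less silicon per fps" win comes from.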
Sidetrack prediction: While frame rate amplification technologies are *strictly* computer/game stuff, it's possible that the GPU and video universes will merge eventually. A wild prediction: by about year 2050, future video codecs (e.g. H.268 or H.269) will probably be framerateless GPU codecs rather than traditional macroblock/coding-tree codecs, compressing video as true 3D geometry that can be mapped to any 2D plane (of any shape: pocket, smartphone, TV, wall) or any 3D display (of any type: stereo, holographic, VR, etc.) at any custom framerate. The video would be compressed with no frame rate, and the display could show it at any frame rate of your choosing. Cameras may even become timecoded photon cameras, so that video capture eventually becomes framerateless too. Who knows?
There is a lot of advanced research work ongoing in these industries, and if not all GPU manufacturers are doing it yet, several more of them will be very soon. Virtual reality forced that hand, and it's an innovation path that will take about ten years before true artifact-free 1000fps@1000Hz interpolation occurs from a 100fps source.
Complications can arise (e.g. the ultra-fast motion of teleporting from location X to location Y) -- much like complex movie scene cross-fades can confuse 2D video compression -- but those can be worked around (or frame rate can simply be allowed to dip during 1000Hz VRR whenever motion is too fast for successful frame rate amplification). That said, plain FPS speedrunning through dungeons and arenas should be fairly easy to frame-rate-amplify from 100fps to 1000fps, because the differences between adjacent frames are small enough to interpolate using future advanced 3D-geometry-aware translation algorithms. It won't be an easy journey for GPU manufacturers, but it is indeed technologically possible.
BurzumStride wrote:
> Higher framerates offer diminishing returns in terms of things like image persistence and motion clarity.

Yep. Diminishing points of return.
You need to make huge jumps -- 120fps->240fps->480fps->960fps -- just to halve the motion blur of the previous frame rate.
(Assuming the pixel response stayed away from becoming the bottleneck factor, of course)
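The halving math follows directly from sample-and-hold persistence equalling the frame time (1000/Hz milliseconds), which is why each halving of blur needs a doubling of frame rate:

```python
# Sample-and-hold persistence equals the frame time, so motion blur only
# halves each time the frame rate doubles: 120 -> 240 -> 480 -> 960 Hz.
for hz in (120, 240, 480, 960):
    persistence_ms = 1000.0 / hz
    print(f"{hz:4d} Hz -> {persistence_ms:.2f} ms persistence")
```

At 960Hz you're near 1ms persistence; the ~0.1ms target below needs roughly another order of magnitude.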
However, the trip to retina-VR pretty much ensures we need ~0.1ms persistence to pass the "Holodeck Turing Test" for fast-motion experiences. This is an incredibly difficult problem, since it will likely need to be solved through means other than brute force -- i.e. other than a 16K retina 180-degree display running at 10,000fps@10,000Hz. Peripheral vision resolution is less important, so as a workaround you might run higher Hz and higher detail only at the eye-gaze point (eye-tracked displays) instead of ultra-high Hz for the full screen.
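Rough arithmetic shows why eye tracking matters here. The numbers below are illustrative assumptions only -- a hypothetical "16K" panel, a 5% foveal region at full rate, and the periphery held at a modest 120Hz:

```python
# Pixel-throughput comparison: brute force vs an eye-tracked (foveated) display.
# All figures are illustrative assumptions, not real headset specs.
full_res = 15360 * 8640              # hypothetical "16K" field of view, in pixels
hz = 10_000

brute_force = full_res * hz          # entire field at full detail and full rate

foveal = int(full_res * 0.05) * hz   # 5% of the field at full detail/rate (gaze point)
peripheral = full_res * 120          # the rest at a modest 120 Hz
foveated = foveal + peripheral

print(f"brute force: {brute_force:.3e} pixels/s")
print(f"foveated:    {foveated:.3e} pixels/s ({brute_force / foveated:.0f}x less)")
```

Even with these generous assumptions, the foveated budget is more than an order of magnitude smaller -- which is why eye-tracked displays are a plausible workaround.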
Lots of creative experiments are being done by VR headset researchers, but it's already known that stupendously high quintuple-digit Hz is sometimes required for several use cases (see NVIDIA's 16,000 Hz AR demonstration). Such brute force certainly solves many problems.
BurzumStride wrote:
> The good news is that the performance required to draw each consecutive frame scales inversely with the framerate (each consecutive frame takes less time to render).

It can -- but not necessarily -- many factors complicate this.
That said, frame rate amplification technologies indeed aim to reduce the silicon cost per fps, and that's the big goal here -- one of the breakthroughs needed for future "blurless sample-and-hold displays" (1000fps @ 1000Hz) to arrive within less than one human generation.
Surprisingly, we hit 480Hz sooner than I expected, so I now view it as eminently realistic to reach 1000fps@1000Hz within a decade, thanks to VR forcing a new kind of innovation (Oculus' Timewarp was just the Wright Brothers beginnings!).