New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by Chief Blur Buster » 05 Oct 2017, 12:31

BurzumStride wrote:As it turns out, in some better optimised and less-demanding games it is ALREADY possible to hit not only 1000FPS, but well over 2000FPS:
I'm already able to run single-player Half Life 2 at over 1000fps on my overclocked 1080 Ti on my overclocked 5 GHz system. But that's too low-detail. Instead, we want to use a "frame rate amplification technology" on Ultra-detail scenery, if possible.
BurzumStride wrote:However, I would hope that the incremental performance increases following each microarchitecture's release, (with some optimisation from the game developers) would allow us to see something closer to 1000FPS (at low graphical settings) by the end of 2019's Ice Lake release.
With good frame rate amplification technologies, detail levels shouldn't need to be reduced.

A very good metaphor is video compression, and its progression from H.120 in 1984 all the way to H.265 today -- the format now used in newer smartphones and 4K Netflix.

Back in the 1970s-1980s, scientists weren't sure we could transmit TV-quality video through a 1.5 Mbps pipe (roughly the bandwidth then believed theoretically possible on a wideband phone line -- long before DSL modems were developed!). Compression technology was rudimentary, and many scientists didn't believe we could get video down to an average of less than 1 bit per pixel per frame.

Uncompressed 4K video at 60 frames per second (24 bits per pixel, no blanking intervals) requires a bandwidth of about 12 gigabits per second (1.5 gigabytes per second). Yet we have miraculously achieved the same thing at barely above 12 megabits per second -- a roughly 1000:1 compression ratio. That's less than 0.03 bits per pixel, which would have been shockingly amazing to video compression scientists of the early 1980s. The chips that decode H.265 video today are far more powerful than 1980s supercomputers -- a testament to the sheer brute force we now carry in our pockets (smartphones).

Even lightly compressed video is still amazingly well compressed: 4K H.265 at a gentler 100:1 ratio (about 120 megabits per second -- often a bigger bitrate than used in some of the best movie theaters) is still under 0.3 bits per pixel, yet becomes indistinguishable to the human eye from an uncompressed 4K stream. While not 1000 times smaller like Netflix, it's still 100 times smaller than uncompressed while looking like uncompressed.
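For anyone who wants to verify the arithmetic above, here is a minimal back-of-envelope check (the resolution, frame rate and bitrates are just the figures quoted in the preceding paragraphs):

```python
# Back-of-envelope check of the bits-per-pixel figures quoted above.
width, height, fps = 3840, 2160, 60                # 4K at 60 frames per second
pixels_per_second = width * height * fps

uncompressed_bps = pixels_per_second * 24          # 24 bits per pixel, no blanking
print(f"uncompressed: {uncompressed_bps / 1e9:.1f} Gbps "
      f"({uncompressed_bps / 8 / 1e9:.1f} GB/sec)")

for label, bitrate in [("Netflix-class ~12 Mbps", 12e6),
                       ("lightly compressed ~120 Mbps", 120e6)]:
    ratio = uncompressed_bps / bitrate
    bpp = bitrate / pixels_per_second
    print(f"{label}: ~{ratio:.0f}:1 compression, {bpp:.3f} bits/pixel")
# -> roughly 1000:1 at ~0.024 bits/pixel, and 100:1 at ~0.24 bits/pixel
```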

How is this possible? Video compression standards use only a few full frames (I-Frames) and predict/interpolate between them (P-Frames and B-Frames). H.264 video may have as little as one I-Frame per second, filling the frames in between with P-Frames and B-Frames. The intermediate frames simply "translate" (shift around, modify, etc.) the data from the fully coded frames (I-Frames).

This is done so seamlessly that we don't even notice the interpolation artifacts of this H.264 technique. All we see is perfectly smooth 24fps movies, even if only one frame per second is a fully coded frame! Whether you're watching House of Cards (on Netflix) or Star Trek: Discovery (on CBS All Access), the video compression is delivering very few fully-encoded frames per second -- often just one -- and the decode process fills in the rest, visually flawlessly, thanks to the magic of advanced predictive frames ("P-Frames" and "B-Frames").
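To make the I-Frame/P-Frame idea concrete, here is a toy sketch of block-based motion compensation -- not any real codec, just the principle: a predicted frame is rebuilt by copying blocks of a reference frame from shifted positions given by motion vectors, so only the vectors (plus a small residual, omitted here) need to be stored. The block size and variable names are invented for the example.

```python
import numpy as np

def predict_frame(reference, motion_vectors, block=16):
    """Toy motion compensation: rebuild a 'predicted' frame by copying each
    16x16 block from a shifted location in the reference frame, given by that
    block's motion vector (dx, dy) = where its content sits in the reference.
    Real codecs also add a small residual correction on top; omitted here."""
    h, w = reference.shape
    out = np.zeros_like(reference)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dx, dy = motion_vectors[by // block][bx // block]
            sy = min(max(by + dy, 0), h - block)   # clamp source block inside frame
            sx = min(max(bx + dx, 0), w - block)
            out[by:by+block, bx:bx+block] = reference[sy:sy+block, sx:sx+block]
    return out

# 64x64 test frame whose content all comes from 2 pixels to the right in the
# reference (i.e. the whole scene slid 2 pixels to the left between frames):
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
vectors = [[(2, 0)] * 4 for _ in range(4)]   # one (dx, dy) per 16x16 block
predicted = predict_frame(ref, vectors)      # only the vectors had to be stored
```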

Tomorrow (with rapid progress over a decade), this will move into three dimensions (the GPU), to add extra full-Ultra-detail frames in between the truly rendered frames. This is the breakthrough towards blurless sample-and-hold (strobeless ULMB) -- 3D motion that looks more like real life -- by achieving low persistence via ultra-high frame rates without unobtainium GPUs. 1000fps@1000Hz within less than one human generation is coming!

H.265 does migrate to coding tree units instead of macroblocks, but the principle is similar -- there are full frames at a low frequency, and compression tricks interpolate between them. (This is different from realtime video-processing interpolation like Motionflow, which produces more artifacts; video compression uses motion-vector-aware techniques to do artifact-free "interpolation" strictly for the purposes of compression.) Obviously, too much compression creates blockiness, but even lightly compressed video (where you can't see compression artifacts) at the electronic cinema is still often 100+ times smaller than the uncompressed original.

GPU/3D frame rate amplification technologies (translation, timewarping, etc.) actually have some metaphorical equivalents to 2D video compression tricks (the I/B/P-Frame methodology):
  • Instead of macroblocks or coding tree units (found in H.264/H.265), you're working with 3D object units: textures, polygons and geometry.
  • The GPU equivalent of video I-Frames (independent full frames) would be the fully rendered frames:
    Today: 45fps out of 90fps (and lots of artifacts)
    Tomorrow: 100fps out of 1000fps (and no parallax artifacts!)
  • The GPU equivalent of video P-Frames (predicted frames) would be the interpolated/translated frames (e.g. the frames between fully rendered frames).
    The GPU won't do full polygonal rendering for most frames. But the game/drivers can still stream 6dof (six-degrees-of-freedom) data from the engine at 1000Hz to the GPU -- the intermediate frames may require less than 100 kilobytes of partial 6dof data -- in order for flawless/lagless interpolation to occur (frame rate amplification technology). Basically, translate the geometry of the previous 3D scene in real time at all depths/planes/layers, with full parallax compensation (a rough sketch of this kind of depth-aware reprojection follows below). This solves the obscure-and-reveal artifacts of the early timewarping/reprojection technologies only now appearing. Within a decade, we will likely have virtually artifact-free 3D interpolation, because you're (virtually flawlessly) interpolating in three dimensions using various future forms of advanced translation/timewarping technology. Sufficient render overlap would allow enough translation room to eliminate translation artifacts -- and push interpolation artifacts below the human-visibility floor -- much as today's video compression does (when not overcompressed).
The incredible interpolation/predictive innovations that hit video compression in a visually lossless way are finally, slowly, migrating to the third dimension. The VR industry has begun working on this out of sheer necessity (timewarping, reprojection, etc.) and graphics card vendors have noticed.
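As a rough illustration of what such 3D "translation" could look like under the hood, here is a bare-bones depth-aware reprojection sketch, assuming we have one fully rendered frame, its depth buffer, and the small 6dof pose change to the intermediate frame: every pixel is unprojected into 3D, moved to the new camera pose, and reprojected. The intrinsics, pose and image here are made-up placeholders, and a real implementation would also fill the disocclusion holes (which is exactly where the multi-layer techniques discussed later come in).

```python
import numpy as np

def reproject(depth, color, K, pose_delta):
    """Toy depth-aware reprojection for one intermediate frame.
    depth:      HxW depth buffer from the last fully rendered frame
    color:      HxW rendered pixels (grayscale here for simplicity)
    K:          3x3 camera intrinsics
    pose_delta: 4x4 transform from the rendered camera pose to the new pose
    Returns the reprojected image (disocclusion holes are simply left at 0)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    # Unproject every pixel to a 3D point in the old camera's space ...
    pts = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))
    pts = pose_delta @ np.vstack([pts, np.ones((1, pts.shape[1]))])  # ... move to new pose
    proj = K @ pts[:3]                                               # ... and reproject
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    out = np.zeros_like(color)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (proj[2] > 0)
    out[v[ok], u[ok]] = color.reshape(-1)[ok]
    return out

# Hypothetical example: a small sideways head movement between rendered frames.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pose = np.eye(4); pose[0, 3] = 0.01            # 1 cm strafe to the right
depth = np.full((480, 640), 2.0)               # flat wall 2 m away
frame = np.random.rand(480, 640)
intermediate = reproject(depth, frame, K, pose)
```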

<FUTURIST>
Sidetrack prediction: While frame rate amplification technologies are *strictly* for computer/game stuff, it's possible that the GPU and video universes will merge eventually. A wild prediction: by about year ~2050, future video codecs (e.g. H.268 or H.269) will probably be framerateless GPU codecs rather than traditional macroblock/coding-tree codecs, compressing video as true 3D geometry that can be mapped to any 2D plane (of any shape: pocket, smartphone, TV, wall) or any 3D display (of any type: stereo, holographic, VR, etc.) at any custom framerate. The video would be compressed with no frame rate, and the display could present it at any frame rate of your choosing. Cameras may even become timecoded photon cameras, so that video capture also eventually becomes framerateless, too. Who knows?
</FUTURIST>


There is a lot of advanced research work ongoing in these industries, and if not all of the GPU manufacturers are doing it yet, several more of them will be very soon. Virtual reality forced that hand, and it's an innovation path that will take about ten years before truly artifact-free 1000fps@1000Hz interpolation occurs from a 100fps source.

Complications can arise (e.g. ultra-fast motion such as teleporting from location X to location Y) -- much like complex movie scene cross-fades can confuse 2D video compression -- but those can be worked around (or handled by simply letting the frame rate dip during 1000Hz VRR whenever motion is too fast for successful frame rate amplification). That said, for plain FPS speed-running through dungeons and arenas, 100fps->1000fps should ideally be fairly easy to frame-rate-amplify, because the differences between adjacent frames are small enough to interpolate easily using future advanced 3D-geometry-aware translation algorithms. It won't be an easy journey for GPU manufacturers, but it is indeed technologically possible.
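As a purely illustrative sketch of that "let the frame rate dip" fallback, something along these lines could sit in front of the amplifier -- the thresholds, pose format and function name are all invented for the example:

```python
import numpy as np

# Purely illustrative guard: skip frame rate amplification across a "cut"/teleport.
POSITION_JUMP_LIMIT = 0.5      # metres between consecutive rendered frames
ROTATION_JUMP_LIMIT = 30.0     # degrees between consecutive rendered frames

def safe_to_amplify(prev_pose, next_pose):
    """prev_pose/next_pose: (position xyz in metres, yaw-pitch-roll in degrees)."""
    pos_jump = np.linalg.norm(np.subtract(next_pose[0], prev_pose[0]))
    rot_jump = np.max(np.abs(np.subtract(next_pose[1], prev_pose[1])))
    return pos_jump < POSITION_JUMP_LIMIT and rot_jump < ROTATION_JUMP_LIMIT

# Ordinary running motion at 100fps -> amplify; a teleport -> fall back to native.
walk = (([0.00, 0, 0], [0, 0, 0]), ([0.05, 0, 0], [2, 0, 0]))
teleport = (([0, 0, 0], [0, 0, 0]), ([25.0, 0, 3.0], [180, 0, 0]))
print(safe_to_amplify(*walk))      # True  -> synthesize intermediate frames
print(safe_to_amplify(*teleport))  # False -> let frame rate dip on the VRR display
```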
BurzumStride wrote:Higher framerates offer diminishing returns in terms of things like image persistence and motion clarity.
Yep. Diminishing returns.
You need huge jumps -- 120fps->240fps->480fps->960fps -- just to halve the motion blur of the previous frame rate.
(Assuming pixel response doesn't become the bottleneck factor, of course.)
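For anyone who wants the numbers behind those jumps: on a sample-and-hold display, persistence equals the frame visibility time (1000/fps milliseconds), and eye-tracked motion blur scales directly with it, so each halving of blur needs a doubling of frame rate. A minimal calculation (the 960 pixels/second panning speed is just an example figure):

```python
# Sample-and-hold persistence and the motion blur it produces while eye-tracking.
panning_speed = 960         # example: object motion in pixels per second

for fps in (60, 120, 240, 480, 960):
    persistence_ms = 1000.0 / fps                        # each frame is visible this long
    blur_px = panning_speed * persistence_ms / 1000.0    # length of the blur trail
    print(f"{fps:>4} fps -> {persistence_ms:5.2f} ms persistence, ~{blur_px:4.1f} px of blur")
# Each doubling of frame rate halves persistence, and therefore halves the blur:
# 60fps ~16px, 120fps ~8px, 240fps ~4px, 480fps ~2px, 960fps ~1px.
```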

However, the trip to retina VR pretty much ensures we need ~0.1ms persistence to pass the "Holodeck Turing Test" for fast motion. This is an incredibly difficult problem, since it will likely have to be solved through means other than brute-forcing 0.1ms persistence via a 16K retina 180-degree display running at 10,000fps@10,000Hz. Peripheral vision resolution is less important, so you might have higher Hz and higher detail only at the eye-gaze point (eye-tracked displays) as a workaround to doing ultra-high-Hz for the full screen.

Lots of creative experiments are being done by VR headset researchers, but it's already known that stupendously high quintuple-digit Hz is sometimes required for several use cases (see NVIDIA's 16,000 Hz AR demonstration). Such brute force certainly solves many problems.
BurzumStride wrote:The good news is that the performance required to draw each consecutive frame, scales inversely with the framerate (each consecutive frame takes less time to render).
It can -- but not necessarily; many factors complicate this.

That said, frame rate amplification technologies do indeed aim to reduce the silicon cost per frame per second, and that's the big goal here -- one of the breakthroughs needed for future "blurless sample-and-hold displays" (1000fps @ 1000Hz) to arrive within less than one human generation.

Surprisingly, we hit 480Hz sooner than I expected, so I now view it as eminently realistic to reach 1000fps@1000Hz within a decade, thanks to VR forcing a new kind of innovation (Oculus' Timewarp was just the Wright Brothers beginning!).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


mello
Posts: 251
Joined: 31 Jan 2014, 04:24

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by mello » 23 Nov 2017, 10:16

Chief, I was just thinking... wouldn't this tech create a problem for both game devs and GPU manufacturers? First, game devs would need to program for high-fps scenarios (which means additional work); also, there are many games which perform poorly (broken physics, for example) at high fps. Second, if someone were able to play at 1000fps@1000Hz on a mid-range GPU, then in my opinion it would change the GPU market (the one we know today) in a major way. It would mean switching GPUs wouldn't be necessary, and you could use the same GPU for X number of years, even for newer games. GPU manufacturers would get fewer sales overall, and progress in GPU power (at least as far as gaming is concerned) wouldn't be that exciting and important anymore. Hell, this would probably affect the whole gaming market, including CPU, motherboard and RAM manufacturers, because there would be less necessity to upgrade. What is your take on this?

User avatar
masterotaku
Posts: 436
Joined: 20 Dec 2013, 04:01

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by masterotaku » 24 Nov 2017, 03:18

Your base fps would need to be high to get good interpolation. Base 30fps would create an artifact mess. And some people would prefer to have pure real fps.
CPU: Intel Core i7 7700K @ 4.9GHz
GPU: Gainward Phoenix 1080 GLH
RAM: GSkill Ripjaws Z 3866MHz CL19
Motherboard: Gigabyte Gaming M5 Z270
Monitor: Asus PG278QR

User avatar
lexlazootin
Posts: 1251
Joined: 16 Dec 2014, 02:57

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by lexlazootin » 24 Nov 2017, 04:02

Also it should affect physics, since it's only creating 'fake' interpolated frames, not real frames. Most developers know not to tie physics to FPS -- they were doing that all the way back in 1999 with Half-Life -- but some developers still develop for consoles, which stick to 30 or 60, and since they have time constraints they just tie it to fps. Most of them know better, but because of dumb things like this they do it anyway.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by Chief Blur Buster » 24 Nov 2017, 17:11

mello wrote:Chief, I was just thinking... wouldn't this tech create a problem for both game devs and GPU manufacturers? First, game devs would need to program for high-fps scenarios (which means additional work); also, there are many games which perform poorly (broken physics, for example) at high fps. Second, if someone were able to play at 1000fps@1000Hz on a mid-range GPU, then in my opinion it would change the GPU market (the one we know today) in a major way. It would mean switching GPUs wouldn't be necessary, and you could use the same GPU for X number of years, even for newer games. GPU manufacturers would get fewer sales overall, and progress in GPU power (at least as far as gaming is concerned) wouldn't be that exciting and important anymore. Hell, this would probably affect the whole gaming market, including CPU, motherboard and RAM manufacturers, because there would be less necessity to upgrade. What is your take on this?
While it's possible for Frame Rate Amplification Technologies (FRAT) to be a separate box after the GPU, I wouldn't recommend that due to input lag.

A very good geometry-aware FRAT will require a multi-layered Z-buffer, so it has full access to the graphics/textures/etc. behind objects (for parallax artifact removal), and that will require some insane integration between the GPU and the FRAT.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by Chief Blur Buster » 24 Nov 2017, 17:19

lexlazootin wrote:Also it should affect physics since it's only creating 'fake' interpolated frames, not real frames.
To be fair, what's fake and what's not fake is a little muddy these days even with video and 3D graphics.

For example, antialiasing, anisotropic filtering, and texture compression can often remove detail, while video enhancement and edge enhancement add "artificial" (fake) detail to fill in what is missing.

Also, much detail is lost to MPEG1, MPEG2, MPEG4, H.264, H.265, etc. compression (even in electronic cinema): they use motion-vector-interpolated "P-Frames" and "B-Frames" between roughly 1-frame-per-second "I-Frames". Yeah, the video you watch is already really massive amounts of interpolation between frames -- and if the compression ratio is high enough, it looks very "fake" (even without additional interpolated frames above and beyond the prescribed frame rate).

Even today, a high-bitrate MPEG4 stream interpolated 60fps->120fps looks "less fake" than a low-bitrate MPEG4 stream. Both have a lot of interpolation due to the nature of MPEG4 in a totally unavoidable manner.

The fact that players do the interpolation (P-Frames, B-Frames) as part of decoding eliminates a lot of interpolation artifacts, and mechanisms exist to prevent artifacts during parallaxing that are impossible to solve with externally added interpolators. This is a problem that FRAT can solve that H.264 cannot.

TL;DR: Every time you watch Netflix, more than 99% of the data is interpolated -- but very accurately (so it doesn't look fake) -- thanks to the magic of H.264 and H.265 compression, which is a truckload of interpolation mathematics.

FRAT needs to be very closely tied to the GPU -- much like how the interpolators in a video decoder are unavoidably tied closely to the H.264 stream. This is why native playback does not look fake (e.g. 60fps native playback), but adding decoder-unaware interpolation (24fps->60fps without knowledge of the H.264 stream) adds a lot of "Soap Opera Effect" fakeness artifacts that don't show in native high-framerate video. Even when the interpolator is decoder-aware, you still have missing parallax information (e.g. motion moving over backgrounds), which is still problematic.

Fortunately, while unsolvable for video streams, this can be solved with geometry-aware FRAT techniques such as multi-layer Z-buffers that keep multiple layers of textures (pixels behind pixels!) to allow more flawless reprojection of graphics behind graphics as things continue to parallax during FRAT. You must have geometry awareness for flawless parallax during FRAT.
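As a highly simplified sketch of the "pixels behind pixels" idea, here is a toy one-dimensional two-layer framebuffer: when the front surface parallaxes sideways during reprojection, the revealed pixels are filled from the stored second layer instead of being guessed. The two-layer structure, array names and numbers are invented purely for illustration (and the distant layer is treated as static for simplicity).

```python
import numpy as np

# Toy two-layer framebuffer: layer 0 = nearest surface, layer 1 = what is behind it.
W = 8
front_color = np.array([9, 9, 9, 9, 1, 1, 1, 1])   # a foreground object covers the right half
front_depth = np.array([5, 5, 5, 5, 2, 2, 2, 2])   # the foreground is nearer (depth 2)
back_color  = np.array([9, 9, 9, 9, 9, 9, 9, 9])   # second layer: the wall behind everything
back_depth  = np.array([5, 5, 5, 5, 5, 5, 5, 5])

def reproject_1d(shift):
    """Shift the *front* layer sideways (as a camera strafe would) and fill the
    disoccluded pixels from the stored back layer instead of smearing/guessing.
    The far wall is approximated as static (it parallaxes far less)."""
    out = np.full(W, -1)                       # -1 marks "not yet filled"
    for x in range(W):
        if front_depth[x] < back_depth[x]:     # a distinct foreground sample exists here
            nx = x + shift
            if 0 <= nx < W:
                out[nx] = front_color[x]       # foreground parallaxes by the full shift
    holes = out == -1
    out[holes] = back_color[holes]             # reveal the hidden layer -- no artifacts
    return out

print(reproject_1d(2))   # foreground moved right; revealed pixels come from the back layer
```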

I imagine there will be some performance/memory cost (say, 10% or 20%) to keep rendering compatible with FRAT (e.g. using a multi-layer Z-buffer or other advanced techniques), which then returns that cost as a massive framerate increase (an order of magnitude, e.g. 100fps -> 1000fps) in nearly flawless FRAT ability.

That's the goal of highly accurate FRAT. Uncannily accurate artifact-free interpolation.
lexlazootin wrote:Most developers know not to tie physics to FPS -- they were doing that all the way back in 1999 with Half-Life -- but some developers still develop for consoles, which stick to 30 or 60, and since they have time constraints they just tie it to fps. Most of them know better, but because of dumb things like this they do it anyway.
Correct.

Yes, physics should be FRAT-friendly. (FRAT = Frame Rate Amplification Technology)

So physics can more easily convert from 100fps to 1000fps using future reprojection/timewarping/interpolation techniques.

You might get polygoning for things like curved movements (e.g. a 60-sided polygonal arc instead of a perfect curve) when doing FRAT interpolation. But if you make FRAT fully geometry-aware and tell it "I want to run this algebra formula on this baseball motion", then it will successfully add intermediate frames with 1000 different baseball positions over the period of 1 second -- for 1000fps@1000Hz. These can be algebraic (a customizable math formula attached to physics objects, much as PhysX can do: X, Y, Z and acceleration values in 3D can create curved motion), and editors can make it easy (e.g. a curve editor in a level editor, or like a vector drawn in Adobe Photoshop).

There's an infinite number of intermediate positions that way, making physics framerate-independent, and if that data is accessible to the theoretical FRAT section of future GPU silicon, then the FRAT can in theory include these kinds of physics in its reprojection/interpolation -- at least for the most important/near-range/large objects that need to undergo frame rate amplification.
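A minimal sketch of what "attaching a formula to a physics object" could mean in practice -- the function and parameters below are hypothetical, not any existing engine API: the physics system hands the amplifier an analytic trajectory (simple ballistic motion here), so intermediate frames can sample the exact curve at 1000Hz even though the physics tick only runs at 100Hz, with no polygonal corners.

```python
import numpy as np

def baseball_position(t, p0, v0, g=np.array([0.0, -9.81, 0.0])):
    """Analytic ballistic trajectory attached to a physics object:
    position(t) = p0 + v0*t + 0.5*g*t^2  -- valid between two physics ticks."""
    return p0 + v0 * t + 0.5 * g * t * t

p0 = np.array([0.0, 1.5, 0.0])        # release point (metres)
v0 = np.array([30.0, 8.0, 0.0])       # release velocity (metres/second)

# The game's physics tick runs at 100 Hz ...
physics_times = np.arange(0, 1.0, 1 / 100)
# ... but the frame rate amplifier can sample the very same formula at 1000 Hz,
# giving 1000 distinct, exactly-on-curve ball positions per second (no corners).
amplified_times = np.arange(0, 1.0, 1 / 1000)

physics_positions = np.array([baseball_position(t, p0, v0) for t in physics_times])
amplified_positions = np.array([baseball_position(t, p0, v0) for t in amplified_times])
print(len(physics_positions), len(amplified_positions))   # 100 vs 1000 samples per second
```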

So there could be some performance considerations to make sure rendering/physics is FRAT-compatible. In this situation, it becomes more acceptable to take a theoretical 5%, 10%, or 25% GPU rendering performance penalty to gain the ability to enable a second piece of FRAT silicon that achieves a 1,000% frame rate increase without needing to decrease detail level.

To prevent performance bottlenecks, some compromises may occur: where strafing can now be made flawless with geometry-aware FRAT, you might have a limitation affecting a 1000-particle firecracker spray during a FRAT situation (100fps converted to 1,000fps) -- the spark spray may stay untouched at its original 100fps native framerate (just reprojected globally), or the spark motion might "go polygonal" (e.g. corners in its arcs every 1/100sec). That won't be a hugely noticeable artifact compared to the parallax artifacts (e.g. incorrectly rendered revealed backgrounds) common in low-quality interpolation.

There are many solutions to handle FRAT issues for things like spark showers, complex 1000-debris-piece explosions, etc. -- many achievable with relatively unnoticeable artifacts. Also, polygoning at 100fps (e.g. a curveball over 1 second having 100 "micro-corners") is probably not very objectionable for fast-moving non-straight-line objects. What I do know is that parallax errors (faked background reveals) are several orders of magnitude bigger a problem -- difficult to solve in video interpolation, but more easily solvable with GPU 3D.

Whatever approach they ultimately use will be incremental progress towards ever better FRAT, much like how MPEG1 became MPEG2 became H.264 became H.265 (at the same fixed bitrate) -- better and better looking, and less fake/artifacty.

Right now, Oculus reprojection has a lot of parallax problems (artifacts during strafing, for example), and this is one of the biggest, most noticeable artifacts of Oculus' early FRAT (45fps->90fps timewarping/reprojection) -- but this will be mostly solvable (over the long term) with future FRAT that is well integrated with the GPU. Once this nasty artifact is virtually eliminated, the straightaway is wide open for larger-magnitude FRAT (e.g. 100fps->1000fps instead of just 45fps->90fps) before other artifacts become as hugely noticeable. This is the breakthrough waiting in the wings over the next few years -- fully geometry-aware FRAT technologies.

In other words -- all solvable problems, incrementally.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


open
Posts: 223
Joined: 02 Jul 2017, 20:46

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by open » 28 Nov 2017, 18:10

I think the PC market will provide a lot of pressure for FRAT. High-end monitors sell for a ton, and people generally don't like to turn their graphics settings down, but will want to make at least some FRAT use of their refresh rates.

The console market is an unknown. Console gamers are becoming more aware of the benefits of high fps, but generally it's a mass-marketed, slowly evolving product. It may take 3-6 years longer than PC for the console ecosystem to create strong financial incentives for FRAT. Next-gen VR consoles will certainly push for FRAT in a few years, though. Hopefully by then PCs will be using 480Hz+ monitors, and some standard multi-platform libraries and techniques can be developed for FRAT, maybe even spawning some hardware optimizations at the same time if the GPU companies have the foresight.

Would be very nice to see some refined forms of FRAT, but in my mind it will likely be many years from now, mainly due to the barrier of SOFTWARE development. Game companies need more $$ to use new techniques, and it's less common for them to put the extra work in. Generally, things like this require multiple parts of the industry to come together and write the libraries / release the tutorials, so that it's all free to use and less time-intensive for developers. Oculus is less than optimal, but at least they did all the work and it works with every Oculus game. Now we need the companies to work together and make it better, with the code+DLLs working together.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by Chief Blur Buster » 29 Nov 2017, 14:27

I would bet that major innovative parties (NVIDIA, Intel, Microsoft, Oculus, etc.) will come up with a FRAT technology and provide an easy API at the driver level, with only minimal modifications needed to most games. Possibly "profiles" could be implemented in graphics drivers (e.g. like they do for 3D Vision, or SLI, or CrossFire) to make older games support it too. Definitely not under the FRAT name, but under their own brand name.

....Aside: They will come out with a special brand (timewarping, reprojection, 3D interpolation, or whatever lingo you like) -- we just call the whole umbrella "F.R.A.T." = "Frame Rate Amplification Technology". Or if the industry prefers, perhaps "Frame Rate Acceleration Technology" -- either way, it turns out to be the same FRAT acronym....

If you're a bookie taking bets, I'll bet on NVIDIA by approximately ~2020-ish, and a possible brand could be "ULMB 2" or "G-SYNC 2"
(strobeless ULMB -- flickerless ULMB -- blurless sample-and-hold). With 1000Hz+FRAT, you're literally combining the benefits of ULMB, G-SYNC and flickerless sample-and-hold:
-- ULMB blurless look (because persistence is so low: 1ms refresh visibility times)
-- G-SYNC stutterless look (because refresh cycle granularity is so fine, stutters become invisible)
-- Flickerless sample-and-hold (eye-friendly, since you're not strobing a backlight)
All simultaneously, with no need to choose a mode!

Either way, NVIDIA has already done crazy stuff like 16,000Hz augmented reality, and they're the inventors of LightBoost of Blur Busters lore (now they're onto ULMB), so FRAT is a totally natural progression for them -- and it will likely happen within one or two more "upgrade cycles" (3-5 year cycles).

Nonetheless, I encourage all parties to race towards the 10:1 FRAT finish line (100fps -> 1000fps amplification). There are already 1000Hz laboratory displays (Viewpixx, etc.) that FRAT can be tested on today; homebrew/indie 1000Hz by 2020, and brand-name 1000Hz by 2025. Homebrew/indies (Zisworks, cirthix) beat the brand names to both 240Hz (2013) and 480Hz (2017), and I fully expect homebrew/indie 1000Hz to arrive before brand-name 1000Hz.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


open
Posts: 223
Joined: 02 Jul 2017, 20:46

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by open » 30 Nov 2017, 13:22

Yeah, people sometimes rag on NVIDIA, but I've been a true fan since childhood. They have always gone the extra mile to help developers in many ways.

Realistically, though, to get a decent FRAT system working, it will need to work within the code of the games themselves. The FRAT needs to know how to interpolate the objects, and even to be set up with multiple layers that would be rendered differently than without FRAT. There's only so much of this that can be done at the driver or graphics library level. Companies will have to put in the extra time to organize new code differently or to reorganize the old code. Any way you look at this, it's going to take resources: a good senior programmer, or a lot of extra testing, time and bug fixing. When game companies are already in crunch on most projects, and there is financial pressure from within or from outside via publishers, it's hard to get these kinds of things going. It will be in only a small portion of titles until the time is taken to develop effective standard methods, make them well known, and incorporate them into engines. I would expect VR to continue to push it more than any other sector, because given the performance requirements and the benefits of high framerate, it has huge value. Also, it's a developing area, so companies expect to have to develop new code anyway. I think your time window is maybe a year or two ahead of where I would place mine, based on experience in the software industry, but we have to keep in mind that FRAT already exists. It will be a continuous spectrum of increasing frame rate amplification technology over the next many years. Looking forward to it.

BattleAxeVR
Posts: 44
Joined: 14 Dec 2017, 11:38

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Post by BattleAxeVR » 14 Dec 2017, 13:29

The cynic in me thinks it's unlikely that big GPU manufacturers would back this type of shortcut to improving framerates, as it would cannibalize their sales.

For example, why pay for a 1080 ti that can hit an average of 50 fps on a 4K game, which then gets interpolated to 60 fps by some additional chip, when you can just buy a 1050 which can only manage 30 fps but has the same type of chip?

See the conflict of interest here? It's an incentive to not upgrade, or when you do upgrade to a new gen GPU, you pick a much cheaper model.

Sure, some people will pay more and turn off interpolation because they prefer "real frames" (if you work in games you realize how not real most of the frames are. E.g. temporal antialiasing and reprojection already synthesize "fake frames" that you see all the time from prior frames).

But the smart bet is on them not being interested in providing this service, as it will ultimately hurt their bottom line. Heck, there are PS4 games which only try to hit 60 fps for VR games, and rely on the reprojection to hit 120 fps for display in the headsets. If you can do that for 60 -> 120 you can do it for 30 -> 60 or 45 -> 60 as well, and if the quality is decent enough, who cares if the frames are real or fake? It's academic and a pedantic exercise. Modulo interpolation artifacts, which are being reduced all the time.

The latest Pixelworks interpolation engine on recent projectors like the Optoma UHD65 is apparently nearly 100% artifact-free. And latency is of course something people are also working on. But if you can hit, say, 120Hz, then one or two frames of added latency is only 8-16ms, which is not very much. Currently these engines add 100ms or more of latency, but that figure is likely to drop over time.

What I write here is not appropriate for virtual reality, of course, which demands a constant maximum framerate, at least in terms of reprojecting to the current head orientation (and eventually eye focus), if not actually showing updated animations and scene content too. And that will likely go into custom hardware instead of additional compositor passes as they are now, so perhaps they will open this up for non-VR games too eventually -- but I actually think they have a financial incentive not to do that.

Or they'll hold off allowing it for 2D games, because "what FPS can it do" is what sells GPUs, in the gamer space at least.

One thing I do hope for is that G-Sync will perish thanks to HDMI 2.1 supporting Adaptive Sync standard VRR.

Xbox One X having variable refresh rate support could make the case for TV manufacturers to add this feature to their "gamer" friendly lines, and eventually to all TVs since it'll be embedded in commodity controllers which get passed around / reused between manufacturers.
