New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers & Advanced Display Articles on Blur Busters. The masters on Blur Busters.
User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 15 Dec 2017, 00:08

Possibly, but I think there will still be strong economic pressures in the direction of FRAT. It will just take up to a decade to bear itself out.

Really good FRAT will still need lots of horsepower and silicon. You may still need to consume, say, 50% of a powerful future Titan to convert 100fps to 1,000fps. It may initially be a high-end-only feature.

Given sufficient GPU capability, it may even be accomplished by game engines instead -- increasingly improved versions of Oculus timewarping -- until AMD or NVIDIA one-ups the other by introducing GPU/driver-level FRAT.

There may be unconventional algorithms in play, too. Heck, maybe even realtime framerateless beamtracing algorithms that can be denoised to any frame rates, instead.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


open
Posts: 223
Joined: 02 Jul 2017, 20:46

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by open » 16 Dec 2017, 12:04

BattleAxeVR wrote:The cynic in me thinks it's unlikely that big GPU manufacturers would back this type of shortcut to improving framerates, as it would cannibalize their sales.

For example, why pay for a 1080 ti that can hit an average of 50 fps on a 4K game, which then gets interpolated to 60 fps by some additional chip, when you can just buy a 1050 which can only manage 30 fps but has the same type of chip?

See the conflict of interest here? It's an incentive to not upgrade, or when you do upgrade to a new gen GPU, you pick a much cheaper model.

Sure, some people will pay more and turn off interpolation because they prefer "real frames" (if you work in games you realize how not real most of the frames are. E.g. temporal antialiasing and reprojection already synthesize "fake frames" that you see all the time from prior frames).

But the smart bet is on them not being interested in providing this service, as it will ultimately hurt their bottom line. Heck, there are PS4 games which only try to hit 60 fps for VR games, and rely on the reprojection to hit 120 fps for display in the headsets. If you can do that for 60 -> 120 you can do it for 30 -> 60 or 45 -> 60 as well, and if the quality is decent enough, who cares if the frames are real or fake? It's academic and a pedantic exercise. Modulo interpolation artifacts, which are being reduced all the time.

The latest Pixelworks interpolation engine on recent projectors like the Optoma UHD65 is apparently nearly 100% artifact free. And latency is of course something people are working on as well. But if you can hit, say, 120Hz, then one or two frames of added latency is only 8-16ms, which is not very much. Currently these engines add 100ms or more of latency, but that figure is likely to drop over time.

What I write here is not appropriate for virtual reality of course, which demands max constant framerate at least in terms of reprojecting to the current head orientation and eventually eye focus, if not actually showing updated animations and scene content too. And that will likely go into custom hardware instead of additional compositor passes as they are now, so perhaps they will open this up for non-VR games too eventually, but I actually think they have a financial incentive to not do that.

Or they may hold off allowing it for 2D games, because "what FPS can it do" is what sells GPUs, at least in the gamer space.

One thing I do hope for is that G-Sync will perish thanks to HDMI 2.1 supporting Adaptive Sync standard VRR.

Xbox One X having variable refresh rate support could make the case for TV manufacturers to add this feature to their "gamer" friendly lines, and eventually to all TVs since it'll be embedded in commodity controllers which get passed around / reused between manufacturers.
1050 at 30 fps versus 1080 Ti at 50 fps seems a bit off. This is a necessary tech for high refresh rate displays. I hope G-SYNC doesn't go away. It's better in many ways, which you can read about on this forum. Personally, it's better because of the lower input lag hit alone.

We're talking about techs that have huge performance requirements. Being able to use those refresh rates and techs is going to be a big incentive for buyers. It's doubtful that people on the high end of NVIDIA's price-profiling scheme (now paying up to $3000 for increased gaming GPU performance) are going to want a low-tier card where they will inevitably have to sacrifice quality at some level.

BattleAxeVR
Posts: 44
Joined: 14 Dec 2017, 11:38

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by BattleAxeVR » 18 Dec 2017, 18:57

I own a G-SYNC monitor, and yes, it's pretty great, but its raison d'être is that game engines, having a variable workload per frame, have variable frame rendering times, so it's better to slave the display to the GPU than the other way around and deal with stuttering and all the other V-sync problems we all know and love.

The reason I wrote what I did is from what I learned as a VR developer, namely, that a constant, perfectly synced, max framerate is necessary to avoid motion sickness. This fact makes VRR tech inappropriate for VR. Then I started to think to myself: well, why is it acceptable for 2D games to have variable smoothness? It's unnatural. It's certainly (much) better than V-sync, but both are a hack to compensate for the engine not being able to sustain max FPS at all times.

This is the problem that FRAT is the ideal tech to solve.

It's not about the specific numbers between 1050 vs 1080 Ti or whatever, that was just an example. Variable refresh rate is better than constant refresh rate for variable frames per second, but variable frames per second is itself inferior to constant frames per second. And it's very hard to reach super high, constant frames per second, the higher the resolution and refresh rate you are talking about. It's a war that cannot be won. Not only that, but it's inefficient to render that many unique frames, since the higher the native framerate, the better interpolation works. (less magnitude error = less obvious artifacts). And there are interpolation engines now which are very good, almost artifact-free.

It's approaching a "solved problem" at least in the 120Hz TV range. We're just talking about displays with much better refresh rates.

And current interpolation chips can barely even do 4K24 to 4K60, at best. Videogames, especially on consoles, have often sacrificed peak FPS for constant but much lower FPS, like 60 or 30fps (on consoles), in order to get better-looking pixels. But we can have both.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 19 Dec 2017, 09:49

Yes, variable refresh rate is a great fix for variable frame rates.

Yes, constant framerate is really ideal for VR.

In theory, VRR tech isn't necessarily 100% inappropriate for VR if it is low-persistence VRR with constant lag (not variable lag). Ideally via overkill framerates, e.g. VRR between 250Hz and 1000Hz for 1/250sec-to-1/1000sec persistence. However, you could also use strobed VRR, so you can simply use lower refresh rates instead to achieve the same persistence. Indeed, it would be far less motion-sicky than stutter.

(Those who understand persistence, know you either need strobing or overkill Hz to achieve ultra-low persistence.)

But this may be rendered mostly moot when Frame Rate Amplification Tech (FRAT) starts successfully achieving 1000fps laglessly & artifactlessly. And 1000Hz almost behaves like VRR, because persistence varies only in 1ms increments (refresh cycle granularity is so fine, you can more easily vary the persistence). Games modulating 30fps through 200fps (say, when FRAT is turned off) on a 1000Hz display would look darn near exactly like a G-SYNC display, since the refresh cycle granularity is so fine that it essentially behaves like VRR.
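To make that granularity argument concrete, here is a small sketch (plain Python, illustrative numbers only, not measurements) of the worst-case wait a just-missed frame incurs on a fixed-Hz display. At 1000Hz the quantization error shrinks to roughly 1ms, which is why an unlocked frame rate on a fixed 1000Hz display starts to resemble VRR.

```python
# Sketch: how late a fixed-Hz display can show a frame, compared to ideal
# VRR-style presentation at the moment the frame becomes ready.
# Assumed numbers are illustrative only.

def worst_case_presentation_error_ms(refresh_hz: float) -> float:
    """A frame that just misses a refresh cycle waits up to one full cycle."""
    return 1000.0 / refresh_hz

for hz in (60, 144, 240, 1000):
    print(f"{hz:>5} Hz fixed refresh: frame shown up to "
          f"{worst_case_presentation_error_ms(hz):.2f} ms late vs. VRR")

# 16.67 ms at 60Hz, 6.94 ms at 144Hz, 4.17 ms at 240Hz, 1.00 ms at 1000Hz.
# At 1000Hz the quantization error is only ~1 ms, so varying frame rates
# (30fps through 200fps) look nearly as smooth as on a true VRR display.
```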

EDIT

___________

A Blur Busters Holiday 2017 Special Feature

Coming sooner than expected within our lifetimes in the 2020s:
Image

Read more:
Blur Busters Law: The Amazing Journey To Future 1000Hz+ Displays
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 23 Feb 2019, 21:14

New Frame Rate Amplification Tech:
NVIDIA's new Deep Learning Super Sampling (DLSS)

There are many terminologies for various kinds of frame rate amplification technologies.
  • Asynchronous Space Warp
  • Deep Learning Super Sampling (DLSS)
  • Interpolation (especially AI interpolation)
  • Etc.
NVIDIA's new "Deep Learning Super Sampling" (DLSS) on the RTX 2080 Ti is a new form of frame rate amplification technology!
It is not really based on classic interpolation.

And the Oculus VR feature that converts 45fps to 90fps adds no lag for my VR.

Frame rate increasing techniques (ones that shortcut doing a full-resolution GPU render for every frame) can be non-interpolation-based and can be near lagless.

Frame rate amplification technologies (F.R.A.T.) is the genericized umbrella terminology I've invented for everything that increases frame rates without doing full GPU renders for all frames, e.g. rendering shortcuts that intelligently reuse previous frames.

They also work best when they're well-integrated with the rendering workflow (e.g. access to the Z-buffer, access to geometry data, access to 1000Hz controller data). That's what makes newer F.R.A.T. systems like NVIDIA DLSS smarter, less laggy, less black-box, and lower in artifacts than classic interpolation of yesteryear.

Blur Busters is a huge believer in FRAT (Frame Rate Amplification Technologies) in the current slow Refresh Rate Race to future Retina Refresh Rates, even if the tech may take a decade or two to achieve universally cheap quad-digit frame rates.

They use extremely different methods to create results.
  • Some use a lower-resolution base and upscale (thanks to internal AI knowledge of what a high resolution frame looks like)
  • Some fill in missing frames using AI algorithms to prevent lag
  • Some fill in missing frames using reprojection shortcuts, to prevent look-forward lag
  • Some require both look-forward and look-behind with a rudimentary math formula (classic interpolation)
  • Weird combinations of two or more of the above
  • Other future techniques not yet invented
There are some emerging algorithms that can later scale to 10:1 performance (e.g. 1000fps generated from a GPU that normally delivers only 100fps), possibly via a co-CPU or co-silicon dedicated to frame rate amplification. So this will be an important enabler of ultra-Hz in the next human generation.
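As a back-of-envelope sketch of what a 10:1 amplification schedule could look like (a hypothetical pacing loop, not any shipping driver's logic), one full GPU render could feed every tenth output refresh, with reprojection filling the other nine:

```python
# Hypothetical 10:1 frame rate amplification schedule: 100fps of full GPU
# renders driving a 1000Hz display. The render/reproject calls below are
# stand-ins, not a real GPU or driver API.
OUTPUT_HZ = 1000               # display refresh rate
FULL_RENDER_FPS = 100          # what the GPU can actually render fully
RATIO = OUTPUT_HZ // FULL_RENDER_FPS    # 10:1 amplification ratio

def full_render(tick):
    return f"full frame @ tick {tick}"                    # placeholder full render

def reproject(last_full, tick):
    return f"reprojection of [{last_full}] @ tick {tick}"  # placeholder FRAT step

last_full = None
for tick in range(30):          # 30 output refreshes = 30 ms of display time
    if tick % RATIO == 0:
        last_full = full_render(tick)       # 1 of every 10 frames is a true render
        frame = last_full
    else:
        frame = reproject(last_full, tick)  # 9 of every 10 frames are amplified
    # present(frame) would go here; omitted in this sketch
```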

Since witnessing the results of all of these, I've become a huge believer in F.R.A.T. becoming important for "cheap 1000fps on cheap 1000Hz" in ten or twenty years from now. It's necessary for achieving cheap ultra-Hz in a very slow, gradual refresh rate race to future retina refresh rates, now that we've almost maxed out resolution and dynamic range. With maxed-out resolutions comes increased incentive to boldly milk where nobody has milked before -- like continuing to work towards cheaply maxing out refresh rates.

4K displays were laboratory-only curiosities in year 2000 before the five-figure-priced IBM T-221 desktop monitor came out. Now, less than twenty years later, 4K is purchased at Walmart for less than you paid for your 27" RCA CRT tube in year 1985.

The refresh rate race will be slower than the resolution race, but it will be a continuous pressure for the rest of the century. Almost every reputable scientist, display researcher, and vision researcher now confirms the journey is long, because of all the indirect effects like stroboscopic artifacts and persistence blur, and because we're nowhere remotely near retina refresh rates.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 23 Mar 2019, 15:01

New Frame Rate Amplification Tech:
Render Some Frames Full Resolution, And Render In-Between Frames At Lower Resolution

New paper about a frame rate amplification technology (FRAT):
https://www.cl.cam.ac.uk/research/rainbow/projects/trm/

Basically render some frames as high resolution, but other frames as low resolution.

Use the high resolution frames to help enhance the low resolution frames.

This is a very clever technique that will be useful along the long term Refresh Rate Race to Retina Refresh Rates (1000Hz Journey!)

"Our technique renders every second frame at a lower resolution to save on rendering time and data transmission bandwidth. Before the frames are displayed, the low resolution frames are upsampled (improved with intelligence of knowing the earlier high-resolution frames) and high resolution frames are compensated for the lost information. When such a sequence is viewed at a high frame rate, the frames are perceived as though they were rendered at full resolution."

The future of frame rate amplification technologies is really exciting and covers a huge universe of research that I have been paying attention to for several years.

Many Frame Rate Amplification Tech (FRAT) tricks roughly resemble the "do some fully, do others partially" workflow of video macroblock codecs (I-frames, P-frames) -- the classic MPEG1, MPEG2, MPEG4, H.264, H.265, etc. In your smartphone videos, your Netflix and your YouTube streams, only a few video frames are actually fully coded frames.

With macroblock video -- predicted frames in between real frames -- the rest of the frames are predicted and "filled in" based on those full frames. It's such a refined art that humans don't even notice at good compression ratios, despite there being approximately 1 second between those fully-coded frames -- at least when the video is not overcompressed.

This will even apply to real-time ray tracing (GeForce RTX, DirectX Raytracing DXR, etc), which needs real-time denoising. Having more rays in some frames will make it easier to denoise less-raytraced frames, and saving rays in others means more framerate is achievable with real-time raytracing! So the "do some fully, do some partially" workflow is potentially very universal, regardless of traditional rendering, real-time raytracing, or future rendering workflows.

This is very different from the way Oculus Rift converts 45fps to 90fps via the Asynchronous Space Warp algorithm. But the concept is very roughly similar in the sense that a frame rate is multiplied via "filled-in" in-between frames that are not a full-resolution GPU render -- even if via a very different technique!

The great news is that these algorithms are not necessarily mutually exclusive! It's possible to stack the techniques (2x -> 2x = 4x) or use a combined algorithm. The techniques could potentially be combined to keep increasing frame rate amplification ratios from today's 2x to 10x, ever more artifactlessly! Especially if the intervals between full-resolution frames remain very brief, few humans will notice the issues.

I now fully anticipate that many FRAT tricks will combine in the future -- utilizing such principles -- to eventually achieve the 10:1 amplification ratios needed (100 frames per second of full GPU renders generating 1000fps for 1000Hz displays) sometime in the 2020s-2030s.

If humans often cannot notice those predicted frames in macroblock video despite 1 second in between -- then I fully anticipate that humans will easily live with well-predicted frames lasting only 0.01 seconds (at 100 frames per second of full-resolution GPU renders) -- once the various frame rate amplification technologies are optimized to an even more visually lossless and lagless state, and put directly into dedicated fast silicon (FRAT-specific hardware).

Then the GPU is not a limiting factor for 1000Hz monitors!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 04 Apr 2019, 20:50

New Frame Rate Amplification Tech:
Asynchronous Space Warp 2.0 by Oculus Rift

Today, Oculus just released the new Asynchronous Space Warp 2.0 algorithm which actually now uses the depth buffer for improved parallax correction, to greatly reduce artifacts of frame rate amplification.

Also, it seems that it is now capable of frame rate amplification ratios beyond 2:1 as it reportedly will work with frame rates of less than 45fps for the 90Hz VR headset. Some tests will need to be made in the near future!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 05 Apr 2019, 00:31

Today, presenting a new article derived from this thread.

Image

Frame Rate Amplification Technologies (FRAT)
More Frame Rate With Better Graphics For Cheaper



Hope you enjoy reading this article!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


John2
Posts: 8
Joined: 14 Jun 2020, 17:56

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by John2 » 14 Jun 2020, 18:16

Chief Blur Buster wrote:
04 Oct 2017, 12:01
BurzumStride wrote:You guys made a valid point about producers meeting the demands of the average consumer. Before reading this I have not even dreamed of seeing non-pixelated 1000hz 1000FPS anytime soon, but with framerate amplification technologies' improved GPU frame output, the high hertz approach could appeal to a broader crowd than BlurBusters and Input lag purists such as myself (I officially coin that term haha).
Four years ago, I did not even dare think 1000fps at 1000Hz was going to be realistic within our lifetimes.

Now I've realized it's become very realistic in less than 10 years at least for high-end gaming monitor territory. Experimental 1000Hz displays are currently running in laboratories around the world, and some are actually now being sold for laboratory use (ViewPixx 1440Hz DLP) -- and since successful homebrew 480 Hz happened (Making of story) -- we now see 1000fps@1000Hz (lagless & strobeless ULMB!) becoming a reality within a decade.

Blur Busters coverage will become increasingly loud in the coming few years, to help compel GPU and monitor manufacturers to work towards this goal. We'll probably reach the point where we'll begin gently shaming the websites that say 240 Hz and 480 Hz are not important -- there are many of those. :D It's necessary for the holy grail of strobeless ULMB and blurless sample-and-hold, reaching closer and closer to a real-life display that has no motion blur above and beyond human eye limitations.

It'll factor into our upcoming monitor tests, when we publish "minimum persistence without strobing" benchmarks in a prominent part of our upcoming new monitor-reviews format. The only way manufacturers can reduce persistence (MPRT) without strobing is via higher Hz, getting closer and closer to the "blur-free sample-and-hold" holy grail.

Mathematically:
120fps at 120Hz non-strobed LCD = minimum possible MPRT/persistence is 8.33ms
240fps at 240Hz non-strobed LCD = minimum possible MPRT/persistence is 4.16ms
480fps at 480Hz non-strobed LCD = minimum possible MPRT/persistence is 2.1ms
1000fps at 1000Hz non-strobed LCD = minimum possible MPRT/persistence is 1ms

This assumes pixel response is not the limiting factor. The more squarewave you can make LCD GtG pixel response (closer to the behavior of traditional blur-reduction strobing), the more closely the MPRT measurement scales linearly with refresh rate.

Today, the only way manufacturers achieve 1ms MPRT (not GtG) is via strobing. When you see a manufacturer mention "MPRT" along with "1ms", that's the strobed measurement.
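The arithmetic behind those numbers is simply the frame visibility time of sample-and-hold; a minimal reproduction of the table (values here rounded to two decimals, so 4.17 and 2.08 rather than the 4.16 and 2.1 quoted above):

```python
# Minimum possible MPRT (persistence) on a non-strobed sample-and-hold display,
# assuming GtG pixel response is not the limiting factor: each frame stays
# visible for one full refresh cycle.
def min_mprt_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (120, 240, 480, 1000):
    print(f"{hz}fps at {hz}Hz non-strobed: minimum MPRT = {min_mprt_ms(hz):.2f} ms")
# Prints 8.33, 4.17, 2.08 and 1.00 ms respectively.
```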
BurzumStride wrote:Granted, I do not know much about the frame-time limitations, and how difficult it may be to get over the final 4.17ms to 1ms gap due to things like the time it takes for the CPU and GPU to communicate etc, so please correct me if I am wrong.
My feeling is that it is probably okay to put a small amount of input lag (e.g. 2ms) into the whole pipeline if there's a huge benefit such as converting 100fps->1000fps.

Ideally, the fully rendered frames should be delivered laglessly, with the additional frames inserted laglessly in between.

To help reduce artifacts, the engine & GPU can communicate partial data about intermediate frames (not for rendering, but for better reprojection). You might have only 100 position updates per second, but the game engine could deliver 1000 low-resolution geometry positions per second (e.g. collision-box granularity), with the GPU doing lagless geometry-aware interpolation via various tricks (multilayer Z-buffers and other depth buffers) to allow successful artifact-free occlusion-reveals (objects behind objects) during lagless interpolation techniques such as time warping, reprojection, etc. Lots of researchers are working on this as we speak, and probably lots more in secret laboratories at places like NVIDIA or AMD.

Eventually, we'll have detailed enough buffers to allow things like artifact-free object rotations and parallax effects (artifact-free obscure/reveal effects) during some future form of frame rate amplification technology. There are many ways to do this with less GPU horsepower than a full, complete, polygonal scene re-rendering.

That's what researchers are doing, thanks to virtual reality making it critical. Any company that does not do this risks falling behind and losing shareholder money (as the VR and eSports industries rapidly grow beyond them over the next 10 years).

Single-frame-drop stutters are only mildly bothersome on a computer monitor, but can cause sensitive people to actually puke (real barf) during virtual reality, so completely stutter-free operation is essential in making virtual reality mainstream, especially as more queasy people begin wearing VR headsets.

And some people cannot wear today's VR because of 90Hz flicker (so the only way to get low persistence without flicker is insane frame rates). Over time, VR needs to become more and more 'perfectly stutter-free and real', solving a lot of problems at once (motion blur, stutters, input lag, etc). Both AMD and NVIDIA have only recently begun to realize this. Give them 10 years, and we'll have plenty of dedicated silicon directly on the GPU for solving the problem of higher frame rates, laglessly & artifactlessly, without needing full GPU re-renders for every single frame.

It will soon become a priority project at GPU/monitor companies once all the technological jigsaw pieces fall into place: it's beginning to happen. Besides, going towards insanely high Hertz is currently the only way to pass fast-motion Holodeck Turing Tests anyway -- the "Wow, I didn't know I was wearing a VR headset instead of transparent ski goggles" test, where people cannot tell a VR headset apart from real life. This has to be achieved without motion blur and without strobing.

As explained earlier, going ultra-high-Hz (or using analog motion; going framerateless) is the only way to get closer to a true, real-life display -- because real life doesn't strobe, real life doesn't flicker, real life does not force extra motion blur above your vision limits, and real life does not have a frame rate. So ultra-high-Hz is the only practical technological path out of the VR uncanny valley (and it's still difficult).

So, as a result, GPU manufacturers are forced into researching frame rate amplification technologies -- with obvious spinoff applications to cheap 1000fps@1000Hz in a decade or so -- including on desktop gaming monitors (not just VR).
BurzumStride wrote:I would love to be able to predict when we might expect to get 1000fps stable in most games
Two workarounds:

(A) Use a lower frame rate for stability, and framerate-amplify that instead.
We don't necessarily need 1000fps stable for frame rate amplification -- we can just do a lower number stable, such as 50fps, 100fps or 200fps. Once stability is achieved, frame rate amplification is stable the rest of the way. 100fps stable can be "frame rate amplified" to 1000fps stable (whether by interpolation, timewarping, reprojection, or other artifact-free geometry-aware lagless technology).

(B) Use variable refresh rate even at 1000Hz
If randomness is still a problem at these levels (even a 1ms object mis-position can still be a visible microstutter in VRR), variable refresh rate can still be used in the 1000 Hz stratosphere. VRR (FreeSync, G-SYNC) can eliminate visibility of random stutter as long as the randomness is perfectly synchronous with gametimes. Random frame visibility times are completely stutterless as long as human visibility time is perfectly in sync with gametimes, by virtue of random edge-vibrations simply blending into motion blur. 90fps->112fps->93fps->108fps->91fps->118fps->104fps->95fps random frametimes look just like perfect VSYNC ON 100fps@100Hz, by virtue of the variable refresh rate technology.

Image
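A quick arithmetic check of that example (the listed rates are the post's illustrative numbers): the sequence averages out to roughly 100fps, which is why, on VRR, it reads like steady 100fps@100Hz.

```python
# The random per-frame rates from the example above. On VRR, each frame is
# shown as soon as it is ready, so the effective rate is total frames / total time.
fps_sequence = [90, 112, 93, 108, 91, 118, 104, 95]

frame_times_ms = [1000.0 / fps for fps in fps_sequence]
average_frame_time = sum(frame_times_ms) / len(frame_times_ms)
effective_fps = 1000.0 / average_frame_time
print(f"average frame time: {average_frame_time:.2f} ms")   # ~9.96 ms
print(f"effective rate:     {effective_fps:.1f} fps")        # ~100 fps
```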

As long as the game is very "VRR perfect", it's theoretically possible to have a VRR-compatible frame rate amplification technology. It's much harder probably, but it's not mathematically impossible to combine frame rate amplification technology and VRR simultaneously.

Theoretically, VRR may eventually become unnecessary well above 1000fps, or when we gain better foveal "ultra-high-Hz-at-eye-gaze" rendering tricks. But VRR is still useful at 480Hz and 1000Hz, as even a 1ms microstutter is still human-eye-visible, especially in VR. During 8000 pixels/second eye tracking on a 1-screen-width-per-second 8K VR headset, a single 1ms (1/1000sec) framedrop turns into an 8-pixel microstutter. Small, but still visible to the human eye.
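The 8-pixel figure is just tracking speed multiplied by timing error; a tiny worked example with the numbers from the paragraph above:

```python
# Microstutter size = eye-tracking speed (pixels/second) x timing error (seconds).
eye_tracking_px_per_sec = 8000    # ~one 8K screen width per second
timing_error_sec = 1 / 1000       # one dropped refresh cycle at 1000Hz (1 ms)

microstutter_px = eye_tracking_px_per_sec * timing_error_sec
print(f"{microstutter_px:.0f}-pixel microstutter")   # 8 pixels: small, but visible
```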

We already know VRR still remains useful in the 1000Hz league. It's low-persistence VRR without strobing.
BurzumStride wrote:If I understand the logic correctly, the jump from generating 240FPS (~4.17ms) to 1000FPS (1ms) will only require a ~3.17ms frame-time decrease. Seeing how the the jump from 60 to 120 frames already required a 8.3ms decrease in frame-time, shouldn't the final ~3.17ms step towards 1000 frames be relatively easy?
It doesn't get easier; the mathematical difficulty is something else entirely.

From a "keep things artifact-free to the human eye" perspective, it's theoretically easy if you have geometry-awareness and full artifact-free reveals (parallax / rotate / obscure / reveal effects). Human eyes can still notice things being wrong, even if briefly, as the animation below demonstrates well:

A very good animation demo of this effect is http://www.testufo.com/persistence -- it's very pixellated at 60Hz, doubles in resolution at 120Hz, quadruples in resolution at 240Hz, and octuples in resolution at 480Hz during our actual 480Hz tests.


Turn off all strobing (turn off ULMB), use ordinary LCD, and then look at the stationary UFO, and then look at the moving UFO.

That's always a full-resolution photograph being scrolled behind slits. The pixellation is caused by the sheer lowness of the refresh rate producing limited obscure-and-reveal opportunities. This horizontal pixellation artifact is caused by low Hz, and it is still a problem even at 240Hz.

GPUs will need to be able to avoid this type of artifact -- obscure/reveal artifacts. Frame rate amplification technologies will need to be depth-aware/geometry-aware, with sufficient graphical knowledge of what is behind objects.

It doesn't just apply to vertical lines! It also applies to single side-scrolling objects in front of a background -- parallax side effects -- and artifacts around the edges of objects, which currently occur with today's VR reprojection technologies during 45fps->90fps operation.

Or even random obscure/reveals such as seeing through a bush, like running through dense jungle. The limited Hz produces limited obscure/reveal opportunities. Running through a dense jungle and trying to identify objects behind dense bush is much easier in real life. This is because real life has an analog league of infinite continual obscure-reveal opportunities that flicker in and out. It all blends better than a limited-Hz display can manage. The only way to get closer to real life in this respect is ultra-high Hz. Frame rate amplification technologies will need to handle this sort of thing in at least an acceptable manner (far better than today's reprojectors).

Ordinary interpolators won't successfully guess the proper obscure/reveal effects in between real frames.

But, tomorrow, depth-aware / geometry-aware interpolators/reprojectors/timewarpers can potentially fill in the proper obscure-reveal effects (at least "most of the time"), so you avoid strange effects such as reduced resolution.

Researchers are working on ways to solve this type of problem, so that reprojection/interpolation can be done laglessly with even fewer artifacts -- by skipping full GPU renders for even 80% or 90% of frames -- and using frame rate amplification technologies instead.
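To make the obscure-and-reveal problem concrete, here is a minimal depth-aware reprojection sketch in NumPy (illustrative only; not Oculus' timewarp/ASW nor any vendor's implementation). Parallax is derived from the Z-buffer, and the returned mask marks exactly the newly revealed pixels a reprojector has no data for and must somehow fill in.

```python
# Minimal depth-aware reprojection sketch (one FRAT building block), under
# simplified assumptions: pure horizontal camera translation, pinhole camera.
import numpy as np

def reproject(color, depth, cam_dx, focal_px):
    """Warp a rendered frame to a slightly translated camera position.

    color    : (H, W, 3) frame already rendered by the GPU
    depth    : (H, W) per-pixel depth from the Z-buffer (larger = farther)
    cam_dx   : sideways camera movement since that render (world units)
    focal_px : focal length in pixels
    Returns the warped frame plus a mask of disocclusion holes
    (newly revealed areas the reprojector has no pixel data for).
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)

    # Screen-space parallax: nearby pixels shift more than distant ones.
    shift = np.round(focal_px * cam_dx / depth).astype(int)

    ys, xs = np.mgrid[0:h, 0:w]
    new_x = xs + shift
    valid = (new_x >= 0) & (new_x < w)

    # Scatter pixels to their new locations. With duplicate targets, whichever
    # write lands last wins; a real implementation would use a depth test.
    out[ys[valid], new_x[valid]] = color[valid]
    filled[ys[valid], new_x[valid]] = True
    return out, ~filled   # holes = the obscure-and-reveal regions to fill in

# Toy scene: a red "near" object (depth 1) in front of a far wall (depth 10).
color = np.zeros((4, 6, 3), dtype=np.uint8)
color[:, 2:4] = [255, 0, 0]
depth = np.full((4, 6), 10.0)
depth[:, 2:4] = 1.0
warped, holes = reproject(color, depth, cam_dx=0.02, focal_px=100)
print(holes.any())   # True: moving the camera reveals pixels that were never rendered
```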
BurzumStride wrote:Granted, I do not know much about the frame-time limitations, and how difficult it may be to get over the final 4.17ms to 1ms gap due to things like the time it takes for the CPU and GPU to communicate etc, so please correct me if I am wrong.
Currently, this isn't the main wall of difficulty at the moment.

There may still be enforced lag from the communications, but the key is that the real frames (e.g. 100fps) would be delivered with no lag relative to today. The difficulty is inserting extra frames laglessly, which is possible with some kinds of reprojection technologies. The even bigger difficulty is inserting extra frames without interpolation artifacts -- by improving the depth-awareness / geometry-awareness of the interpolation technology. Then, assuming you had proper motion vectors (which can continue at 1000Hz from the PC), you can still continuously reproject the last rendered frame, with proper "obscure-and-reveal" compensation.

That's the specific technological breakthrough researchers are currently working on -- and it will allow large-ratio frame rate amplification, such as 10:1 ratios (e.g. 100fps -> 1000fps frame rate amplification). Reliable geometry-awareness / depth-awareness -- with obscure-and-reveal compensation during reprojection -- will be key to this, and probably germane to the invention of "simultaneously blurless and strobeless" screen modes (blurless sample-and-hold) without needing unobtainium GPUs.
BurzumStride wrote:At the moment, in games like Battlefield 1 it is still very difficult to hit stable 240FPS even with an overclocked 7700k
This is very true. But you can:
1. Cap to a lower frame rate instead for stability and then framerate-amplify from there.
2. Use variable refresh rate during 1000Hz, and use a variable-framerate-aware reprojection algorithm.
So you can do either (1) or (2) or both.

Problem solved, assuming minor modifications to the game engine to give motion-vector hinting to the frame rate amplification technology (GPU silicon).

Developers probably still want to send roughly 1000 telemetry updates per second to the GPU (basically 6DoF telemetry, probably at less than 10 or 100 kilobytes per interpolated frame) -- to help the geometry-aware reprojector avoid mis-guessing motion vectors, and to avoid the back-and-forth jumping effects sometimes seen in online gameplay during erratic latencies. The information that helps frame rate amplifier technologies might simply be depth-buffer data or low-resolution hitbox geometry, and the frame rate amplification technology (reprojector) does the rest based on data from the last fully-rendered frame. And the last fully rendered frame might need, say, 10% more GPU rendering (not a biggie) to allow extra texture caching to occur, to accommodate obscure-and-reveal compensation in succeeding reprojected frames. A small GPU cost, to allow good frame rate amplification ratios (e.g. 5:1 or 10:1).
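Purely for illustration, here is one hypothetical shape such per-tick hints could take; every name and field below is made up, and no real engine or driver API is implied.

```python
# Hypothetical 1000Hz "hint" packet an engine could stream to a frame rate
# amplifier, as speculated above. Entirely illustrative; not a real API.
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class FratHint:
    gametime_us: int                # microsecond-accurate gametime stamp
    camera_position: Vec3           # 6DoF telemetry: position...
    camera_rotation: Vec3           # ...and orientation (Euler angles, for brevity)
    object_positions: List[Vec3] = field(default_factory=list)  # hitbox-level granularity

    def size_bytes(self) -> int:
        # 8-byte timestamp + 3 floats x 4 bytes per vector
        return 8 + 12 * (2 + len(self.object_positions))

hint = FratHint(gametime_us=16_667,
                camera_position=(0.0, 1.7, 0.0),
                camera_rotation=(0.0, 90.0, 0.0),
                object_positions=[(1.0, 0.0, 5.0)] * 50)
print(hint.size_bytes(), "bytes per tick")   # 632 bytes: well under the ~10 KB budget above
```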

To accommodate errors in stability, you can timecode everything with microsecond-accurate gametimes, and then simply make sure refresh visibility times stay in sync with gametimes (by using 1000Hz VRR-compatible frame rate amplification technology) and make sure all the reprojected frames are in perfect gametime-sync with predicted eye-tracking positions. Then errors/fluctuations in communications, rendertimes, CPU processing, etc, will be rendered invisible (or mostly invisible), as long as object positions stay within microseconds of refresh-cycle visibility times, even if everything is lag-shifted by a few hundred microseconds to de-jitter "engine-to-monitor" pipeline-flow erraticness (many GPU drivers already do that today to help frame pacing, and this is often increased much further when running in SLI mode). There are many technological workarounds to compensate for erraticness, none insurmountable -- what's important is that gametime stays in really good sync with refresh cycle visibility times. If gametime is erratic, then allow refresh cycles to become synchronously erratic to match, to avoid stutter (in the common traditional "G-SYNC reduces stutters" fashion) -- it still works in the kilohertz refresh leagues -- and it still keeps the blurless sample-and-hold (strobeless ULMB/LightBoost) holy grail -- it isn't mutually exclusive.

It can also be a completely different algorithm than what I'm dreaming of.

There are actually huge numbers of ways to potentially pull this off (and we don't know who plans to do what approach) -- but I know this for sure: researchers are currently working on this problem, indirectly thanks to billions of dollars being spent on virtual reality research & development -- and technology has finally caught up to making this feasible in the not-too-distant future.

_____

Oculus' timewarping (45fps->90fps) is a very good step towards true lagless geometry-aware frame rate amplification technology. Today, it's a big breakthrough.

But ultimately, eventually it'll only be just a Wright Brothers airplane. Tomorrow's algorithms will be hugely far more advanced, allow much bigger ratios (10:1 frame rate amplification) and with virtually artifact-free obscure-and-reveal capability.

All of this will be very, very good towards making 1000fps @ 1000Hz practical by the mid-2020s at Ultra-league detail on a three-figure-priced ordinary GPU.
Hello, layman here who needs things explained in layman's terms so I can understand.

"Mathematically:
120fps at 120Hz non-strobed LCD = minimum possible MPRT/persistence is 8.33ms
240fps at 240Hz non-strobed LCD = minimum possible MPRT/persistence is 4.16ms
480fps at 480Hz non-strobed LCD = minimum possible MPRT/persistence is 2.1ms
1000fps at 1000Hz non-strobed LCD = minimum possible MPRT/persistence is 1ms"

Those numbers are based on a 1080p LCD display correct? And those numbers change if the resolution changes correct? For instance the motion blur is even worse on a 4k 30Hz display versus a 1080p 30Hz display right? The basic theory I'm trying to understand is this, even though two displays can have the same frames per second the one that has higher resolution will have worse motion blur, is that right?

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Post by Chief Blur Buster » 14 Jun 2020, 18:25

John2 wrote:
14 Jun 2020, 18:16
Hello, layman here who needs things explained in layman's terms so I can understand.

"Mathematically:
120fps at 120Hz non-strobed LCD = minimum possible MPRT/persistence is 8.33ms
240fps at 240Hz non-strobed LCD = minimum possible MPRT/persistence is 4.16ms
480fps at 480Hz non-strobed LCD = minimum possible MPRT/persistence is 2.1ms
1000fps at 1000Hz non-strobed LCD = minimum possible MPRT/persistence is 1ms"

Those numbers are based on a 1080p LCD display correct? And those numbers change if the resolution changes correct? For instance the motion blur is even worse on a 4k 30Hz display versus a 1080p 30Hz display right? The basic theory I'm trying to understand is this, even though two displays can have the same frames per second the one that has higher resolution will have worse motion blur, is that right?
The millisecond numbers do not change -- that's the beauty of it.

You do get more pixels of motion blur at higher resolutions (more pixels per inch), but the milliseconds are unchanged.

1920 pixels becomes 3840 pixels over the same physical distance at double the resolution. But one screen width per second is a constant time, even with more pixels. The blur covers the same physical distance, just at increased dpi.
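A small calculation of that claim (illustrative numbers): persistence in milliseconds depends only on the refresh rate, while the same smear covers more pixels on a higher-resolution panel of the same size.

```python
# Sample-and-hold motion blur: constant in milliseconds, but the same physical
# smear spans more pixels on a higher-resolution panel of the same size.
def blur(refresh_hz, screen_width_px, screenwidths_per_sec=1.0):
    mprt_ms = 1000.0 / refresh_hz                                   # resolution-independent
    blur_px = screen_width_px * screenwidths_per_sec * (mprt_ms / 1000.0)
    return mprt_ms, blur_px

for width in (1920, 3840):     # 1080p-wide vs 4K-wide panel, same physical size
    mprt_ms, blur_px = blur(refresh_hz=120, screen_width_px=width)
    print(f"{width} px wide @ 120Hz: {mprt_ms:.2f} ms persistence, {blur_px:.0f} px of blur")
# Both lines show 8.33 ms, but the 3840-wide panel shows 32 px of blur vs 16 px:
# the same physical smear, just sampled by more pixels per inch.
```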
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

