Blur Busters Forums

Who you gonna call? The Blur Busters! For Everything Better Than 60Hz™

New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers. The masters on Blur Busters.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 19 Aug 2017, 21:19

thatoneguy wrote:EDIT: Could this technology be implemented in a typical TV?
Because this would be one of the biggest revolutions in TV history. Good lord... just imagine finally watching sports without motion blur on a 100+ inch TV.

Well, theoretically you could just use regular interpolation.

The problem is interpolation artifacts are harder to eliminate without geometry awareness. Those "Motionflow-type" artifacts become really ugly when you've got moving objects in front of moving objects. You get ugly effects at boundaries and edges.

With normal 2D interpolators like Sony Motionflow or Samsung ClearMotion or whatever -- they aren't perfect. You can't guess what went behind objects and reappears in things like http://www.testufo.com/persistence (occlusion effects), like scrolling scenery behind a picket fence -- normal 2D interpolators generally can't do that. You need 3D-geometry awareness (knowing what's behind objects) to fix that kind of Motionflow flaw / interpolation flaw, and make frame rate acceleration technologies practical.

<FUTURE VIDEO FORMAT>

For near-flawless frame rate acceleration / frame rate amplification, you would need the essential equivalent of 3D video files (played back on a GPU rather than through an H.264 codec) that are recorded in a framerateless manner (e.g. vectors, accelerations, curves, etc). That's why it's easier to use a geometry-aware interpolator (which I call frame rate amplification technology / frame rate acceleration technology) with 3D graphics. I imagine a future video file format (e.g. H.266 or H.267 or H.268) could record things "holographically" like that: video files meant to be played back on a GPU rather than as plain 2D planes. That way, you can play them on anything -- map them to any 2D plane (a television), or map them to a 3D world in VR, or even map them to a future Holodeck.

That would be the ultimate kind of video file, but it will probably be many years before this type of video file is higher quality (when mapped onto a 4K display) than today's 4K H.264 files. It requires super-incredibly-detailed 3D graphics to look just like real life, but once it's there, there's nothing stopping a switch from traditional 2D video to 3D-geometry video files. Future displays will warrant 3D-geometry video files instead of ordinary 2D video (or two 2D streams for "fad 3D", which some of us like, but is not holographic). The 3DTV craze was something of a fad, but that doesn't necessarily rule out future decades bringing true-holographic TVs, future cheap $50 Apple/Oakley-chic VR sunshades, or even a future theoretical "Holodeck" glasses-less VR environment.

There will always be a market for 2D TVs, but 3D can be mapped onto 2D -- just like we play 3D-graphics video games on a 2D monitor today. Same thing. And the same files can be played in true 3D by user choice (or not), and even for a 2D display, they are much more easily frame rate amplified without interpolation artifacts.

At the beginning, we would have depth information (like this paper) before we really have true full-geometry 3D video, since depth cameras are much more practical. However, even partial depth information can still help remove interpolation artifacts to an extent.
Depth Intra Coding for 3D Video Based on Geometric Primitives wrote:
"Abstract -- This paper presents an advanced depth intra-coding approach for 3D video coding based on the High Efficiency Video Coding (HEVC) standard and the multiview video plus depth (MVD) representation. This paper is motivated by the fact that depth signals have specific characteristics that differ from those of natural signals, i.e., camera-view video. Our approach replaces conventional intra-picture coding for the depth component, targeting a consistent and efficient support of 3D video applications that utilize depth maps or polygon meshes or both, with a high depth coding efficiency in terms of minimal artifacts in rendered views and meshes with a minimal number of triangles for a given bit rate. For this purpose, we introduce intra-picture prediction modes based on geometric primitives along with a residual coding method in the spatial domain, substituting conventional intra-prediction modes and transform coding, respectively. The results show that our solution achieves the same quality of rendered or synthesized views with about the same bit rate as MVD coding with the 3D video extension of HEVC (3D-HEVC) for high-quality depth maps and with about 8% less overall bit rate as with 3D-HEVC without related depth tools. At the same time, the combination of 3D video with 3D computer graphics content is substantially simplified, as the geometry-based depth intra signals can be represented as a surface mesh with about 85% less triangles, generated directly in the decoding process as an alternative decoder output..."
http://ieeexplore.ieee.org/stamp/stamp. ... er=7051251

And, some of us are quite fascinated by Kinect depth videos & by things like Microsoft Hyperlapse (converting video into 3D geometry to generate a different, virtual camera path).

Technically, when using vectors, motion curves, acceleration curves, etc, these types of video files can become framerateless! And can theoretically be played back at any chosen frame rate.
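As a sketch of what "framerateless" playback could mean, here is a toy motion curve sampled at arbitrary frame rates. Everything here (the `Keyframe` structure, the cubic Hermite choice) is my own illustration of the concept, not an actual video format:

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    t: float      # time in seconds
    pos: float    # object position (one axis, for simplicity)
    vel: float    # instantaneous velocity at time t

def sample(keys, t):
    """Cubic Hermite interpolation between the two keyframes bracketing t.

    Because motion is stored as a curve (sparse positions + velocities)
    rather than baked frames, it can be sampled at ANY frame rate --
    24 fps, 1000 fps, or anything in between.
    """
    for a, b in zip(keys, keys[1:]):     # keys assumed sorted by t
        if a.t <= t <= b.t:
            h = b.t - a.t
            s = (t - a.t) / h
            h00 = 2*s**3 - 3*s**2 + 1    # standard Hermite basis functions
            h10 = s**3 - 2*s**2 + s
            h01 = -2*s**3 + 3*s**2
            h11 = s**3 - s**2
            return (h00 * a.pos + h10 * h * a.vel
                    + h01 * b.pos + h11 * h * b.vel)
    raise ValueError("t outside keyframe range")

keys = [Keyframe(0.0, 0.0, 1.0), Keyframe(1.0, 1.0, 1.0)]
# The same recorded motion, sampled at 4 fps and at 1000 fps:
frames_4fps = [sample(keys, i / 4) for i in range(4)]
frames_1000fps = [sample(keys, i / 1000) for i in range(1000)]
```

The same keyframe data drives both playback rates; the chosen frame rate is purely a property of the player, not the file.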

But, right now, we are in early Wright Brothers territory, or Nipkow-disk mechanical-TV territory, in this technological progression of recording video as true 3D geometry. It's very glitchy, etc., but it will improve progressively (over years, decades, possibly centuries) until Holodeck quality.

This is only the early-canary beginnings, and it will be many years before 3D graphics and video fully merge, where we can have cameras that essentially record in framerateless 3D geometry instead of ordinary fixed-framerate video. Such framerateless video files could be played at essentially any frame rate, from, say, 1fps all the way to (infinity)fps. These kinds of video files would be very frame-rate-amplification friendly. It may be decades before this truly becomes a reality at detail levels that matter.

Today's depth video is often extremely low resolution and becomes ugly if you lean too far to the left/right. Tomorrow, it could be VR-ready (and thus also Holodeck-ready) while still looking correct on ordinary 2D screens (smartphones, tablets, televisions, monitors, laptops, etc.), just like today's 3D video games don't look bad on 2D screens if you do not want to use 3D glasses or virtual reality. A framerateless 3D-geometry video file that looks as good as reality would be a holy grail for handling _any_ kind of future display (from postage-stamp phone displays all the way up to future Holodecks).
Getting there (cheaply) will be tough, but nobody decades ago thought we'd all gain pocket 4K camera studios & pocket 4K broadcasting studios [smartphone cameras + wireless streaming]. But eventually, it is the natural path forward.

Roughly speaking, your grandkids' video format (H.267 or H.268) could be a framerateless-motion 3D-geometry file.

</FUTURE VIDEO FORMAT>
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter!
User avatar
Chief Blur Buster
Site Admin
 
Posts: 3659
Joined: 05 Dec 2013, 15:44

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 19 Aug 2017, 22:56

Glide wrote:"Soap Opera Effect" is just a disparaging term used by dinosaurs to describe smooth or high framerate video playback.
It doesn't matter to them if it's interpolated or native HFR. They are enemies of progress.

Generically, the term is also sometimes used to disparage related effects such as "motion blurry 60fps" (and/or VHS-smearing effects), as well as the interpolation artifacts -- the shimmering that occurs at the edges of moving objects in front of a moving background, etc. -- since geometry-unaware interpolation can't correctly guess what gets revealed from behind moving objects.

Glide wrote:Great post, and I agree that it's probably the best option we have for low-persistence in the future.
The only issue is that, as I understand it, these techniques work for camera tracking, but your game is still animating at whatever framerate it is natively running at.

Many games now manage to animate at exactly the same rate as the frame rate. That won't be a problem for frame rate amplifiers. In some cases, you won't have stop-motion movements anymore but instead robot-like movements, because the movement of various objects flows from waypoint to waypoint -- e.g. various parts of an enemy's body recorded at (X,Y,Z) at discrete points, with movement interpolated in between.

But this is, anyway, a (mostly) solved problem, at least in certain games. Frame rate amplifiers won't worsen enemy movements in those specific kinds of games, provided they're already animating their meshes at full gameworld frame rate in the first place, even with interpolated mesh positions. They'd just look equally robotic as before, since interim points in mesh animation are already being interpolated in many games in order to keep their animations smooth at any framerate.

Glide wrote:Unfortunately, that can look really bad. Bioshock 1 & 2 (original release) are examples of games which only update their physics/animations at 30Hz, and look really bad running at 60 FPS+.

Yeah, they do. I'm aware.

It is possible that frame rate amplifiers may work at a reduced-geometry-resolution level, using approximate models (e.g. ten or twenty collisionboxes for an enemy monster). Geometry-aware frame rate amplification can in theory do some animation work for you: interpolate the intermediate movement positions too, like a swinging arm -- especially if each object has a movement vector attached to it (e.g. how fast an arm is swinging). This could eventually be offloaded to a different technique such as reprojection, with traditional polygonal GPU rendering only needed at intervals (e.g. 120fps or 240fps) to feed a near-flawless reprojection to 1000fps or beyond.

Instead of re-rendering 50,000 triangles per character (via traditional GPU rendering) every single frame for a high-detail enemy, you might simply reproject 10, 20, maybe even 30 or 40 collisionboxes, and that "sufficient resolution of geometry" may be enough to keep the intermediate frames of a 120fps->1000fps reprojection virtually artifact-free. The question is how low-resolution or high-resolution the geometry part of the geometry-aware interpolation/reprojection needs to be. To be sufficiently artifact-free, you may only need to do a few hundred critical planes of interpolation per scene (instead of redoing a few million polygons all over again), potentially saving a lot of the transistor/silicon cost of nearly-artifactlessly converting 120fps->1000fps in future GPUs. Current reprojection/timewarping only has basic geometry awareness, and thus more artifacts, but lots of technological progress is being made to improve it.
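A toy sketch of that reprojection idea. The box data, box counts, and the roughly-8x amplification ratio are illustrative assumptions of mine, not an actual GPU implementation:

```python
def reproject_box(center, velocity, dt):
    """Shift one coarse collision box center along its motion vector.

    Moving a few dozen boxes per character is vastly cheaper than
    re-rendering ~50,000 triangles; previously rendered pixels would be
    warped along with the box they belong to.
    """
    return tuple(c + v * dt for c, v in zip(center, velocity))

# One character approximated by a handful of collisionboxes
# (hypothetical data: (center_xyz, velocity_xyz) in meters, m/s).
boxes = [
    ((0.0, 1.0, 5.0), (2.0, 0.0, 0.0)),
    ((0.1, 1.5, 5.0), (2.0, 0.5, 0.0)),
]

# Full renders arrive at 120 fps; generate 7 reprojected in-between
# steps per rendered frame to approximate ~1000 fps output.
render_dt = 1 / 120
intermediates = [
    [reproject_box(c, v, k * render_dt / 8) for c, v in boxes]
    for k in range(1, 8)
]
```

The per-frame cost here is a handful of multiply-adds per box instead of a full polygonal re-render, which is the whole point of the "sufficient resolution of geometry" argument.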

Glide wrote:But you don't get that issue with low-persistence strobing.
With strobing, animations end up looking smoother rather than making their low framerate stand out.

Only if animations are done at the same rate as refresh rate.
30fps mesh animations in Bioshock are still VERY noticeable in LightBoost/ULMB.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby open » 20 Aug 2017, 12:13

Well, the easiest way to do this and have it look good is by keeping track of layers that occlude each other and have different motions. If the camera just rotates in one spot, you only need one layer. But if you have a stationary user interface, or some robot flying in front of the other stuff, you will need to render each layer separately. So the implementation is best done in code, and game companies are mostly too lazy to put the time into something like this. If VR takes off, then there will be enough incentive that maybe we get some libraries, drivers, and even hardware-level features on silicon to aid programmers, so that it's not so work-intensive to develop. I would expect this to also be driven by the fact that graphics wow factor sells games and hardware. So giving up that wow factor to achieve higher framerates would be undesirable. But some dank new hardware and software that can QUADRUPLE YOUR FRAMERATE for less than 4x the processing power would be quite marketable. Give it time tho.
open
 
Posts: 47
Joined: 02 Jul 2017, 20:46

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 21 Aug 2017, 07:09

open wrote:But some dank new hardware and software that can QUADRUPLE YOUR FRAMERATE for less than 4x the processing power would be quite marketable. Give it time tho.

Oculus pulled off frame rate doubling (reprojection) for far less than 2x the processing power, so technologically it's already a successful proof of concept. It uses depth buffers to help with reprojection, so it even has rudimentary geometry awareness. We just have to keep progressing the tech: more frame rate, even fewer artifacts, etc.

Yes, geometry-awareness also includes knowledge of what's behind things -- and positional awareness too. Geometry and positional awareness make it possible to do lagless interpolation and to reduce interpolation artifacts further.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby drmcninja » 03 Sep 2017, 12:30

Consumers need to be aware that this benefits them. As that human reaction time website noted, everyone's reaction times suffered with the move to LCDs, and most people were enthusiastic because of the novelty and convenience of LCD panels. To this day many don't want to hear that LCD tech is itself like a handicap. I'm very surprised by this stubbornness.

If consumers are convinced to opt for the selections which aid all this, and demand increases, manufacturers will meet that demand and innovate further. Right now the best hope is for tech developed for VR to filter down to normal use (the way Lightboost for 3D vision did).

I imagine maybe we'll see some of them follow BenQ's lead and make expensive "esports" versions that are like 480Hz or higher, maybe with frame rate amplification/interpolation or other new tech. And then, if those things sell despite their marked-up prices, competitors will appear.
drmcninja
 
Posts: 79
Joined: 09 May 2017, 10:26

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 05 Sep 2017, 13:58

drmcninja wrote:Consumers need to be aware that this benefits them. As that human reaction time website noted, everyone's reaction times suffered with the move to LCDs, and most people were enthusiastic because of the novelty and convenience of LCD panels. To this day many don't want to hear that LCD tech is itself like a handicap. I'm very surprised by this stubbornness.

Indeed!

We did lose quite a bit when moving from CRT to LCD, and this is one of the raisons d'être of Blur Busters -- originally we focused on bringing back one of the benefits that CRTs used to have: fast motion with zero motion blurring!

It's only recently, with 240Hz monitors with realtime scanout (+1ms GtG delay), that we finally have less average lag and less lag jitter than many typical CRTs. The overkill refresh rate (240Hz) currently (in ways) compensates for the +1ms GtG input lag delay, with reduced lag jitter (a tighter min/max/average input lag range) than any lower frame rate on any display. But this still isn't enough; there's still often motion blur, poor blacks, bad viewing angles, etc. -- typical of a 1ms TN LCD.

Not all LCD monitors can do the "instant mode" lagless scanout (using only a scanline-buffer delay, plus a look-behind at previous refresh cycles buffered solely for near-lagless overdrive processing, etc). But such displays can trigger an LCD pixel's GtG almost immediately -- within microseconds of the pixel arriving on the video cable. Several, such as the 240Hz Zowies, do that now. Display processing delays of tens of microseconds are now being achieved on certain (not all) eSports monitors, which means more time is spent in the digital-cable codecs (DVI, HDMI, DisplayPort, etc) or on the LCD GtG than in the display motherboard processing. Certainly, display motherboards can still take several milliseconds (or tens of milliseconds) on many monitors (including most IPS and VA monitors), but I've now seen cases where the display processing has been pushed to insignificance.

Today, we even have more input lag in HDMI/DisplayPort codecs than in display processing on any of the 1ms TN "instant mode" 240Hz LCDs. Nowadays we have situations where a Direct3D Present() or OpenGL glFlush() results in human-visible photons emitting from LCD pixels just 2-4ms after the API call (at least for the pixels right below the tearline -- where the 'metaphorical' CRT electron gun would have been). Most reviewer lag-measuring methods accumulate other things, resulting in 5-10ms of measured lag (even with a CRT), when it's really just a bare amount of lag "at the scanline", as I've observed in photodiode oscilloscope tests. Sometimes more lag comes from the codec processing of digital signals than from "sufficient GtG to human visibility" of the 1ms TN GtG (GtG doesn't need to fully complete before the photons are visible). You do have to average lag out during VSYNC OFF, but if you're measuring "lag of the first scanline under the VSYNC OFF tearline", the lag differential between CRT and LCD has shrunk to only ~1ms (ish) on some of the best eSports LCDs now.

The problem is it's very hard to compare different lag-measuring methods (SMTT versus Leo Bodnar versus photodiode oscilloscope versus high-speed camera tests), and they often capture more than just the display response. But the real truth is, once the noise is filtered out, the lag differentials are getting surprisingly tiny on the best (least-lag) eSports displays, and it's possible to overcome the remaining lag disadvantage simply via sheer Hertz (most CRTs never could do 240Hz).

Sure, we sometimes need the better colors of IPS and VA -- both laggier than TN -- so we have to do a bit of a lag tradeoff if we want better blacks and colors, better viewing angles.

Even now... the ideal display needs to focus on increasing Hz (while also getting closer to instant pixel response). Increasing Hz always reduces lag and lag jitter even further.

The high Hz (as you already know from this thread) puts us on a journey towards "blurfree sample-and-hold" -- something CRTs have never done. So dare we dream beyond CRT within our lifetimes: basically the motion beauty of a CRT, yet flickerfree/strobefree/impulsefree/decayfree -- steady-state, yet with zero motion blurring. CRTs have never done both simultaneously; the more flickerfree you made a CRT, the more motion blur it had (phosphor ghosting, like a radar CRT or a Tektronix CRT). As you already realize from this thread, the Hertz insanity is necessary to successfully combine steady illumination with zero blurring.

From the consumer perspective, they need to see an amazingly colorful display: inky blacks, bright HDR-compatible whites, zero flicker, no stutter, no tearing, eye-friendly, no pixellation, crisp edges, AND zero motion blurring. Simultaneously. As perfect-looking as possible in all criteria -- a catch-all that punches through the five-sigma of everybody's vision sensitivities, whatever the user happens to be sensitive to ("wow, motion looks amazing on that!"). As long as the price is right (cheap 1000fps frame rate amplifiers, affordable three-figure-priced displays, etc), they'll buy it up without caring about the details as much as we might. Yeah, I am dreaming, but progress should never stop! ;)

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby BurzumStride » 04 Oct 2017, 08:53

I have just stumbled upon an old article, touching upon the non-linearity of FPS values:
https://www.mvps.org/directx/articles/fps_versus_frame_time.htm


If I understand the logic correctly, the jump from generating 240FPS (~4.17ms) to 1000FPS (1ms) will only require a ~3.17ms frame-time decrease. Seeing how the jump from 60 to 120 frames already required an 8.3ms decrease in frame-time, shouldn't the final ~3.17ms step towards 1000 frames be relatively easy?
Granted, I do not know much about the frame-time limitations, and how difficult it may be to get over the final 4.17ms to 1ms gap due to things like the time it takes for the CPU and GPU to communicate etc, so please correct me if I am wrong.
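The nonlinearity in that article is just the reciprocal relationship between fps and frame time; a quick illustrative sketch:

```python
def frame_time_ms(fps):
    """Frame time in milliseconds for a given frame rate."""
    return 1000.0 / fps

# Equal-looking fps jumps are very unequal frame-time jumps:
saving_60_to_120 = frame_time_ms(60) - frame_time_ms(120)     # ~8.33 ms
saving_240_to_1000 = frame_time_ms(240) - frame_time_ms(1000)  # ~3.17 ms
saving_170_to_1000 = frame_time_ms(170) - frame_time_ms(1000)  # ~4.88 ms
```

So, in raw milliseconds, 170FPS->1000FPS is a smaller frame-time gap than 60FPS->120FPS.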

At the moment, in games like Battlefield 1 it is still very difficult to hit stable 240FPS even with an overclocked 7700k:
https://youtu.be/vraQ4D3eGcw
However, I would hope that the incremental performance increases following each microarchitecture's release (with some optimisation from the game developers) would allow us to see something closer to 1000FPS (at low graphical settings) by the end of 2019's Ice Lake release. The difference between the above video's 170FPS and 1000FPS is ~4.9ms frame time -- around the same as the jump between 46FPS and 60FPS (~5ms). //corrected this calculation
Mind you, I might once again be wrong, as Battlefield 1 actually added ~1.6ms average frame time on 64 multiplayer servers at Low/1280x720 resolution compared to Battlefield 4. (FX8350 + GTX770)

I would love to be able to predict when we might expect to get 1000fps stable in most games, however most CPU gaming benchmarks I have seen, were performed in scenarios where the frame-rate was GPU-limited! Even if we found the data and assumed a linear CPU performance increase across generations, at the moment we have 0 guarantee that most developers will even remotely care about optimising the game beyond simply squishing all the computations within that 16.7ms frame-time window.

Hopefully industry-standard frame-rate amplification technologies will encourage game developers to put more emphasis on more in-depth frame-time optimisations. Coupled with continuous hardware improvements, these two could provide sufficient performance to justify a mainstream introduction of 1000hz displays.
----
You guys made a valid point about producers meeting the demands of the average consumer. Before reading this I have not even dreamed of seeing non-pixelated 1000hz 1000FPS anytime soon, but with framerate amplification technologies' improved GPU frame output, the high hertz approach could appeal to a broader crowd than BlurBusters and Input lag purists such as myself (I officially coin that term haha).
I would love to see more people get excited about the Hz approach, it definitely is a step in the right direction!
Last edited by BurzumStride on 05 Oct 2017, 08:23, edited 1 time in total.
User avatar
BurzumStride
 
Posts: 8
Joined: 30 Aug 2017, 10:21

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 04 Oct 2017, 12:01

BurzumStride wrote:You guys made a valid point about producers meeting the demands of the average consumer. Before reading this I have not even dreamed of seeing non-pixelated 1000hz 1000FPS anytime soon, but with framerate amplification technologies' improved GPU frame output, the high hertz approach could appeal to a broader crowd than BlurBusters and Input lag purists such as myself (I officially coin that term haha).

Four years ago, I did not even dare think 1000fps at 1000Hz was going to be realistic within our lifetimes.

Now I've realized it's become very realistic within 10 years, at least in high-end gaming monitor territory. Experimental 1000Hz displays are currently running in laboratories around the world, and some are actually being sold for laboratory use (ViewPixx 1440Hz DLP) -- and since successful homebrew 480Hz happened (Making of story) -- we now see 1000fps@1000Hz (lagless & strobeless ULMB!) becoming a reality within a decade.

Blur Busters coverage will become increasingly loud in the coming few years, to help compel GPU and monitor manufacturers to work towards this goal. We'll probably reach the point where we'll begin gently shaming the websites that say 240 Hz and 480 Hz is not important -- there are many of those. :D It's necessary for the holy grail of strobeless ULMB and blurless sample-and-hold, reaching closer and closer to a real-life display that has no motion blur above-and-beyond human eye limitations.

It'll factor into our upcoming monitor tests, when we publish "Minimum persistence without strobing" benchmarks in a prominent part of our upcoming new monitor-reviews format. The only way manufacturers can reduce persistence (MPRT) without strobing is via higher Hz, getting closer and closer to the "blur-free sample-and-hold" holy grail.

Mathematically:
120fps at 120Hz non-strobed LCD = minimum possible MPRT/persistence is 8.33ms
240fps at 240Hz non-strobed LCD = minimum possible MPRT/persistence is 4.17ms
480fps at 480Hz non-strobed LCD = minimum possible MPRT/persistence is 2.08ms
1000fps at 1000Hz non-strobed LCD = minimum possible MPRT/persistence is 1ms

Assuming pixel response is not the limiting factor: the more squarewave-like you can get the LCD GtG pixel response (behaving more like traditional blur-reduction strobing transitions), the more linearly the MPRT measurement scales with refresh rate.

Today, the only way manufacturers achieve 1ms MPRT (not GtG) is via strobing. When you see a manufacturer mention "MPRT" along with "1ms", that's the strobed measurement.
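The table above is simply persistence = 1000/Hz; a tiny sketch of the math:

```python
def min_mprt_ms(refresh_hz):
    """Minimum possible persistence (MPRT) on a non-strobed
    sample-and-hold display: each frame is held for a full refresh
    cycle, so persistence cannot go below 1000/Hz milliseconds
    (assuming GtG pixel response is not the limiting factor)."""
    return 1000.0 / refresh_hz

for hz in (120, 240, 480, 1000):
    print(f"{hz:>4} Hz -> minimum MPRT {min_mprt_ms(hz):.2f} ms")
```

This is why 1000fps@1000Hz is the non-strobed route to 1ms MPRT, matching what strobing achieves today.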

BurzumStride wrote:Granted, I do not know much about the frame-time limitations, and how difficult it may be to get over the final 4.17ms to 1ms gap due to things like the time it takes for the CPU and GPU to communicate etc, so please correct me if I am wrong.

My feeling is that it is probably okay to put a small amount of input lag (e.g. 2ms) into the whole pipeline if there's a huge benefit such as converting 100fps->1000fps.

Ideally, the fully rendered frames should be delivered laglessly, with the additional frames inserted laglessly in between.

To help reduce artifacts, the engine & GPU can communicate partial data about intermediate frames (not for rendering, but for better reprojection). You might have only 100 position updates per second, but the game engine could deliver 1000 low-resolution geometry positions per second (e.g. collisionbox granularity), with the GPU doing lagless geometry-aware interpolation via various tricks (multilayer Z-buffers and other depth buffers) to allow artifact-free occlusion-reveals (objects behind objects) during lagless interpolation techniques such as time warping, reprojection, etc. Lots of researchers are working on this as we speak, and probably lots more in secret laboratories at places like NVIDIA or AMD.

Eventually, we'll have detailed enough buffers to allow things like artifact-free object rotations and parallax effects (artifact-free obscure/reveal effects) during some future form of frame rate amplification technology. There are many ways to do this with less GPU horsepower than a full, complete, polygonal scene re-rendering.

That's what researchers are doing, thanks to virtual reality making it critical. Any company that does not do this risks falling behind and losing shareholder money (as the VR and eSports industries rapidly grow beyond them over the next 10 years, etc).

Single-frame-drop stutters are only mildly bothersome on a computer monitor, but can cause sensitive people to actually puke (real barf) in virtual reality, so completely stutter-free operation is essential to making virtual reality mainstream, especially as more queasy people begin wearing VR headsets.

And some people cannot wear today's VR because of 90Hz flicker (so the only way to get low persistence without flicker is insane frame rates). Over time, VR needs to become more and more "perfectly stutterfree and real", solving a lot of problems at once (motion blur, stutters, input lag, etc). Both AMD and NVIDIA are only recently beginning to realize this. Give them 10 years, and we'll have plenty of dedicated silicon directly on the GPU for achieving higher frame rates laglessly & artifactlessly, without needing a full GPU re-render for every single frame.

It will soon become a priority project at GPU/monitor companies as all the technological jigsaw pieces fall into place: it's beginning to happen. Besides, going towards insanely high Hertz is currently the only way to pass fast-motion Holodeck Turing Tests anyway -- the "Wow, I didn't know I was wearing a VR headset instead of transparent ski goggles" scenario, where people are unable to tell apart a VR headset and real life. This has to be achieved without motion blur and without strobing.

As explained earlier, going ultra-high-Hz (or using analog motion, i.e. going framerateless) is the only way to get closer to a true, real-life display -- because real life doesn't strobe, real life doesn't flicker, real life does not force extra motion blur above your vision limits, and real life does not have a frame rate. So ultra-high-Hz is the only practical technological path out of the VR uncanny valley (and it's still difficult).

So, as a result, GPU manufacturers are forced into researching frame rate amplification technologies -- with obvious spinoff applications to cheap 1000fps@1000Hz in a decade or so -- including on desktop gaming monitors (not just VR).

BurzumStride wrote:I would love to be able to predict when we might expect to get 1000fps stable in most games

Two workarounds:

(A) Use a lower frame rate for stability, and framerate-amplify that instead.
We don't necessarily need 1000fps stable for frame rate amplification -- we can just hold a lower number stable, such as 50fps, 100fps or 200fps. Once stability is achieved, frame rate amplification is stable the rest of the way: 100fps stable can be "frame rate amplified" to 1000fps stable (whether by interpolation, timewarping, reprojection, or other artifact-free geometry-aware lagless technology).

(B) Use variable refresh rate even at 1000Hz
If randomness is still a problem at these levels (even a 1ms object mis-position can still be a visible microstutter in VRR), variable refresh rate can still be used in the 1000Hz stratosphere. VRR (FreeSync, GSYNC) can eliminate the visibility of random stutter as long as human visibility times stay perfectly in sync with gametimes; under that condition, random frame visibility times are completely stutterless, by virtue of random edge-vibrations simply blending into motion blur. Random frametimes of 90fps->112fps->93fps->108fps->91fps->118fps->104fps->95fps look just like perfect VSYNC ON 100fps@100Hz, by virtue of the variable refresh rate technology.
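A quick arithmetic check of that random-frametime example (illustrative only): with VRR, each frame is displayed for exactly its own gametime, so perceived smoothness follows the average frame time.

```python
# The random per-frame rates from the example above (instantaneous fps).
rates = [90, 112, 93, 108, 91, 118, 104, 95]
frame_times_ms = [1000.0 / r for r in rates]

# Under VRR, the random variation blends into ordinary motion blur;
# the effective frame rate is set by the average frame time.
avg_ms = sum(frame_times_ms) / len(frame_times_ms)
effective_fps = 1000.0 / avg_ms
print(round(avg_ms, 2), round(effective_fps, 1))  # -> 9.96 100.4
```

So this particular random sequence really does average out to roughly 100fps@100Hz worth of sample-and-hold motion blur.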


As long as the game is very "VRR perfect", it's theoretically possible to have a VRR-compatible frame rate amplification technology. It's much harder probably, but it's not mathematically impossible to combine frame rate amplification technology and VRR simultaneously.

Theoretically, VRR may eventually become unnecessary well above 1000fps, or once we gain better foveal "ultra-high-Hz-at-eye-gaze" rendering tricks. But VRR is still useful at 480Hz and 1000Hz, as even a 1ms microstutter is still human-eye-visible, especially in VR. During 8000 pixels/second eye tracking in a 1-screen-width-per-second 8K VR headset, a single 1/1000sec framedrop turns into an 8-pixel microstutter -- small, but still visible to the human eye.
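That 8-pixel figure falls straight out of eye-speed times gap-duration (a trivial sketch of the arithmetic above; the function name is mine):

```python
# During smooth eye tracking, a dropped frame displaces the image on the
# retina by (eye tracking speed) x (duration of the gap).
def microstutter_px(eye_speed_px_per_sec: float, gap_sec: float) -> float:
    return eye_speed_px_per_sec * gap_sec

# 8000 px/sec eye tracking, one 1/1000 sec framedrop:
print(microstutter_px(8000, 1 / 1000))  # 8.0 pixel microstutter
```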

We already know VRR still remains useful in the 1000Hz league. It's low-persistence VRR without strobing.

BurzumStride wrote:If I understand the logic correctly, the jump from generating 240FPS (~4.17ms) to 1000FPS (1ms) will only require a ~3.17ms frame-time decrease. Seeing how the jump from 60 to 120 frames already required an 8.3ms decrease in frame-time, shouldn't the final ~3.17ms step towards 1000 frames be relatively easy?

It doesn't get easier. The mathematical difficulty is a completely different kind of problem.

From a "keep things artifact-free to the human eye" perspective, it's theoretically tractable if you have geometry-awareness and fully artifact-free reveal (parallax / rotate / obscure / reveal effects). Without that, human eyes can still notice things being wrong, even if briefly, as the animation below demonstrates well:

A very good animation demo of this effect is http://www.testufo.com/persistence -- it's very pixellated at 60Hz, doubles in resolution at 120Hz, quadruples in resolution at 240Hz, and octuples in resolution at 480Hz during our actual 480Hz tests.


Turn off all strobing (turn off ULMB), use ordinary LCD, and then look at the stationary UFO, and then look at the moving UFO.

That's always a full-resolution photograph being scrolled behind slits. The pixellation is caused by the sheer lowness of the refresh rate, which produces limited obscure-and-reveal opportunities -- a horizontal pixellation artifact that is still a problem even at 240Hz.

GPUs will need to be able to avoid this type of artifact -- obscure/reveal artifacts. Frame rate amplification technologies will need to be depth-aware/geometry-aware, with sufficient graphical knowledge of what's behind objects.

It doesn't just apply to vertical lines! It also applies to single side-scrolling objects in front of a background -- parallax side effects -- and artifacts around the edges of objects -- which currently occur with today's VR reprojection technologies during 45fps->90fps operation.

Or even random obscure/reveals, such as seeing through a bush while running through dense jungle. The limited Hz produces limited obscure/reveal opportunities. Running through a dense jungle and trying to identify objects behind dense bush is much easier in real life, because real life has an analog-league infinite number of continual obscure-reveal opportunities that flicker in and out. It all blends better than a limited-Hz display can manage. The only way to get closer to real life in this respect is ultra-high Hz. Frame rate amplification technologies will need to handle this sort of thing in at least an acceptable manner (far better than today's reprojectors).

Ordinary interpolators won't successfully guess the proper obscure/reveal effects in between real frames.

But, tomorrow, depth-aware / geometry-aware interpolators/reprojectors/timewarpers can potentially properly fill-in the proper obscure-reveals effects (at least "most of the time"), in order for you to avoid strange effects such as reduced resolution.

Researchers are working on ways to solve this type of problem, so that reprojection/interpolation can be done laglessly with even fewer artifacts -- by skipping full GPU renders for even 80% or 90% of frames -- and using frame rate amplification technologies instead.

BurzumStride wrote:Granted, I do not know much about the frame-time limitations, and how difficult it may be to get over the final 4.17ms to 1ms gap due to things like the time it takes for the CPU and GPU to communicate etc, so please correct me if I am wrong.

This isn't the main wall of difficulty at the moment.

There may still be enforced lag from the communications, but the key is that the real frames (e.g. 100fps) would be delivered with no lag relative to today. The difficulty is inserting extra frames laglessly. Which is possible with some kinds of re-projection technologies. The even bigger difficulty is to insert extra frames without interpolation artifacts -- by improving the depth-awareness / geometry-awareness of the interpolation technology. Then assuming you had proper motion vectors (which can continue at 1000Hz from the PC), you can still continuously reproject the last rendered frame, with proper "obscure-and-reveal" compensation.

That's the specific technological breakthrough that researchers are currently working on -- and it will allow large-ratio frame rate amplification, such as 10:1 ratios (e.g. 100fps -> 1000fps). Successful, reliable geometry-awareness / depth-awareness -- with obscure-and-reveal compensation during reprojection -- will be key to this, and probably germane to the invention of "simultaneously blurless and strobeless" screen modes (blurless sample-and-hold) without needing unobtainium GPUs.

BurzumStride wrote:At the moment, in games like Battlefield 1 it is still very difficult to hit stable 240FPS even with an overclocked 7700k

This is very true. But you can:
1. Cap to a lower frame rate instead for stability and then framerate-amplify from there.
2. Use variable refresh rate during 1000Hz, and use a variable-framerate-aware reprojection algorithm.
So you can do either (1) or (2) or both.

Problem solved, assuming minor modifications to the game engine to give motion-vector hinting to the frame rate amplification technology (GPU silicon).

Developers probably still want to send roughly 1000 telemetry updates per second to the GPU (basically 6dof telemetry, probably less than 10 or 100 kilobytes per interpolated frame) -- to help the geometry-aware reprojector avoid mis-guessing motion vectors, avoiding the back-and-forth jumping effects sometimes seen in online gameplay during erratic latencies. The information that helps frame rate amplification technologies might simply be depth-buffer data or low-resolution hitbox geometry, and the frame rate amplification technology (reprojector) does the rest, based on data from the last fully-rendered frame. And the last fully rendered frame might need, say, 10% more GPU rendering (not a biggie) to allow extra texture caching to accommodate obscure-and-reveal compensation in succeeding reprojected frames. A small GPU cost, to allow good frame rate amplification ratios (e.g. 5:1 or 10:1).

To accommodate errors in stability, you can timecode everything with microsecond-accurate gametimes, then simply make sure refresh visibility times stay in sync with gametimes (by using 1000Hz VRR-compatible frame rate amplification technology), and make sure all the reprojected frames are in perfect gametime-sync with predicted eye-tracking positions. Then errors/fluctuations in communications, rendertimes, CPU processing, etc, will be rendered invisible (or mostly invisible), as long as object positions stay within microseconds of refresh-cycle visibility times -- even if everything is lag-shifted by a few hundred microseconds to de-jitter "engine-to-monitor" pipeline-flow erraticness. (Many GPU drivers already do that today to help frame pacing, and the buffering is often increased much further when running in SLI mode.) There are many technological workarounds to compensate for erraticness, none insurmountable -- what's important is that gametime stays in really good sync with refresh cycle visibility times. If gametime is erratic, then allow refresh cycles to become synchronously erratic to match, to avoid stutter (in the common traditional "G-SYNC reduces stutters" fashion) -- it still works in the kilohertz refresh leagues, and it still keeps the blurless sample-and-hold (strobeless ULMB/LightBoost) holy grail. They aren't mutually exclusive.

It can also be a completely different algorithm than what I'm dreaming of.

There are actually huge numbers of ways to potentially pull this off (and we don't know who plans to do what approach) -- but I know this for sure: Researchers are currently working on this problem, indirectly thanks to billions of dollars being spent on virtual reality research & development -- and technology has finally caught up to making this feasible in the not-too-distant future.

_____

Oculus' timewarping (45fps->90fps) is a very good step towards true lagless geometry-aware frame rate amplification technology. Today, it's a big breakthrough.

But ultimately it'll look like just a Wright Brothers airplane. Tomorrow's algorithms will be far more advanced, allowing much bigger ratios (10:1 frame rate amplification) with virtually artifact-free obscure-and-reveal capability.

All of this will be very, very good towards making 1000fps @ 1000Hz practical by the mid-2020s, at Ultra-league detail levels, on a three-figure-priced ordinary GPU.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter!

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Postby BurzumStride » 05 Oct 2017, 10:25

Thank you for the in-depth explanations! It took me a while to read over this slowly, but am I glad that I cancelled the order on a 240hz Zowie for now. Looks like more goodies are coming soon!

Although I feel like I should clarify something:
By "frame time" I did not mean image persistence. I meant the actual number of milliseconds it takes the computer to generate a frame.
If my understanding is correct, this is the determining factor in how many frames per second can be generated, e.g. the PC taking ~16.7ms to render each frame means that ~60 frames can be sent by the GPU each second.

Perhaps I should have asked this question in a new thread altogether to avoid confusion, but I felt like this information could be tied into the debate of "When can we get 1000hz Displays".

My point was that with low graphical settings, (which still manage to look nice in games like battlefield 1) we are close to hitting 1000 frames already! In order to increase our framerate from 500 to 1000, all we need to do is reduce the frame time (rendering time) by 1ms. The difference of 500 frames seems like a massive performance gap, but it's really not!
It's the equivalent of jumping from 56 to 60 frames per second. The article referenced in my previous post did an excellent job of explaining this: https://www.mvps.org/directx/articles/fps_versus_frame_time.htm
Following this logic, in a couple of CPU generations we could be looking at 1000 frames stable in AAA games at low settings.
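The frametime arithmetic behind this logic can be sketched in a few lines (my own illustration of the linked article's point, with a made-up helper name):

```python
# Frametime is the reciprocal of framerate, so equal-looking FPS jumps
# cost wildly unequal amounts of rendering-time reduction.
def frametime_ms(fps: float) -> float:
    return 1000.0 / fps

print(frametime_ms(500) - frametime_ms(1000))          # 1.0 ms: 500 -> 1000fps
print(round(frametime_ms(56) - frametime_ms(60), 2))   # 1.19 ms: 56 -> 60fps
print(round(frametime_ms(60) - frametime_ms(120), 2))  # 8.33 ms: 60 -> 120fps
```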

BurzumStride wrote:At the moment, in games like Battlefield 1 it is still very difficult to hit stable 240FPS even with an overclocked 7700k:
https://youtu.be/vraQ4D3eGcw
However, I would hope that the incremental performance increases following each microarchitecture's release, (with some optimisation from the game developers) would allow us to see something closer to 1000FPS (at low graphical settings) by the end of 2019's Ice Lake release. The difference between the above video's 170FPS and 1000FPS is ~4.9ms frame time - around the same as the jump between 46FPS and 60FPS (~5ms). //corrected this calculation

This is why my initial question was whether true 1000FPS would potentially be more difficult to achieve due to increasingly difficult rendering times or technical limitations like the base time it takes for the CPU and GPU to communicate.

As it turns out, in some better optimised and less-demanding games it is ALREADY possible to hit not only 1000FPS, but well over 2000FPS:
https://youtu.be/VaCFtnMJy-k?t=258 - i7 7700k ~2000FPS, ~0.5ms frame time
This result suggests that no such hurdles exist on the CPU-heavy side. Slightly faster CPUs and a little more optimisation from developers is all it will take to reach 1000FPS in reasonably-running AAA titles at low settings.
Just to backup this concept, and show that the high framerate is not JUST a result of the game-specific optimisation:
https://youtu.be/RtF50zlSRIc?t=406 - i7 6700k - ~1000FPS, in what I assume to be a CPU-limited scenario. If every frame requires an equal rendering time, then there should not be a ~1000FPS disparity between what are otherwise such similarly performing processors.

Well, that's all fun and games for competitive weirdos such as myself who don't mind sacrificing visual fidelity for the sake of performance, but what about most people?
This had me wondering how close we are to hitting 1000FPS at ULTRA presets. Here's what I found:
https://www.youtube.com/watch?v=Q0P8CxpI98w - Battlefield 1 Multiplayer ~140 frames, ~7.1ms frame time at Ultra with i7 6700k and 1080ti OC
This suggests that the performance boost needed to go up from those 140FPS to 1000FPS (~6.1ms frametime difference) is about the equivalent of jumping from 44FPS to 60FPS (~6ms frametime difference).

Now, this is true only assuming that GPUs do not have technical limitations which would make pushing frame time down to the 1ms range difficult. If no such limitations exist (like in the CPU scenario), we could be looking at TRUE 1000FPS at ultra settings, around or even before the introduction of mainstream Frame Rate Amplification technologies.

Higher framerates offer diminishing returns in terms of things like image persistence and motion clarity. The good news is that the performance required to draw each consecutive frame, scales inversely with the framerate (each consecutive frame takes less time to render).
----
This is of course, as far as I know, and assuming I understood the information correctly. I think that the minecraft example proves that CPU-dependent games have no obvious bottlenecks until we move down to at least 0.5ms frame time at 2000FPS. I don't know much about the intricacies of GPU rendering and whether the same is true when trying to simulate fancy effects such as lighting. Also I do apologise if this information was common knowledge, it eluded me for a long time and I did not find many websites detailing this.

Regardless of when we will be able to hit 1000FPS stable, frame amplification technologies are amazing news. Even if no weird bottlenecks exist on the GPU-side of frame rendering, we currently are still only talking about true 1000FPS 1080p. The idea of true 3D videos and their motion artifact-eliminating effect should be especially useful, since television and movies are usually the last ones to catch up with new trends. Convincing certain people that native high refresh rate content is better might be difficult, so a toggle option on the Frame Rate Amplification technology may serve as a compromise, keeping not only us, but also the:

Glide wrote:enemies of progress.


...relatively satisfied

Re: New term: "Frame Rate Amplification" (1000fps in cheap G

Postby Chief Blur Buster » 05 Oct 2017, 12:02

BurzumStride wrote:Thank you for the in-depth explanations! It took me a while to read over this slowly, but am I glad that I cancelled the order on a 240hz Zowie for now. Looks like more goodies are coming soon!

Personally, I would not wait.

Mature 480Hz won't happen till well into the 2020s. Homebrew 240Hz occurred in 2013, and it took until 2016 for monitor manufacturers to release mainstream 240Hz. The first early 480Hz mainstream-manufacturer monitors likely won't happen till near 2020, and it will then take a couple more years to become a mature market.

We generally recommend that readers get 240Hz today and enjoy it for many years, even while we wait for true-480Hz and true-1000Hz monitors to come out later.

Also -- if your eyes can stand 120Hz+ strobing -- you can get similar motion blur to 1000fps@1000Hz via ULMB -- you're simply using strobing to achieve motion blur elimination, instead of achieving low persistence via ultra-high-framerates-at-ultra-high-Hz. This is something that can still be enjoyed immensely today.

BurzumStride wrote:By "frame time" I did not mean image persistence. I meant the actual number of milliseconds it takes the computer to generate a frame.

Frame time is unavoidably related to persistence on sample-and-hold displays (non-strobed LCDs).

This is because the previous frame is displayed on-screen for the frametime it takes to render the next frame.

What this means is 200fps@200Hz has half the motion blur of 100fps@100Hz. This effect is quite easy to see on a variable refresh rate display; the higher the framerate (the shorter frame times), the less motion blur there appears to be.

If you stare at http://www.testufo.com on a 240Hz display, you'll quickly notice (assuming LCD GtG isn't the bottleneck) that:
240fps@240Hz = baseline
120fps@120Hz = 2x motion blur
60fps@60Hz = 4x motion blur
30fps@30Hz = 8x motion blur (and it begins to visibly stutter a bit, due to the low frequency of the stutter-vibration).

Or if you test http://www.testufo.com on a 120Hz display, then 120fps@120Hz is the baseline, with 60fps@60Hz being 2x more motion blur and 30fps@30Hz being 4x more motion blur.

Which means, unavoidably, "persistence = frametime", because the frametime of the next frame dictates how long the previous frame stays displayed on-screen (very especially true for VRR displays, but still relevant to fixed-Hz in granular refresh-cycle-time increments, as http://www.testufo.com demonstrates). In this case, 1ms of frame visibility time translates to 1 pixel of motion blurring during 1000 pixels/second motion.
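That relationship is simple enough to put in a few lines (a sketch of the rule of thumb above, not TestUFO's actual code; the numbers assume a 960 pixels/second pan and no GtG bottleneck):

```python
# Sample-and-hold rule of thumb: motion blur width (in pixels)
# = motion speed x frame visibility time = speed / fps.
def blur_px(speed_px_per_sec: float, fps: float) -> float:
    return speed_px_per_sec / fps

speed = 960  # pixels/second pan
for fps in (240, 120, 60, 30):
    print(f"{fps}fps@{fps}Hz -> {blur_px(speed, fps):.0f} px of motion blur")
```

Note how each halving of the framerate doubles the blur width, matching the 1x/2x/4x/8x ladder above.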


60fps@60Hz and 60fps@120Hz and 60fps@240Hz all look the same in TestUFO -- individual frame visibility time dictates the amount of motion blurring on sample-and-hold displays such as non-strobed LCDs and OLEDs.

The journey towards blurless sample-and-hold (strobeless ULMB) will require continuing the Hertz race. But at least we can enjoy strobing during this journey (as long as our eyes don't mind 120Hz flicker).
