Blur Busters Forums


New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Advanced display talk, display hackers, advanced game programmers, scientists, display researchers, display manufacturers, vision researchers. The masters on Blur Busters.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 19 Aug 2017, 21:19

thatoneguy wrote:EDIT: Could this technology be implemented in a typical TV? Because this would be one of the biggest revolutions in TV history. Good lord... just imagine finally watching sports without motion blur on a 100+ inch TV.

Well, theoretically you could just use regular interpolation.

The problem is that interpolation artifacts are hard to eliminate without geometry awareness. Those "Motionflow-type" artifacts become really ugly when you've got moving objects in front of other moving objects, with visible glitches at boundaries and edges.

With normal 2D interpolators like Sony Motionflow or Samsung ClearMotion or whatever -- they aren't perfect. They can't guess what goes behind objects and reappears, as in http://www.testufo.com/persistence (occlusion effects), like scrolling scenery behind a picket fence -- normal 2D interpolators generally can't handle that. You need 3D-geometry awareness (knowledge of what's behind objects) to fix that kind of Motionflow flaw / interpolation flaw, and to make frame rate amplification technologies practical.
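
To visualize that picket-fence failure, here's a toy sketch in Python (all slat sizes and scroll speeds invented for illustration): a background texel can be hidden behind one slat in the first source frame and behind the next slat in the second source frame, yet be visible in the in-between frame -- so no 2D interpolator has its pixel data anywhere:

Code: Select all
# Toy model of the picket-fence occlusion problem: a background scrolling
# behind static fence slats. All sizes and speeds are invented.
SLATS = [(6, 8), (12, 14)]      # static foreground slats, [start, end) in px

def occluded(x):
    return any(a <= x < b for a, b in SLATS)

SCROLL = 6                      # background scroll speed, pixels per frame
for texel in range(24):         # background texel index
    x0 = texel                  # screen position in source frame A
    xm = texel + SCROLL // 2    # screen position in the interpolated frame
    x1 = texel + SCROLL         # screen position in source frame B
    if occluded(x0) and occluded(x1) and not occluded(xm):
        # No 2D interpolator has pixel data for this texel; only a
        # geometry-aware renderer knows the background layer's content.
        print(f"texel {texel}: hidden in BOTH source frames, visible in between")
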

<FUTURE VIDEO FORMAT>

For near-flawless frame rate acceleration / frame rate amplification, you would need the essential equivalent of 3D video files (played back on a GPU, rather than through an H.264 codec) that are recorded in a framerateless manner (e.g. vectors, accelerations, curves, etc). That's why it's easier to use a geometry-aware interpolator (which I call frame rate amplification technology / frame rate acceleration technology) with 3D graphics. I imagine a future video file format (e.g. H.266 or H.267 or H.268) could record things "holographically" like that -- video files meant to be played back on a GPU rather than as plain 2D planes. That way, you can play it on anything: map it to any 2D plane (a television), map it to a 3D world in VR, or even map it to a future Holodeck.

That would be the ultimate kind of video file, but it will probably be many years before this type of video file is higher quality (when mapped onto a 4K display) than today's 4K H.264 files. It requires super-incredibly-detailed 3D graphics to look just like real life, but once it's there, there's nothing stopping us from switching from traditional 2D video to 3D-geometry video files.

Future displays will warrant 3D-geometry video files instead of ordinary 2D video (or two 2D streams for "fad 3D", which some of us like, but which is not holographic). The 3DTV craze was kind of a fad, but that doesn't rule out future decades bringing true-holographic TVs, cheap $50 Apple/Oakley-chic VR sunshades, or even a theoretical glasses-less "Holodeck" VR environment. There will always be a market for 2D TVs, but 3D can be mapped onto 2D -- just like we play 3D-graphics video games on 2D monitors today. The same files could be played in true 3D by user choice (or not), and even on a 2D display they are much more easily frame rate amplified without interpolation artifacts.

At the beginning, we would have depth information (like in this paper) before we have true full-geometry 3D video, since depth cameras are much more practical. However, even partial depth information can still help with removing interpolation artifacts to an extent.
Depth Intra Coding for 3D Video Based on Geometric Primitives wrote:"Abstract— This paper presents an advanced depth intra-coding approach for 3D video coding based on the High Efficiency Video Coding (HEVC) standard and the multiview video plus depth (MVD) representation. This paper is motivated by the fact that depth signals have specific characteristics that differ from those of natural signals, i.e., camera-view video. Our approach replaces conventional intra-picture coding for the depth component, targeting a consistent and efficient support of 3D video applications that utilize depth maps or polygon meshes or both, with a high depth coding efficiency in terms of minimal artifacts in rendered views and meshes with a minimal number of triangles for a given bit rate. For this purpose, we introduce intra-picture prediction modes based on geometric primitives along with a residual coding method in the spatial domain, substituting conventional intra-prediction modes and transform coding, respectively. The results show that our solution achieves the same quality of rendered or synthesized views with about the same bit rate as MVD coding with the 3D video extension of HEVC (3D-HEVC) for high-quality depth maps and with about 8% less overall bit rate as with 3D-HEVC without related depth tools. At the same time, the combination of 3D video with 3D computer graphics content is substantially simplified, as the geometry-based depth intra signals can be represented as a surface mesh with about 85% less triangles, generated directly in the decoding process as an alternative decoder output..."
http://ieeexplore.ieee.org/stamp/stamp. ... er=7051251

And, some of us are quite fascinated by Kinect depth videos & by things like Microsoft Hyperlapse (converting video into 3D geometry to generate a different, virtual camera path).

Technically, when using vectors, motion curves, acceleration curves, etc., these types of video files become framerateless! They can theoretically be played back at any chosen frame rate.
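
A minimal sketch of what "framerateless" means in practice (the motion curve here is a made-up example, not a real codec): store motion as a continuous function of time instead of as discrete frames, then sample it at whatever frame rate the display wants:

Code: Select all
# "Framerateless" motion: store a parametric motion curve, not frames,
# and sample it at any playback rate. The curve itself is invented.
def position(t):
    """Object position as a continuous function of time (seconds)."""
    x0, v0, a = 0.0, 2.0, -0.4          # start, velocity, acceleration
    return x0 + v0 * t + 0.5 * a * t * t

for fps in (24, 120, 1000):             # any frame rate, same file
    frames = [position(n / fps) for n in range(fps)]   # one second of motion
    print(f"{fps:>4} fps -> {len(frames)} frames, final x = {frames[-1]:.4f}")
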

But right now, we are in early Wright Brothers territory, or Nipkow-disk mechanical-TV territory, in this technological progression of recording video as true 3D geometry. It's very glitchy, but it will improve progressively (over years, decades, possibly centuries) until Holodeck quality.

This is only the early-canary beginning, and it will be many years before 3D graphics and video fully merge, with cameras that essentially record framerateless 3D geometry instead of ordinary fixed-framerate video. Such framerateless video files could be played at essentially any frame rate -- say, from 1fps all the way to (infinity)fps. These kinds of video files would be very frame-rate-amplification friendly. It may be decades before this truly becomes a reality at detail levels that matter.

Today's depth video is often extremely low resolution and becomes ugly if you lean too far left or right. Tomorrow, it could be VR-ready (and thus also Holodeck-ready) while still looking correct on ordinary 2D screens (smartphones, tablets, televisions, monitors, laptops, etc) -- just like today's 3D video games don't look bad on 2D screens when you don't want to use 3D glasses or virtual reality. A framerateless 3D-geometry video file that looks as good as reality would be a holy grail for handling _any_ kind of future display (from postage-stamp phone displays all the way up to future Holodecks).
Getting there (cheaply) will be tough, but nobody decades ago thought we'd all gain pocket 4K camera studios & pocket 4K broadcasting studios [smartphone cameras + wireless streaming]. Eventually, it is the natural path forward.

Roughly speaking, your grandkids' video format (H.267 or H.268) could be a framerateless-motion 3D-geometry file.

</FUTURE VIDEO FORMAT>

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 19 Aug 2017, 22:56

Glide wrote:"Soap Opera Effect" is just a disparaging term used by dinosaurs to describe smooth or high framerate video playback.
It doesn't matter to them if it's interpolated or native HFR. They are enemies of progress.

Generically, the term is sometimes also used to disparage related effects such as "motion-blurry 60fps" (and/or VHS-smearing effects), as well as the interpolation artifacts -- the shimmering at the edges of moving objects in front of a moving background, etc. -- since geometry-unaware video interpolation can't correctly guess what gets revealed from behind moving objects.

Glide wrote:Great post, and I agree that it's probably the best option we have for low-persistence in the future.
The only issue is that, as I understand it, these techniques work for camera tracking, but your game is still animating at whatever framerate it is natively running at.

Many games now animate at exactly the same rate as the frame rate. That won't be a problem for frame rate amplifiers. In some cases, you won't have stop-motion movements anymore, but instead robot-like movements, because the movement of each object flows from waypoint to waypoint -- e.g. various parts of an enemy's body recorded at (X,Y,Z) at discrete points, with the movement in between interpolated.

Either way, this is a (mostly) solved problem, at least in certain games. Frame rate amplifiers won't worsen enemy movements in those specific kinds of games, provided they're already animating their meshes at full gameworld frame rate in the first place, even with interpolated mesh positions. They'd just look equally robotic as before, since interim points in mesh animation are already being interpolated in many games in order to keep animations smooth at any framerate.
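
For illustration, here's a rough sketch of that waypoint interpolation (the waypoint data is invented; this isn't any particular engine's API):

Code: Select all
# Waypoint mesh animation: joint positions recorded at discrete times,
# interpolated in between so the animation stays smooth at ANY frame rate.
import bisect

TIMES = [0.0, 0.5, 1.0]                                      # seconds
POSES = [(0.0, 1.0, 0.0), (0.3, 1.2, 0.1), (0.6, 1.0, 0.2)]  # (X, Y, Z)

def sample(t):
    """Linearly interpolate the recorded waypoints at an arbitrary time t."""
    i = min(max(bisect.bisect_right(TIMES, t), 1), len(TIMES) - 1)
    t0, t1 = TIMES[i - 1], TIMES[i]
    w = (t - t0) / (t1 - t0)
    return tuple(a + (b - a) * w for a, b in zip(POSES[i - 1], POSES[i]))

print(sample(0.25))   # halfway between the first two waypoints
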

Glide wrote:Unfortunately, that can look really bad. Bioshock 1 & 2 (original release) are examples of games which only update their physics/animations at 30Hz, and look really bad running at 60 FPS+.

Yeah, they do. I'm aware.

It is possible that frame rate amplifiers may work at an approximate-geometry level, using simplified models (e.g. ten or twenty collision boxes for an enemy monster). Geometry-aware frame rate amplification can in theory do some animation work for you: interpolate the intermediate movement positions too, like a swinging arm -- especially if each object has a movement vector attached to it (e.g. how fast an arm is swinging). This could eventually be offloaded to a different technique such as reprojection, with traditional polygonal GPU rendering only needed at intervals (e.g. 120fps or 240fps), followed by near-flawless reprojection to 1000fps or beyond.

Instead of re-rendering 50,000 triangles per character (via traditional GPU rendering) every single frame for a high-detail enemy, you might simply reproject 10, 20, maybe even 30 or 40 collision boxes, and that "sufficient resolution of geometry" may be enough to make the intermediate frames of a 120fps->1000fps reprojection virtually artifact-free. The question is how low-resolution or high-resolution the geometry part of the geometry-aware interpolation/reprojection needs to be. To be sufficiently artifact-free, you may only need to interpolate a few hundred critical planes per scene (instead of redoing a few million polygons all over again), potentially saving a lot of the transistor/silicon cost of nearly-artifactlessly converting 120fps->1000fps in future GPUs. Current reprojection/timewarping only has basic geometry awareness, and thus more artifacts, but lots of technological progress is being made to make it look better.
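
A hedged sketch of that idea (the box list, rates, and numbers are all invented -- this shows the concept, not any shipping reprojection API):

Code: Select all
# Coarse-geometry reprojection: between full GPU renders (120fps), shift a
# handful of per-object boxes by their motion vectors to synthesize the
# intermediate frames (1000fps). All data here is invented.
FULL_RENDER_FPS = 120
OUTPUT_FPS = 1000

boxes = [  # screen-space box (x, y, w, h) plus per-object velocity in px/s
    {"rect": (100.0, 200.0, 40.0, 80.0), "vel": (480.0, 0.0)},
    {"rect": (300.0, 150.0, 20.0, 20.0), "vel": (-120.0, 60.0)},
]

def reproject(boxes, dt):
    """Shift each box by vel*dt -- far cheaper than re-rendering polygons."""
    return [(x + vx * dt, y + vy * dt, w, h)
            for (x, y, w, h), (vx, vy) in
            ((b["rect"], b["vel"]) for b in boxes)]

steps = OUTPUT_FPS // FULL_RENDER_FPS      # ~8 output frames per full render
for n in range(1, steps):
    intermediate_frame = reproject(boxes, n / OUTPUT_FPS)
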

Glide wrote:But you don't get that issue with low-persistence strobing.
With strobing, animations end up looking smoother rather than making their low framerate stand out.

Only if animations are done at the same rate as the refresh rate.
30fps mesh animations in Bioshock are still VERY noticeable in LightBoost/ULMB.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby open » 20 Aug 2017, 12:13

Well, the easiest way to do this and have it look good is by keeping track of layers that occlude each other and have different motions. If the camera just rotates in one spot, you only need one layer. But if you have a stationary user interface, or some robot flying in front of the other stuff, you will need to render each layer separately. So the implementation is best done in code, and game companies are mostly too lazy to put time into something like this. If VR takes off, there will be enough incentive that maybe we get some libraries, drivers, and even hardware-level features on silicon to aid programmers, so it's not so work-intensive to develop. I would expect this to also be driven by the fact that graphics wow-factor sells games and hardware, so giving up that wow-factor to achieve higher framerates would be undesirable. But some dank new hardware and software that can QUADRUPLE YOUR FRAMERATE for less than 4x the processing power would be quite marketable. Give it time tho.
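
Rough sketch of what I mean (Python, made-up layers and motions): render each differently-moving layer separately with an alpha channel, shift each by its own motion, then composite back-to-front:

Code: Select all
# Per-layer reprojection sketch (all shapes/motions invented): each layer
# that moves differently gets shifted by its own motion vector, then the
# layers are composited back-to-front with "over" alpha blending.
import numpy as np

H, W = 4, 16
world = np.full((H, W, 4), 0.5); world[..., 3] = 1.0         # opaque background
robot = np.zeros((H, W, 4)); robot[1:3, 2:5] = (1, 0, 0, 1)  # flying sprite
hud   = np.zeros((H, W, 4)); hud[0, 0:4]   = (0, 1, 0, 1)    # static UI strip

def composite(layers):
    """Back-to-front 'over' compositing of (layer, x_shift) pairs."""
    out = np.zeros((H, W, 4))
    for rgba, dx in layers:
        moved = np.roll(rgba, dx, axis=1)  # toy shift (wraps at the edge; a
        a = moved[..., 3:4]                # real engine fills disocclusions)
        out = moved * a + out * (1 - a)
    return out

# Intermediate frame: world scrolls 2px, robot flies 1px, HUD stays put.
frame = composite([(world, 2), (robot, 1), (hud, 0)])
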

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 21 Aug 2017, 07:09

open wrote:But some dank new hardware and software that can QUADRUPLE YOUR FRAMERATE for less than 4x the processing power would be quite marketable. Give it time tho.

Oculus pulled off frame rate doubling (reprojection) for far less than 2x the processing power, so the technology is already proven in concept. It uses depth buffers to help with reprojection, so it even has rudimentary geometry awareness. We just have to keep progressing the tech: more frame rate, even fewer artifacts, etc.

Yes, geometry awareness also includes knowledge of what's behind things -- and positional awareness too. Geometry and positional awareness make it possible to do lagless interpolation and to reduce interpolation artifacts further.
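
To illustrate depth-aided reprojection (made-up pinhole camera numbers, not Oculus's actual implementation): unproject each pixel using its depth, move the camera, and project back. Near pixels shift more than far pixels -- exactly the geometry awareness a flat 2D interpolator lacks:

Code: Select all
# Depth-buffer reprojection sketch (hypothetical camera intrinsics).
import numpy as np

F, CX, CY = 500.0, 320.0, 240.0     # pinhole focal length + center (pixels)

def reproject_pixel(u, v, depth, cam_move):
    """Warp one rendered pixel to a translated camera position."""
    # Unproject to a 3D point in the old camera's space...
    p = np.array([(u - CX) * depth / F, (v - CY) * depth / F, depth])
    # ...express it in the new camera's space (pure translation here)...
    p = p - np.asarray(cam_move)
    # ...and project back onto the screen.
    return (F * p[0] / p[2] + CX, F * p[1] / p[2] + CY)

# Same pixel, same camera move: the near pixel shifts ~25px, the far ~2.5px.
print(reproject_pixel(400, 240, depth=1.0,  cam_move=(0.05, 0.0, 0.0)))
print(reproject_pixel(400, 240, depth=10.0, cam_move=(0.05, 0.0, 0.0)))
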

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby drmcninja » 03 Sep 2017, 12:30

Consumers need to be aware that this benefits them. As that human reaction time website noted, everyone's reaction times suffered with the move to LCDs, yet most people were enthusiastic because of the novelty and convenience of LCD panels. To this day, many don't want to hear that LCD tech is itself a handicap. I'm very surprised by this stubbornness.

If consumers are convinced to opt for the products that aid all this, and demand increases, manufacturers will meet that demand and innovate further. Right now the best hope is for tech developed for VR to filter down to normal use (the way LightBoost for 3D Vision did).

I imagine maybe we'll see some of them follow BenQ's lead and make expensive "esports" versions that are 480Hz or higher, maybe with frame rate amplification/interpolation or other new tech. And then, if those things sell despite their marked-up prices, competitors will appear.

Re: New term: "Frame Rate Amplification" (1000fps in cheap GPUs)

Postby Chief Blur Buster » 05 Sep 2017, 13:58

drmcninja wrote:Consumers need to be aware that this benefits them. As that human reaction time website noted, everyone's reaction times suffered with the move to LCDs, yet most people were enthusiastic because of the novelty and convenience of LCD panels. To this day, many don't want to hear that LCD tech is itself a handicap. I'm very surprised by this stubbornness.

Indeed!

We did lose quite a bit when moving from CRT to LCD, and this is part of the raison d'être of Blur Busters -- originally we focused on bringing back one of the benefits that CRTs used to have: fast motion with zero motion blurring!

Only recently, with 240Hz monitors doing realtime scanout (+1ms GtG delay), do we finally have lower average lag and less lag jitter than many typical CRTs. The overkill refresh rate (240Hz) currently compensates (in ways) for the +1ms GtG input-lag delay, with reduced lag jitter (a tighter min/max/average input-lag range) than any lower frame rate on any display. But this still isn't enough; there's still often motion blur, poor blacks, and bad viewing angles -- typical of a 1ms TN LCD.

Not all LCD monitors can do the "instant mode" lagless scanout (using only a scanline-buffer delay, plus a look-behind at previous refresh cycles buffered only for near-lagless overdrive processing, etc). But such displays can essentially trigger an LCD pixel's GtG almost immediately off the video cable -- within microseconds of the pixel arriving on the cable. Several monitors, such as the 240Hz Zowies, do that now. Display processing delays of tens of microseconds are now being achieved on certain (not all) eSports monitors, which means more time is spent in the digital-cable codecs (DVI, HDMI, DisplayPort, etc) or on the LCD GtG than in the display motherboard's processing. Certainly, display motherboards can still take several milliseconds (or tens of milliseconds) on many monitors (including most IPS and VA monitors), but I've now seen cases where the display processing has been pushed to insignificance.

Today, we even have more input lag in the HDMI/DisplayPort codecs than in display processing on any of the 1ms TN "instant mode" 240Hz LCDs. Nowadays we have situations where a Direct3D Present() or OpenGL glFlush() results in human-visible photons emitting from LCD pixels just 2-4ms after the API call (at least for the pixels right below the tearline -- where the 'metaphorical' CRT electron gun would have been). Most reviewer lag-measuring methods accumulate other things, resulting in 5-10ms of measured lag (even with a CRT), when the bare amount of lag "at the scanline" is what I've observed under photodiode oscilloscope tests. Sometimes more lag comes from encoding/decoding the digital signal than from "sufficient GtG for human visibility" on a 1ms TN panel (GtG doesn't need to fully complete before the photons are visible). You do have to average lag out during VSYNC OFF, but if you're measuring the lag of the first scanline under the VSYNC OFF tearline, the lag differential between CRT and LCD has shrunk to only ~1ms(ish) on some of the best eSports LCDs now.
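
Rough back-of-envelope numbers behind that claim (idealized; it ignores the cable/codec overhead, which as noted above can now dominate):

Code: Select all
# Back-of-envelope scanout-lag arithmetic for a 240Hz "instant mode" panel.
REFRESH_HZ = 240
sweep_ms = 1000.0 / REFRESH_HZ    # full top-to-bottom scanout: ~4.17 ms
gtg_ms = 1.0                      # fast-TN GtG (photons visible even sooner)

# With VSYNC OFF, the scanline right below a tearline starts changing almost
# immediately; a mid-screen pixel waits roughly half a sweep on top of that.
print(f"full sweep:        {sweep_ms:.2f} ms")
print(f"below tearline:   ~{gtg_ms:.2f} ms (GtG only)")
print(f"mid-screen pixel: ~{sweep_ms / 2 + gtg_ms:.2f} ms")
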

The problem is that it's very hard to compare different lag-measuring methods (SMTT versus Leo Bodnar versus photodiode oscilloscope versus high-speed camera tests), and they often capture more than just the display's response. But the real truth is that once the noise is filtered out, the lag differentials are getting surprisingly tiny on the best (least-lag) eSports displays, and it's possible to overcome the remaining lag disadvantage simply via sheer Hertz (most CRTs never could do 240Hz).

Sure, we sometimes need the better colors of IPS and VA -- both laggier than TN -- so we have to make a bit of a lag tradeoff if we want better blacks, colors, and viewing angles.

Even now, the ideal display needs to focus on increasing Hz (while also getting closer to instant pixel response). Increasing Hz reduces lag and lag jitter even further.

The high Hz (as you now know from this thread) puts us on a journey towards "blurfree sample-and-hold" -- something CRTs have never done. So dare we dream beyond CRT within our lifetimes: basically the motion beauty of a CRT, yet flickerfree/strobefree/impulsefree/decayfree -- steady-state, yet with zero motion blurring. CRTs never did both simultaneously; the more flickerfree you made a CRT, the more motion blur it had (phosphor ghosting, like a radar CRT or a Tektronix CRT). As you already realize from this thread, the Hertz insanity is necessary to combine steady illumination with zero blurring.

From the consumer perspective, they need to see an amazingly colorful display: inky blacks, bright HDR-compatible whites, zero flicker, no stutter, no tearing, eye-friendly, no pixellation, crisp edges, AND zero motion blurring -- simultaneously. As perfect-looking as possible in all criteria, catching all the sensitivities, punching through the five-sigma of everybody's vision sensitivities -- whatever a given user is sensitive to ("wow, motion looks amazing on that!"). As long as the price is right (cheap 1000fps frame rate amplifiers, affordable three-figure-priced displays, etc), they'll buy it up without caring about the details as much as we might. Yeah, I am dreaming, but progress should never stop! ;)
