G-Sync and Cloned Displays

Talk about NVIDIA G-SYNC, a variable refresh rate (VRR) technology. G-SYNC eliminates stutter and tearing, and reduces input lag. List of G-SYNC Monitors.
Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: G-Sync and Cloned Displays

Post by Sparky » 17 Feb 2014, 20:45

Chief Blur Buster wrote:
Sparky wrote:Well, you'd still have g-sync on the 50-60fps range, with low latency because you're not limiting framerate with backpressure.
You definitely do lose visual benefits of GSYNC in the 60-70-80-90-100-110+ range, which I do clearly notice. Losing the 60-80fps range would be a huge loss of visual fidelity.
Very true, but if you can usually hit a multiple of your streaming framerate, capping it there will drastically improve stream quality.
Sparky wrote:As long as you can keep up with the framerate cap your stream will avoid the stutter associated with triple buffering, the tradeoff is that you could get a higher framerate if you can live with a stuttering stream.
Only if you're perfectly maintaining the frame rate cap, since 50fps is essentially guaranteed to stutter in a 30fps or 60fps stream. You also lose access to the visual fluidity of the continuously maintained "perfect X fps@X Hz" look of GSYNC -- for example, 90fps looks like 90fps@90Hz, roughly in between perfectly synced 60fps@60Hz and 120fps@120Hz.

Most Twitch streams have stutter built in anyway, because most competitive players play VSYNC OFF, so you're getting microstutter from the fluctuating frame rates. (From the stream's perspective it's essentially triple-buffer microstutter, since the tearing itself is typically not recorded; streaming is almost never true WYSIWYG in terms of identical stutter-for-stutter, tearing-for-tearing reproduction. This will probably continue in the GSYNC era.)
If you have a constant 50fps framerate it will look terrible at 30hz, no denying that, but if you can usually maintain 60 you can trade low latency for smoothness by buffering ahead and being smart about which frames you choose to drop. For example, instead of delaying a frame by 30ms like you would with normal 30hz v-sync, you can advance it by 3ms. When your buffer gets low and there's little motion, you can use an extra frame to catch back up.
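
To make the cadence problem concrete, here is a minimal Python sketch (purely illustrative, not any real capture pipeline) that maps a constant-fps source onto a fixed 30Hz output by grabbing the newest completed frame at each output tick. A 50fps source produces an uneven 1-2-2 frame cadence (judder), while a 60fps source decimates cleanly to every second frame:

def cadence(source_fps, output_hz=30, ticks=10):
    # frame i of the source is assumed complete at time i / source_fps
    frame_times = [i / source_fps for i in range(source_fps * 2)]
    picks = []
    for t in range(ticks):
        tick_time = t / output_hz
        # newest source frame already completed at this output tick
        newest = max(i for i, ft in enumerate(frame_times) if ft <= tick_time)
        picks.append(newest)
    return picks

print(cadence(50))  # [0, 1, 3, 5, 6, 8, 10, 11, 13, 15] -> uneven 1-2-2 steps (judder)
print(cadence(60))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] -> even steps of 2 (smooth)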

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: G-Sync and Cloned Displays

Post by Chief Blur Buster » 18 Feb 2014, 09:45

Sparky wrote:For example, instead of delaying a frame by 30ms like you would with normal 30hz v-sync, you can advance it by 3ms. When your buffer gets low and there's little motion, you can use an extra frame to catch back up.
Only with double buffering.
That doesn't happen with proper low-latency triple buffering.
For an explanation of how the low-latency triple buffering implementation works, see AnandTech: Triple Buffering, Why We Love It.

(This is not the high-lag chained triple buffering, but the low-lag triple buffering algorithm that has less lag than double buffering VSYNC ON).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: G-Sync and Cloned Displays

Post by Sparky » 18 Feb 2014, 11:04

I'm not talking about current buffering arrangements; I'm suggesting a new one specifically for capture cards, in which frames are dropped if they won't fit nicely into the given refresh rate. You have the normal G-sync stream, which is double buffered, but instead of being overwritten immediately after being sent to the monitor, those frames are compared with earlier ones, and the GPU decides which is best to drop in order to fit into a 30Hz v-sync stream.

Hypothetical arrangement:
Let's start with 5 framebuffers: 0, 1, 2, 3, 4.
Buffers 0-3 each hold a frame that took 16ms to render; buffer 4 is working on a frame that will take 25ms.
Buffer 0 is being displayed on the v-sync output; buffer 3 is being displayed on the g-sync output.
Every 33ms, buffer 0 is recycled and replaced by buffer 1, 2, or 3. (If buffer 2 or 3 is selected, you'd recycle the older buffers as well.)
Whenever buffer 4 finishes rendering, buffer 3 is moved into buffer 1 or 2. If those are full, 1, 2, or 3 will be dropped to make room for g-sync to continue normally. (When I say move, I'm not suggesting the data be moved, just the pointer to it; it will take extra RAM, but it shouldn't take much extra bandwidth.)
Buffer 1 or 2 may be recycled early if the frame in it definitely won't be used.

How does the GPU decide which frame to drop for the v-sync output? In this scenario it's pretty easy: the GPU would display the frames that are currently in buffers 0, 2, and 4, dropping 1 and 3 sometime after the g-sync output is done with them. This every-other-frame situation might continue until the g-sync and v-sync streams drift enough that there's an extra empty buffer, at which point the v-sync stream may catch back up.

Selecting which frame to include in the v-sync output is where you'd be doing the conversion from variable framerate to a fixed framerate, so this is where you can make your algorithm as complicated as you want to, comparing render time or motion between frames before deciding which to drop, maybe even interpolating.
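
Here's a rough Python sketch of that hypothetical arrangement (a simplified model, not driver code; the pool size and frame timings are taken from the description above, and the drop rule used here, recycling the oldest held frame that isn't the best fit for the next tick, is one plausible reading of it). Finished frames are held in a small pool instead of being recycled the moment the g-sync output is done with them, and the 30Hz output picks whichever held frame best fits its next tick:

from collections import deque

POOL_SIZE = 5            # framebuffers 0..4 from the description above
VSYNC_INTERVAL = 1 / 30  # the fixed-rate (cloned/capture) output

def simulate(render_times):
    held = deque()       # finished frames not yet recycled: (frame index, finish time)
    t = 0.0              # current time in seconds
    next_tick = 0.0      # next 30 Hz output deadline
    chosen = []          # frames picked for the fixed-rate output

    for i, rt in enumerate(render_times):
        t += rt                       # frame i finishes; the g-sync output shows it immediately
        held.append((i, t))
        if len(held) > POOL_SIZE - 1:              # keep one buffer free for rendering
            # drop the oldest held frame that is not the best fit for the next 30 Hz tick
            best = min(held, key=lambda f: abs(f[1] - next_tick))
            held.remove(next(f for f in held if f is not best))
        while held and t >= next_tick:             # a 30 Hz deadline has passed
            best = min(held, key=lambda f: abs(f[1] - next_tick))
            chosen.append(best[0])
            while held and held[0][1] <= best[1]:  # recycle the chosen frame and older ones
                held.popleft()
            next_tick += VSYNC_INTERVAL
    return chosen

# Four 16 ms frames, then a string of 25 ms frames (as in the example above):
print(simulate([0.016] * 4 + [0.025] * 8))  # [0, 1, 3, 4, 6, 7, 8, 10] -- frames 2, 5 and 9 never reach the 30 Hz output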

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: G-Sync and Cloned Displays

Post by Chief Blur Buster » 18 Feb 2014, 11:25

Sparky wrote:I'm not talking about current buffering arrangements, I'm suggesting a new one specifically for capture cards, in which frames are dropped if they won't fit nicely into the given refresh rate.
Yes -- the algorithm that you described behaves like triple buffering running in parallel with the GSYNC output. Even though you don't call it triple buffering, it would visually behave in the same way (same kind of microstutter) if you let the framerate float upwards/downwards. It's just a different wording to describe almost the same thing.

From the perspective of the 3D game, there's often no concept of "dropped frames"; in many implementations, that's handled downstream somewhere in the driver code. So if we focus closer to the game engine, we can clone the frames and then operate the frame buffers independently, in separate framebuffer streams using separate framebuffer algorithms (as you've described), while letting GSYNC handle its own frame buffering. Essentially it's like cloning the frame buffers as they hit the drivers from the game (Direct3D Present(), for example) and executing completely separate framebuffering algorithms on the separate framebuffer streams. One framebuffer stream could be double buffered, another stream could be triple buffered. One stream could be GSYNC, the other stream could be triple buffered, etc. You get the idea. What you just described is one variation of this. Programmatically you could merge it all and save memory (keep a count of references pointing to each frame buffer, to avoid duplicating them, and dispose of a buffer when no references point to it), but technically and mathematically you're still running completely separate frame buffer algorithms on separate frame buffer streams.
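
A tiny Python sketch of that reference-counting idea (hypothetical, not a real driver or Direct3D API): each Present()ed frame is wrapped in a shared, reference-counted handle, each output stream runs its own buffering rule on the same handles, and a buffer is recycled only when no stream references it any longer.

class FrameHandle:
    def __init__(self, frame_id):
        self.frame_id = frame_id
        self.refs = 0
    def acquire(self):
        self.refs += 1
        return self
    def release(self):
        self.refs -= 1
        if self.refs == 0:
            print(f"frame {self.frame_id}: no stream references it, buffer recycled")

class GsyncStream:
    """Presents every frame as soon as it arrives (variable refresh)."""
    def __init__(self):
        self.current = None
    def present(self, handle):
        if self.current:
            self.current.release()
        self.current = handle.acquire()

class FixedRateStream:
    """Holds the newest frame and releases it on its own fixed-Hz tick."""
    def __init__(self):
        self.pending = None
    def present(self, handle):
        if self.pending:
            self.pending.release()     # replaced before the tick: dropped from this stream
        self.pending = handle.acquire()
    def tick(self):
        if self.pending:
            print(f"fixed-rate output shows frame {self.pending.frame_id}")
            self.pending.release()
            self.pending = None

# On each Present() from the game, clone the *reference*, not the pixels:
gsync, capture = GsyncStream(), FixedRateStream()
for frame_id in range(3):
    handle = FrameHandle(frame_id)
    gsync.present(handle)
    capture.present(handle)
capture.tick()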

Anything that converts a higher framerate to a lower output framerate without tearing, choosing the frametime that most closely matches the output frametime, is -- even if it isn't called triple buffering -- still an algorithm that _visually_ resembles low-latency triple buffering: the kind whose microstutter falls gradually the higher the framerate goes (e.g. 200fps triple buffered has less microstutter than 100fps triple buffered on a 60Hz display -- the stutter vibration amplitude is 1/fps of the motion speed, since more frames are available close to the ideal frame presentation times).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: G-Sync and Cloned Displays

Post by Chief Blur Buster » 18 Feb 2014, 11:32

...Oh, and another way to do this is to simply run two renderers at the same time and create separate GPU renderings for separate streams (e.g. gametimes T+0/60sec, T+1/60sec, T+2/60sec for the capture stream, and gametimes T+0/60sec, T+1.316/60sec, T+2.7416/60sec for the GSYNC stream). GSYNC allows gametimes to stay in sync with frame rendertimes, which stay in sync with frame displaytimes, theoretically to the exact millisecond -- which is why it's stutterfree to the eye. Mathematically, this is critical for GSYNC operation (I can explain why -- this article explains it partially). So to keep perfectly stutterfree video AND perfectly stutterfree GSYNC... AND let the framerate float up and down, there's no way to pull this feat off without running separate GPU renderings for the separate streams... and that consumes GPU power. (And it's way too complicated anyway.)

I don't believe in degrading gameplay fluidity in order to improve twitch streams. So this goes full circle back to prioritizing stutter elimination for the game player, NOT for the stream. This is the way it has been done for a long time now (and it's necessary when recording 120Hz users playing VSYNC OFF with fluctuating frame rates while broadcasting 30fps or 60fps streams). The lowest-possible-stutter method is a high-quality algorithm that chooses the GSYNC frame whose frametime is closest to the ideal presentation time of each stream frame. The higher the framerate, the more frames will fall close to the proper times, and the less stutter will show up in the stream -- in exactly the same way that 200fps triple-buffered at 60Hz can have less microstutter than 100fps triple-buffered at 60Hz.

GSYNC frames
T+0.00/60sec -- use this for stream, closest to stream T+0/60
T+0.73/60sec
T+1.15/60sec -- use this for stream, closest to stream T+1/60
T+1.77/60sec
T+2.15/60sec -- use this for stream, closest to stream T+2/60
T+2.98/60sec -- use this for stream, closest to stream T+3/60
T+3.73/60sec -- use this for stream, closest to stream T+4/60
T+4.51/60sec
T+4.98/60sec -- use this for stream, closest to stream T+5/60
T+5.25/60sec
T+5.64/60sec
T+6.11/60sec -- use this for stream, closest to stream T+6/60
etc.

This will guarantee the theoretical minimum possible microstutter for the stream with zero compromises for the game player (a fully floating GSYNC frame rate up to the full Hz limit), short of going to the "two separate GPU renderings" approach of rendering 3D frames from scratch for both streams (which would be a waste of GPU resources).

At 200fps @ 60Hz, 1000 pixels per second panning would only have a 5-pixel-amplitude microstutter edge-vibration effect (1/200th of 1000 pixels/second). Obviously, the higher the framerate, the smaller the amplitude of the microstutter edge-vibration effect (confirmed in tests too) -- this is the theoretical mathematical best case scenario for unsynchronized situations (e.g. uncapped framerate VSYNC OFF situation, uncapped framerate triple-buffered situation, etc)
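
For clarity, the selection rule behind the table above, as a short Python sketch (illustrative only):

gsync_times = [0.00, 0.73, 1.15, 1.77, 2.15, 2.98, 3.73, 4.51,
               4.98, 5.25, 5.64, 6.11]   # frame times from the table, in units of 1/60 sec

def pick_for_stream(frame_times, ticks):
    # for each fixed-rate stream tick, pick the GSYNC frame whose time is closest to that tick
    return [min(frame_times, key=lambda ft: abs(ft - tick)) for tick in ticks]

chosen = pick_for_stream(gsync_times, ticks=range(7))
print(chosen)  # [0.0, 1.15, 2.15, 2.98, 3.73, 4.98, 6.11] -- matches the table above
# Worst-case selection error here is 0.27/60 sec (frame 3.73 chosen for tick 4);
# the higher the GSYNC framerate, the smaller this error, hence less stream microstutter.
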
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: G-Sync and Cloned Displays

Post by Sparky » 18 Feb 2014, 12:14

Well, the only way to have v-sync with a higher framerate than your refresh rate is to drop frames, and I'm not aware of anything that implements v-sync this way. If you know of anything that does, I'm definitely curious.

At the same framerates and refresh rates, I don't think low-latency buffering can look as smooth as a good implementation of high-latency buffering, because it can't pull a late frame forward to display it in the slot it's supposed to be displayed in. If your framerate is a multiple of your refresh rate, you can get your animation interval to match the refresh interval exactly.
Chief Blur Buster wrote:e.g. 200fps triple buffered has less microstutter than 100fps triple buffered on a 60Hz display
This is true, but 180fps triple-buffered v-sync would have less microstutter on a 60Hz display than 200fps, because the framerate is a multiple of the refresh rate.
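
A quick Python sketch of why the exact multiple matters (an idealized model, assuming frame i is ready at time i/fps): at each 60Hz refresh the newest completed frame is shown, and the per-refresh step in displayed frames is even at 180fps but uneven (3-3-4) at 200fps; that unevenness is the microstutter.

def displayed_steps(fps, refresh_hz=60, refreshes=10):
    # index of the newest frame available at each refresh
    shown = [r * fps // refresh_hz for r in range(refreshes)]
    return [b - a for a, b in zip(shown, shown[1:])]

print(displayed_steps(180))  # [3, 3, 3, 3, 3, 3, 3, 3, 3] -> even steps, no microstutter
print(displayed_steps(200))  # [3, 3, 4, 3, 3, 4, 3, 3, 4] -> uneven steps = microstutter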

Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: G-Sync and Cloned Displays

Post by Sparky » 18 Feb 2014, 12:21

Chief Blur Buster wrote:...The lowest-possible-stutter method is a high-quality algorithm that chooses the GSYNC frame whose frametime is closest to the ideal presentation time of each stream frame. The higher the framerate, the more frames will fall close to the proper times, and the less stutter will show up in the stream...

This will guarantee the theoretical minimum possible microstutter for the stream with zero compromises for the game player (a fully floating GSYNC frame rate up to the full Hz limit), short of going to the "two separate GPU renderings" approach of rendering 3D frames from scratch for both streams (which would be a waste of GPU resources).
Yes, there's a tradeoff between maximizing the framerate you play at and minimizing the stutter on stream. Ultimately, I think the best option is to enable g-sync and cap your in-game framerate at the highest multiple of your stream framerate you can achieve, even if you have to reduce quality settings to maintain that framerate. Say you have a 144Hz g-sync monitor: I would cap the in-game framerate to 120fps and reduce settings until I can almost always hit that framerate. This way the stream gets smooth animation, I get smooth, low-latency animation at a high framerate, and I still have g-sync enabled for the rare circumstance where 120fps can't be maintained.
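
That capping rule, written out (illustrative only):

def stream_friendly_cap(monitor_hz, stream_fps):
    # highest in-game framerate cap that is still an exact multiple of the stream framerate
    return (monitor_hz // stream_fps) * stream_fps

print(stream_friendly_cap(144, 60))  # 120 -> every 2nd game frame lands on a 60fps stream tick
print(stream_friendly_cap(240, 60))  # 240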

As for g-sync staying synchronized (with the game engine) to the exact millisecond, I don't think it's that good, because the game engine decides its animation interval based on previous frames; it can't predict exactly how long this frame will take to render.

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: G-Sync and Cloned Displays

Post by Chief Blur Buster » 18 Feb 2014, 13:00

Good idea about 120fps capping (or even 240fps with future true-240Hz monitors) to get the best Twitch streams most of the time, while still retaining GSYNC benefits.

Now, on the subject of GSYNC, but on a slightly different topic (than this thread) -- in a way that still helps enhance understanding of GSYNC:
Sparky wrote:As for g-sync staying synchronized (with the game engine) to the exact millisecond, I don't think it's that good, because the game engine decides its animation interval based on previous frames; it can't predict exactly how long this frame will take to render.
Exact-millisecond sync is happening routinely.

This is true; some games are much better than others at this. The rendertime error is, however, sub-millisecond in the best games and sometimes negligible. The games where GSYNC fixes stutters most completely are the ones whose frame rendertimes stay most accurately in sync with frame present-times (this is a necessary property of GSYNC stutter improvement). The games with the least stutter are the ones with far more consistent rendertimes. Usually, framerates ramp up/down smoothly, so rendertimes ramp up/down smoothly.

Image

It continuously looked like a permanent "fully capped out VSYNC ON" silky motion effect despite a varying frame rate. As I was turning through 50fps, it looked like perfect 50fps@50Hz. As I was turning through 60fps, it looked like perfect 60fps@60Hz. As I was turning through 70fps, it looked like perfect 70fps@70Hz, and so on. This is one of the bigger benefits of GSYNC: the ability to make a game maintain the "perfect smooth capped-out look" at fluctuating frame rates.

When I did this, it appeared stutter-free (at least random-stutter free; I still see the regular stop-motion effect of low-framerate motion when I get closer to 30fps -- but the erratic stutters are gone). I couldn't see the stutters normally associated with framerate changes; it was a smooth framerate-ramping effect, like a CVT (continuously variable transmission), with no stepped effect (erratic stutters). This situation is where GSYNC really shines -- otherwise GSYNC isn't really all that useful of a feature; 40fps GSYNC looked visually better than a floating 50-70fps VSYNC OFF (less stutter, less tearing).

Assuming you take approximately 1 second to move smoothly from 50fps to 100fps, GPU rendertimes vary only gradually. The frame rendertimes would vary by only hundreds of microseconds between consecutive frames, relative to the photons hitting the eyes. For example, going from 50fps to 51fps on the next frame, the rendertime changes from 1/50th sec to 1/51th sec (~400 microseconds of GPU rendering variance), and going from 99fps to 100fps on the next frame, it changes from 1/99th sec to 1/100th sec (~100 microseconds of GPU rendering variance). Here, the dis-synchronization of rendertimes away from presenttimes is a statistically insignificant factor, and GSYNC performs admirably as advertised.

Now, when framerates change with single-frame-granularity randomization, e.g. 50fps suddenly going to 70fps, then 30fps, then 100fps, there would be enough variance between game times and the light hitting the eyes (due to GPU rendering variances) to create stutters that show up through GSYNC. Some game engines still stutter with GSYNC, and I would presume part of this effect is because of this. Explosions, sudden physics processing, trigger mechanisms, and doors opening can create massively dramatic framerate swings that amplify stutters. But if GPU rendertimes remain reasonably consistent, GSYNC can still smooth much of that stutter out: e.g. if 60fps suddenly changes to 45fps on the next frame, the GPU rendertime changes from 1/60th sec to 1/45th sec, roughly a 5.6 millisecond change for that single frame, and reasonable sudden changes in framerate can still look noticeably more stutterfree than they would at a fixed refresh rate.
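
For reference, those rendertime deltas computed directly (a quick sketch):

# rendertime change between consecutive frames for each framerate transition, in milliseconds
for before_fps, after_fps in [(50, 51), (99, 100), (60, 45)]:
    delta_ms = abs(1000 / before_fps - 1000 / after_fps)
    print(f"{before_fps} fps -> {after_fps} fps: rendertime changes by ~{delta_ms:.2f} ms")
# 50 -> 51: ~0.39 ms;  99 -> 100: ~0.10 ms;  60 -> 45: ~5.56 ms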

Obviously, the error margin is probably bigger than this once all the factors add up (e.g. CPU time variances, mouse poll time variances), but GSYNC becomes a statistically insignificant cause of microstutter in this situation. I even think the world is now ready for 2000Hz mice (reducing mouse-poll microstutter variance to 0.5ms): the Blur Busters audience already knows that LightBoost and GSYNC massively amplify the visibility of microstutter from 500Hz versus 1000Hz mice, hinting that the world might begin to be ready for 2000Hz computer mice (once sensor technology is good enough), especially during a future era of strobed 100Hz+ 4K (since higher resolutions, low persistence and clearer motion make microstutter easier to see).

Theoretically you could "frame-pace" (sort of like what you have to do for SLI) -- e.g. buffer the frames, then time the actual display of each frame to be correct relative to game time, compensating for GPU frame rendering time (adding input lag in exchange for further stutter removal during wildly fluctuating frame rates). As far as I know, GSYNC drivers don't currently do that depth of stutter removal (see: Addition of "frame pacing" to GSYNC); according to this NVIDIA diagram, NVIDIA isn't currently framepacing GSYNC, otherwise panel displaytimes would stay perfectly in sync with the very _beginning_ of GPU rendertimes (rather than the _end_ of GPU rendertimes).

As you pointed out -- currently, GSYNC makes panel display times in sync with _end_ of GPU rendertimes:

Image

Which is often good enough, at least for a lot of game engines (apparently), since stutter-elimination benefits are already immediately observed. I see many situations where GSYNC at 70-100fps looks smoother than triple-buffered 300fps (a microstutter error margin of 3.3ms), so I am not surprised if GSYNC can get to sub-millisecond precision in normal situations. Perhaps this is a situation where Blur Busters would love to borrow a Phantom Flex, to determine how accurately frame times stay in sync with game times.

The ideal perfect-motion scenario would be GSYNC modified/optimized further to stay in sync with the _beginning_ of GPU rendertimes, so that game times are perfectly in sync with panel times (in other words: light from a new frame hitting human eyes at a time accurately synchronized with game time). However, this would only help certain problem game engines, not all situations, and would necessarily add the slight amount of input lag needed to achieve frame pacing (roughly equivalent to SLI input lag)... The ideal motion scenario would not be the ideal latency scenario, as frame pacing necessarily adds lag.

In theory, future NVIDIA drivers might add an extra mode ("enhanced frame-paced GSYNC") that synchronizes refresh timings relative to the _beginning_ of GPU rendertimes rather than the _end_ of GPU rendertimes, which would smooth out stutters in even more games. But I don't know whether enough "problem" game engines exist for such a further GSYNC enhancement to be worth it.
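
A hypothetical Python sketch of what such a frame-paced mode could look like (this is not how the actual GSYNC driver works; the pacing budget is an assumed constant): instead of presenting each frame the instant rendering ends, the frame is held and presented at gametime plus a fixed offset, so photon time tracks game time at the cost of added lag.

PACING_BUDGET = 0.020   # assumed worst-case rendertime we are willing to absorb (seconds)

def schedule_presents(frames):
    """frames: list of (gametime, rendertime) pairs, in seconds."""
    out = []
    for gametime, rendertime in frames:
        render_done = gametime + rendertime
        unpaced = render_done                           # current behaviour: present at end of render
        paced = max(gametime + PACING_BUDGET, render_done)  # frame-paced: fixed offset from game time
        out.append((unpaced, paced))
    return out

frames = [(0.000, 0.012), (0.016, 0.018), (0.032, 0.011), (0.048, 0.019)]
for (unpaced, paced), (gametime, _) in zip(schedule_presents(frames), frames):
    print(f"gametime {gametime*1000:5.1f} ms: unpaced present at {unpaced*1000:5.1f} ms, "
          f"paced present at {paced*1000:5.1f} ms")
# Unpaced presents jitter with rendertime (12.0, 34.0, 43.0, 67.0 ms);
# paced presents land at a constant offset from gametime (20.0, 36.0, 52.0, 68.0 ms).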

Either way, good discussion -- and I know NVIDIA is paying attention to these forums; they're probably getting ideas (at least for the distant future) as we speak.

Capping at 120fps (or even 240fps with future true-240Hz monitors) is a good way to get the best Twitch streams most of the time while still retaining GSYNC benefits. However, GSYNC makes a fluctuating framerate no longer matter: a fluctuating 50-100fps keeps the "consistent, perfectly capped-out look" despite the fluctuating frame rate. That's the GSYNC benefit -- the luxury of letting the framerate fluctuate while keeping it looking smooth throughout. Games like Battlefield 4 and Crysis 3 really shine here, where you don't even notice the difference between 40fps and 50fps (except a 25% difference in motion blur, since blur scales with frametime -- but because the motion blur varies subtly, increasing or decreasing by fractions of a pixel per frame across subsequent frames, the subtly varying motion blur of GSYNC is far less noticeable than tearing/stutter). GSYNC exists to make variable framerates look perfect, and keeping the frame rate fixed defeats a lot of its maximum benefit.

I am kind of getting offtopic, I know -- but this discussion is very interesting, from a vision science perspective nonetheless.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Sparky
Posts: 682
Joined: 15 Jan 2014, 02:29

Re: G-Sync and Cloned Displays

Post by Sparky » 18 Feb 2014, 13:28

There are a few problems with frame pacing; latency is the hardest to solve, because it requires knowing beforehand how long a frame will take to render. I think some old arcade and console game programmers did this kind of prediction to allow just-in-time rendering on a single framebuffer, but those were much simpler times. A somewhat easier problem is getting information about the animation interval out of the game engine and into the part of the GPU driver that does frame pacing; I have no idea how they do this now, but it's probably not as good as it could be if that information were directly available. I assume the frame pacing in SLI is currently done by adjusting backpressure rather than by buffering frames.
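
One common way to attack the prediction half of that problem (illustrative only, not what any particular engine or driver actually does) is to estimate the next rendertime from an exponential moving average of recent rendertimes; it tracks smooth ramps well but misses sudden spikes, which matches why sudden framerate swings still stutter:

def predict_rendertimes(rendertimes_ms, smoothing=0.2):
    estimate = rendertimes_ms[0]
    for actual in rendertimes_ms[1:]:
        error = actual - estimate
        print(f"predicted {estimate:5.2f} ms, actual {actual:5.2f} ms, error {error:+5.2f} ms")
        estimate += smoothing * error   # exponential moving average update

# Smooth ramp: prediction stays within a fraction of a millisecond.
predict_rendertimes([16.7, 16.9, 17.1, 17.3, 17.5])
# Sudden spike (an explosion, a door opening): prediction misses by several milliseconds.
predict_rendertimes([16.7, 16.7, 22.2, 16.7])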

User avatar
Chief Blur Buster
Site Admin
Posts: 11648
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: G-Sync and Cloned Displays

Post by Chief Blur Buster » 18 Feb 2014, 13:38

Sparky wrote:I assume the frame pacing in SLI is currently done by adjusting backpressure, rather than buffering frames.
NVIDIA has been having occasional problems with combining GSYNC and SLI (getting better with new drivers), so I imagine they're trying to tweak the SLI framepacing to be more GSYNC compatible as we speak. I wonder if there are opportunities to combine GSYNC timing information with SLI frame pacing, to reduce SLI microstutters even further.

Yeah, framepacing is problematic from an input lag perspective (even SLI has more input lag than non-SLI), so it's simplest that GSYNC currently presents relative to the end of GPU rendertimes.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

