
NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 05:46
by Glide
I'm currently running the 378.57 hotfix drivers on the latest Windows 10 Insider Build (15031) and noticed some new behavior when using their framerate limiter.

With V-Sync disabled and the framerate limiter set to anything near the refresh rate - 60 FPS in my case - the tear line is now locked to the bottom 2% of the screen.

There is some slight variability where the tear line bounces a few lines up or down, but it never moves from that position on the screen.

With any other framerate limiter (e.g. RTSS) there would normally be significant movement of the tear line, and its position would basically be random.

This seems like a pretty big deal if anyone else is able to confirm it.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 11:37
by lexlazootin
So like, G-Sync?

This is actually pretty cool if true, since I pretty much use G-Sync in this way.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 12:25
by Glide
lexlazootin wrote:So like, G-Sync?
This is actually pretty cool if true, since I pretty much use G-Sync in this way.
More like near-zero-latency v-sync. You still have to keep the framerate at 60 FPS.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 17:22
by knypol
What is "their frame limiter"? U mean NVinspector? If yes i tried it and indeed tear line is static in the bottom part of the screen - so basically dont bother me. But with NVInspector frame limiter (V2) i feel bad input lag (my refresh is 75Hz so i set 75fps). It feels like normal VSYNC ON. When i use ingame limiter i dont feel any input lag but the tear line is moving randomly so basically cant play with it.

P.S. Tested on 378.48, so it probably wasn't introduced in the latest hotfix and existed earlier.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 17:28
by Chief Blur Buster
This method of framerate limiting definitely does reduce latency -- it's like VSYNC ON but with one framebuffer less lag. It doesn't eliminate all of it, though: the frame still has to wait until the raster reaches the bottom -- basically VSYNC OFF with the tearline's position steered by waiting for a raster position. For this type of "raster-synchronous framerate limiter", the average added input lag is half a frame (about 8.3 ms at 60 Hz).

Another consideration is when the input reads occur, e.g. framerate limiters that time the input read at the last minute.

Most Laggy "VSYNC ON"
Double-buffered VSYNC ON combined with compositing: a rendering buffer, a completed frame buffer, and a displayed buffer.

Less Laggy "VSYNC ON"
VSYNC ON or VSYNC OFF with this logic:
[input read] -> [render frame] -> [busywait for raster to reach bottom] -> [flip buffer]

Least Laggy "VSYNC ON"
VSYNC OFF with this logic:
[predictive busywait] -> [input read] -> [render frame] -> [raster already near bottom] -> [flip buffer]

Definition of raster: the term comes from the old CRT days, but the same sequential top-to-bottom scan-out is still transmitted over digital cables to LCD monitors even today. The raster is the current pixel row number, which is where the tearline will occur when the buffer is flipped. The longer you wait before the flip, the lower the tearline will be.

The problem is we don't always know the render time, so it's hard to delay an input read very precisely. Different framerate limiters use very different logic, and each affects the rendering workflow slightly differently, adding or removing lag.
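
For what it's worth, here's a minimal sketch of the "[render frame] -> [busywait for raster to reach bottom] -> [flip buffer]" step above, using Direct3D 9's IDirect3DDevice9::GetRasterStatus to poll the current scanline. The PresentNearBottom name, the 98% threshold, and the assumption that the swap chain was created with D3DPRESENT_INTERVAL_IMMEDIATE (VSYNC OFF) are illustrative only, not any particular driver's or limiter's actual implementation:

#include <d3d9.h>

// Busywait (VSYNC OFF) until the raster has scanned out most of the visible
// frame, then flip. The tearline then lands near the bottom edge instead of
// at a random position, approximating low-lag "VSYNC ON".
void PresentNearBottom(IDirect3DDevice9* device, UINT visibleLines)
{
    // Illustrative threshold: flip once ~98% of the frame has been scanned out.
    const UINT flipLine = (visibleLines * 98) / 100;

    D3DRASTER_STATUS rs = {};
    for (;;)
    {
        if (FAILED(device->GetRasterStatus(0, &rs)))
            break;                          // can't poll the raster; just present
        if (rs.InVBlank || rs.ScanLine >= flipLine)
            break;                          // raster is near the bottom (or in VBlank)
        // Pure busywait; Sleep(0)/YieldProcessor() would trade precision for CPU use.
    }

    // Swap chain assumed created with D3DPRESENT_INTERVAL_IMMEDIATE (VSYNC OFF),
    // so Present() flips immediately and any tearline appears at the current ScanLine.
    device->Present(nullptr, nullptr, nullptr, nullptr);
}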

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 20:11
by Glide
Chief Blur Buster wrote:This method of framerate limiting definitely does reduce latency -- it's like VSYNC ON but with one framebuffer less lag. It doesn't eliminate all of it, though: the frame still has to wait until the raster reaches the bottom -- basically VSYNC OFF with the tearline's position steered by waiting for a raster position. For this type of "raster-synchronous framerate limiter", the average added input lag is half a frame.
Yes, I was perhaps a bit over-enthusiastic but it seems really good so far.
Ideally the tear line could be moved even closer to the bottom of the screen as I do find it distracting in some games, but it's nice to have more options for reducing latency via a driver update.

It would be great if the people with hardware latency testers could measure this, as I know that people have complained about NVIDIA's framerate limiter having higher latency than other solutions like RTSS in the past.
knypol wrote:What is "their framerate limiter"? Do you mean NVIDIA Inspector? If so, I tried it, and the tear line is indeed static in the bottom part of the screen, so it basically doesn't bother me. But with the NVIDIA Inspector frame limiter (V2) I feel bad input lag (my refresh rate is 75Hz, so I set 75 FPS). It feels like normal VSYNC ON. When I use an in-game limiter I don't feel any input lag, but the tear line moves randomly, so I basically can't play with it.
P.S. Tested on 378.48, so it probably wasn't introduced in the latest hotfix and existed earlier.
You can control NVIDIA's framerate limiter via NVIDIA Inspector. (Inspector just edits NVIDIA driver profiles; it doesn't implement its own limiter.)

I didn't see any difference between the v1 and v2 limiters on my system when testing this, but I was only testing at 60Hz.
I know that previously the v2 limiter would cause some older games to crash while the v1 limiter would not, so there is definitely something different between them.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 10 Feb 2017, 22:21
by lexlazootin
OK, so I tested with NVCP 378.49 and NVIDIA Inspector 2.12 in CS:S, and yes, it worked perfectly like you described.

And it's super cool with all the new framerates you can cap at now! http://imgur.com/xKPAhqr

I found 59.7 gave a scrolling tear line, but 60.7 gave a perfectly stable tear line right at the bottom of the screen. With FRLv2, 59, 60 and 61 were all the same, with the tear line at the bottom.

The only problem is that it has V-Sync-like input lag, and it's definitely worse than G-Sync and no sync :( but it does work!

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 11 Feb 2017, 06:27
by Glide
I think you must be using an old version of Inspector if you have those options; it should just limit to whole framerates now. https://ci.appveyor.com/project/Orbmu2k ... /artifacts

It definitely feels lower latency than V-Sync to me. Not as low as without a frame limiter, but definitely lower than V-Sync.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 11 Feb 2017, 10:37
by RealNC
I don't want to shatter your hopes, but whatever NVIDIA is doing there doesn't seem to have any latency benefit. It feels more like frame-limited vsync than vsync off.

RTSS seems to have one frame less input lag, at least when testing with the "drag the console window in CS:GO" trick. But a proper latency test would be preferable, obviously.

Re: NVIDIA may have implemented a synchronous framerate limiter

Posted: 11 Feb 2017, 16:42
by Chief Blur Buster
I haven't studied RTSS closely, but:

Does RTSS use predictive waiting before input reads, to do just-in-time rendering before VSYNC?

Basically, monitor the amount of time it takes to render frames, and then guess when it's time to render the next frame?

Also, a very forgiving rate limiter could permit the page flip to occur anyway, causing only occasional tearing (near the bottom or top edge) if the render-time prediction is incorrect. So predictive waiting before the input read doesn't have to cause a one-frame lag penalty: if the render completes after VSYNC, the page flip still occurs "late", which creates a tearline at the top edge instead of a one-frame delay.

I think game developers & graphics drivers should attempt ultra-low-lag VSYNC ON with this predictive-waiting-before-input-read and also flip-anyway-if-missed-VSYNC. This would cause zero tearing at consistent framerates (easy-to-predict rendertimes) and occasional tearing during unexpected framerate/rendertime changes (incorrect prediction of rendertime).

Basically this special-sauce combo (predictive waiting and flip-anyway-if-late) would create nearly lagless VSYNC ON, and might be extremely useful for GSYNC when hitting the GSYNC framerate limit.
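
To illustrate, here's a hedged sketch of that predictive-waiting + flip-anyway-if-late loop. ReadInput(), RenderFrame() and PresentNoVsync() are hypothetical stubs standing in for real engine and driver calls, and the 1 ms safety margin and the simple exponential smoothing of render time are assumptions; only the pacing logic is the point:

#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;
using DSec  = std::chrono::duration<double>;   // seconds, as double

static void ReadInput()      {}  // stub: sample mouse/keyboard as late as possible
static void RenderFrame()    {}  // stub: CPU + GPU work for one frame
static void PresentNoVsync() {}  // stub: flip with VSYNC OFF (tears only if late)

void RunFrameLoop(double refreshHz)
{
    const DSec period{1.0 / refreshHz};
    const DSec margin{0.001};                  // 1 ms safety margin (assumption)
    DSec predictedRender{0.004};               // seed guess for render time
    std::chrono::time_point<Clock, DSec> nextVblank = Clock::now() + period;

    for (;;)
    {
        // Just-in-time start: leave only the predicted render time plus a margin
        // before the next VBLANK, so the input read happens as late as possible.
        auto startAt = nextVblank - (predictedRender + margin);
        while (Clock::now() < startAt)
            std::this_thread::yield();         // busywait (a coarse sleep also works)

        ReadInput();                           // latest possible input read
        auto t0 = Clock::now();
        RenderFrame();
        DSec renderTime = Clock::now() - t0;

        // Flip even if VBLANK was missed: worst case is a tearline near the
        // top edge instead of a full extra frame of latency.
        PresentNoVsync();

        // Smooth the render-time prediction for the next frame.
        predictedRender = 0.9 * predictedRender + 0.1 * renderTime;
        nextVblank += period;
        if (nextVblank < Clock::now())         // fell badly behind: resynchronize
            nextVblank = Clock::now() + period;
    }
}

When the prediction is right, the frame completes just before VBLANK and there is no visible tearing; when it is wrong, the only penalty is an occasional tearline near the top edge rather than a whole extra frame of latency.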