"ultra" setting in low latency mode. question.

Everything about input lag. Tips, testing methods, mouse lag, display lag, game engine lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
cluelessgamer
Posts: 8
Joined: 04 Mar 2020, 04:12

"ultra" setting in low latency mode. question.

Post by cluelessgamer » 05 Mar 2020, 11:08

Wondering why the "Ultra" setting in NVCP (or even the "On" setting) would increase input lag in games if the GPU is NOT maxed out or bound, like in the test in Battle(non)sense's video:
https://youtu.be/7CKnJ5ujL_Q?t=502

I've read on here you guys saying that the pre-rendered frames queue is effectively (0) until the game becomes GPU-bound. Does this also mean the game will only use the option I set if it has to? So what would "Ultra" be if it's lower than MPRF (1) "On" and can't go any higher than the "Ultra" value I set it to?

I'm just wondering why "Ultra" would cause extra input lag on a non-bottlenecked GPU. Would this mean turning it to "Off" is a better option than "Ultra" or even "On" if the game is not GPU-bound?

What effect does this also have on the CPU when it's running at a very high percentage/maxed at 100%, and when it's not?
Does it set it to (0) as well if the CPU isn't bottlenecked?
Does it affect latency in a bad way when maxed or not maxed, the same way it does the GPU? Especially in correlation with the NVCP Low Latency Mode options?

This topic has me so confused in regard to CPU and GPU limited vs. not limited with the three available options in NVCP for Low Latency Mode. Sorry for over-asking; trying to touch all the bases. Any help explaining this to me and others would be appreciated.

A suggestion on what to set Low Latency Mode to for each scenario would also be greatly appreciated. I'd love to understand this more and get the lowest latency I can in each scenario using Low Latency Mode in NVCP.

CPU bound / GPU bound
CPU bound / GPU not bound
CPU not bound / GPU bound
CPU not bound / GPU not bound

jorimt
Posts: 1140
Joined: 04 Nov 2016, 10:44

Re: "ultra" setting in low latency mode. question.

Post by jorimt » 05 Mar 2020, 12:11

cluelessgamer wrote:
05 Mar 2020, 11:08
Wondering why the "Ultra" setting in NVCP (or even the "On" setting) would increase input lag in games if the GPU is NOT maxed out or bound, like in the test in Battle(non)sense's video:
https://youtu.be/7CKnJ5ujL_Q?t=502
In non-G-SYNC scenarios, according to Nvidia, "Ultra" LLM uses a "just-in-time" frame delivery mechanism. In non-GPU-bound situations, it may occasionally time delivery of a frame too late, missing a fixed refresh window and causing the previous frame to repeat once instead of the next frame showing. Even if this only happens every few seconds, each of these repeated frames could cause a minor average input lag increase over LLM "On" or "Off" in non-GPU-bound situations.
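The repeat-frame mechanism described above can be sketched with a toy simulation (my own illustrative model with made-up margin and jitter numbers, not NVIDIA's actual algorithm): frames aimed "just in time" at a fixed 60 Hz deadline occasionally overshoot it, and each miss adds a full refresh of lag to the average.

```python
import random

random.seed(1)

REFRESH_MS = 1000 / 60        # one 60 Hz refresh window
SAFETY_MARGIN_MS = 1.0        # assumed aim point before the scanout deadline
JITTER_MS = 0.8               # assumed render-time jitter (standard deviation)
FRAMES = 10_000

missed = 0
extra_lag_ms = 0.0
for _ in range(FRAMES):
    # frame aims to finish SAFETY_MARGIN_MS before scanout, plus random jitter
    finish_offset = -SAFETY_MARGIN_MS + random.gauss(0, JITTER_MS)
    if finish_offset > 0:           # finished after the deadline
        missed += 1
        extra_lag_ms += REFRESH_MS  # previous frame repeats; that input waits a refresh

avg_added_lag = extra_lag_ms / FRAMES
print(f"missed refresh windows: {missed}/{FRAMES}")
print(f"average added lag: {avg_added_lag:.2f} ms")
```

With these made-up numbers, even a roughly one-in-ten miss rate is enough to add a millisecond or two of average lag, which is the right order of magnitude for the small differences seen in the video.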

At least that's the idea, and a possible explanation for what Battle(non)sense was seeing there.
cluelessgamer wrote:
05 Mar 2020, 11:08
I've read on here you guys saying that the pre-rendered frames queue is effectively (0) until the game becomes GPU-bound.
Not quite; the pre-rendered frames queue is effectively "0" until the framerate is no longer limited by an FPS cap + until the game becomes GPU-bound. Typically:

1. Framerate limited by FPS cap + non-GPU-bound = pre-rendered frames queue (and MPRF/LLM settings) not in effect
2. Fluctuating framerate + GPU-bound/non-GPU-bound OR Framerate limited by FPS cap + GPU-bound = pre-rendered frames queue (and MPRF/LLM settings) in effect

Additionally:

A. GPU-bound = a constant, higher amount of queued pre-rendered frames based on the currently set queue value
B. Not GPU-bound = a fluctuating, lower amount of queued pre-rendered frames based on the currently set queue value

It can obviously bounce between scenario #1 & #2 depending on system performance at any given point, but as far as what is currently known, it really should be as simple as that.
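The two scenarios above can be illustrated with a small producer/consumer sketch (my own toy model with made-up frame times, not actual driver code): when the GPU is the bottleneck, the queue hugs its cap; when an FPS cap paces the CPU, the queue stays near empty and the queue setting barely matters.

```python
def simulate_queue(queue_cap, cpu_frame_ms, gpu_frame_ms, n_frames=5000):
    # Toy model: the CPU enqueues prepared frames, the GPU dequeues them.
    # Returns the mean queue depth left behind each time the GPU takes a frame.
    queue, next_cpu, next_gpu = 0, 0.0, 0.0
    depths = []
    while len(depths) < n_frames:
        if queue < queue_cap and next_cpu <= next_gpu:
            # CPU has room and time: prepare and enqueue another frame
            queue += 1
            next_cpu += cpu_frame_ms
        elif queue == 0:
            # GPU starved: it has to wait for the next CPU frame
            next_gpu = next_cpu
        else:
            # GPU takes a frame off the queue and renders it
            queue -= 1
            depths.append(queue)
            next_gpu += gpu_frame_ms
    return sum(depths) / len(depths)

# GPU-bound: CPU prepares frames faster (4 ms) than the GPU renders (10 ms),
# so the queue sits near its cap, adding queued-frame latency.
gpu_bound = simulate_queue(queue_cap=3, cpu_frame_ms=4, gpu_frame_ms=10)
# FPS-capped + not GPU-bound: CPU paced to ~60 FPS while the GPU keeps up,
# so the queue stays near empty (MPRF/LLM setting effectively not in effect).
fps_capped = simulate_queue(queue_cap=3, cpu_frame_ms=16.7, gpu_frame_ms=10)
print(f"GPU-bound mean queue depth:  {gpu_bound:.2f}")
print(f"FPS-capped mean queue depth: {fps_capped:.2f}")
```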
cluelessgamer wrote:
05 Mar 2020, 11:08
Does this also mean the game will only use the option I set if it has to? So what would "Ultra" be if it's lower than MPRF (1) "On" and can't go any higher than the "Ultra" value I set it to?
In theory, your pre-rendered frames queue is only "up to," not "always." In other words, if (legacy) MPRF is set to "4," it doesn't mean the pre-rendered frames queue is a static "4," it means the queue can be filled with anywhere from 0-4 pre-rendered frames at any given point.
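The "up to, not always" behavior can be pictured as a bounded random walk (purely illustrative, not driver behavior): with MPRF set to 4, momentary swings in load move the queue anywhere between 0 and 4.

```python
import random

random.seed(7)

MPRF = 4            # queue can hold "up to" 4 frames, never a static 4
queue = 0
depths = []
for _ in range(1000):
    queue = min(MPRF, queue + random.choice([0, 1, 2]))  # CPU prepares 0-2 frames
    queue = max(0, queue - random.choice([0, 1, 2]))     # GPU consumes 0-2 frames
    depths.append(queue)

print(f"queue depth over time: min {min(depths)}, max {max(depths)}")
```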
cluelessgamer wrote:
05 Mar 2020, 11:08
I'm just wondering why "Ultra" would cause extra input lag on a non-bottlenecked GPU. Would this mean turning it to "Off" is a better option than "Ultra" or even "On" if the game is not GPU-bound?
As far as is known, if you're not GPU-bound, it's better to have LLM set to "On."
cluelessgamer wrote:
05 Mar 2020, 11:08
What effect does this also have on the CPU when it's running at a very high percentage/maxed at 100%, and when it's not?
Does it set it to (0) as well if the CPU isn't bottlenecked?
Does it affect latency in a bad way when maxed or not maxed, the same way it does the GPU? Especially in correlation with the NVCP Low Latency Mode options?

This topic has me so confused in regard to CPU and GPU limited vs. not limited with the three available options in NVCP for Low Latency Mode. Sorry for over-asking; trying to touch all the bases. Any help explaining this to me and others would be appreciated.

A suggestion on what to set Low Latency Mode to for each scenario would also be greatly appreciated. I'd love to understand this more and get the lowest latency I can in each scenario using Low Latency Mode in NVCP.

CPU bound / GPU bound
CPU bound / GPU not bound
CPU not bound / GPU bound
CPU not bound / GPU not bound
I don't think it's as complicated as all that; again, anything to do with pre-rendered frames typically really is as simple as whether the FPS is fluctuating or limited by a cap + GPU-bound/not GPU-bound.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Display: Acer Predator XB271HU OS: Windows 10 Pro MB: ASUS ROG Maximus X Hero CPU: i7-8700k GPU: EVGA GTX 1080 Ti FTW3 RAM: 32GB G.SKILL TridentZ @3200MHz

cluelessgamer
Posts: 8
Joined: 04 Mar 2020, 04:12

Re: "ultra" setting in low latency mode. question.

Post by cluelessgamer » 05 Mar 2020, 15:25

Thank you so much for the answers. That helps out a ton, puts my mind at ease about the setting, and helps me understand a lot better.

Much appreciated that you took the time to explain, coming from a reputable guy like you nonetheless. I look to your G-SYNC write-ups and refer people to them anytime they're wondering themselves.

I guess one last question on input latency, if you don't mind. If possible, should I give my CPU some breathing room, just like the GPU, all things considered, for lower input lag generally? Does maxed CPU usage really affect input latency too much in most scenarios?

Kind of like how we saw in some games that breathing room on the GPU allowed for lower input latency, so the same for the CPU? I guess that's what I'm wondering.

jorimt
Posts: 1140
Joined: 04 Nov 2016, 10:44

Re: "ultra" setting in low latency mode. question.

Post by jorimt » 05 Mar 2020, 16:05

cluelessgamer wrote:
05 Mar 2020, 15:25
Does maxed CPU usage really affect input latency too much in most scenarios?
Concerning input lag directly, no, not that I know of.

Maxed CPU usage more typically affects overall system performance, which could indirectly affect input lag through a decrease in overall framerate (mostly minimum FPS) and more uneven frametime performance (aka sporadic and/or recurring stutter, micro or otherwise). But since the CPU is the "beginning" of the chain in this respect (at least in relation to how the CPU hands off frame info to the GPU; it's a one-way street), there's not much you can do in instances of max CPU usage other than reduce CPU-heavy graphical effects and/or up the internal resolution.
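The indirect effect can be put in rough numbers (illustrative FPS figures only, not measurements): a pegged CPU mainly drags down minimum FPS, and the slowest frames are what you feel during the worst moments of input response.

```python
def frame_ms(fps):
    # time to deliver one frame at a given framerate
    return 1000 / fps

scenarios = [
    # (label, average FPS, minimum FPS) -- made-up example numbers
    ("CPU with headroom", 120, 100),
    ("CPU pegged at 100%", 120, 45),
]
for label, avg_fps, min_fps in scenarios:
    dip = frame_ms(min_fps) - frame_ms(avg_fps)
    print(f"{label}: avg frame {frame_ms(avg_fps):.1f} ms, "
          f"worst frames {frame_ms(min_fps):.1f} ms (+{dip:.1f} ms during dips)")
```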

cluelessgamer
Posts: 8
Joined: 04 Mar 2020, 04:12

Re: "ultra" setting in low latency mode. question.

Post by cluelessgamer » 06 Mar 2020, 02:13

Thanks so much, jorimt, your time is appreciated. You nailed what I was concerned about. Have a good one!

jorimt
Posts: 1140
Joined: 04 Nov 2016, 10:44

Re: "ultra" setting in low latency mode. question.

Post by jorimt » 06 Mar 2020, 08:13

cluelessgamer wrote:
06 Mar 2020, 02:13
Thanks so much, jorimt, your time is appreciated. You nailed what I was concerned about. Have a good one!
No prob, you too ;)

Ashun
Posts: 21
Joined: 06 Jan 2014, 21:12

Re: "ultra" setting in low latency mode. question.

Post by Ashun » 07 Mar 2020, 19:17

cluelessgamer, carrying on our conversation from YouTube, you've asked a great question.

I didn't catch this in Battle(non)sense's original video, but I think you were referring to this:
https://youtu.be/7CKnJ5ujL_Q?t=458

On the NVIDIA side, he did find that turning ULLM OFF with the 60 FPS cap reduced input lag by about 4 ms. But I didn't actually test that scenario, so your question made me very curious, and this afternoon I ran my input lag measurements for ULLM On and Off, but this time I captured 64 responses for each:

https://www.aperturegrille.com/features ... -Vs-On.png

This shows my lag measurements taken in UE4 with two internal caps: 100 FPS and 60 FPS, and ULLM On and Off for each. In my tests, ULLM On is beating ULLM Off by a fraction of a millisecond. I don't know if there's any real difference here... to suss that out, I'd need to take hundreds more measurements, but I'm confident enough to say you wouldn't notice 0.7 ms of latency.
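To illustrate why 64 samples may not resolve a difference of roughly 0.7 ms, here is a standard-error sketch with synthetic numbers chosen only to mimic the scale of these measurements (not Ashun's actual data):

```python
import math
import random
import statistics

random.seed(42)

def fake_samples(mean_ms, jitter_ms, n):
    # synthetic lag readings scattered around a chosen mean
    return [random.gauss(mean_ms, jitter_ms) for _ in range(n)]

on = fake_samples(20.0, 3.0, 64)    # hypothetical ULLM On readings
off = fake_samples(20.7, 3.0, 64)   # hypothetical ULLM Off readings

diff = statistics.mean(off) - statistics.mean(on)
# standard error of the difference between two independent sample means
se = math.sqrt(statistics.variance(on) / len(on) +
               statistics.variance(off) / len(off))
print(f"measured difference: {diff:.2f} ms, standard error: {se:.2f} ms")
```

With a few milliseconds of shot-to-shot jitter, the standard error of the difference at n=64 is around half a millisecond, comparable to the difference itself; distinguishing it from noise would indeed take hundreds more measurements.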

I don't know how or why Battle(non)sense measured that increase in lag with ULLM, but I'm not seeing that in Unreal, and I can't find any reason why you shouldn't turn ULLM on.

cluelessgamer
Posts: 8
Joined: 04 Mar 2020, 04:12

Re: "ultra" setting in low latency mode. question.

Post by cluelessgamer » 07 Mar 2020, 19:44

Yeah, that's the vid I saw it in.

Amazing info, Ashun, and thanks for taking the time out of your day to retest that for everyone.

So I guess it seems, for UE4 at least and most likely in general, that as you guys stated, Ultra seems to be the best bet to the current knowledge, I'd assume.

Since the frames aren't fluctuating and the game isn't GPU-bound, even with LLM set to "Ultra," it hands off results as good as, or the same as, Low Latency Mode not affecting anything and basically running at a (0) value, since it's not in use. But as soon as it needs to kick in, it's already set to the best option for latency, limiting the queue to the lowest we can get through NVCP.

I hope I got pretty close to that as an understanding. So Ultra it is!

And thanks once again to you guys for going out of your way to check this for nitpickers like myself. It's a real head-clearer, and definitely some valuable info for helping people get the best optimization and understanding of all this.

jorimt
Posts: 1140
Joined: 04 Nov 2016, 10:44

Re: "ultra" setting in low latency mode. question.

Post by jorimt » 07 Mar 2020, 20:21

Ashun wrote:
07 Mar 2020, 19:17
I don't know how or why Battle(non)sense measured that increase in lag with ULLM, but I'm not seeing that in Unreal, and I can't find any reason why you shouldn't turn ULLM on.
I think the Battle(non)sense results he was referring to were with VRR off and V-SYNC off, and as far as I'm aware, Ultra LLM functions differently in fixed refresh scenarios; fixed refresh Ultra = "just-in-time" frame delivery + MPRF "1," whereas VRR Ultra = MPRF "1" + auto FPS limit below the refresh rate, as VRR already delivers frames "just-in-time."

Would be interesting to see what you find if you tested the same thing without adaptive sync (though with how your device works, that might be problematic when testing standalone V-SYNC off? E.g. top screen only readings vs. screen-wide first reaction readings).
cluelessgamer wrote:
07 Mar 2020, 19:44
as you guys stated, Ultra seems to be the best bet to the current knowledge, I'd assume.
Not sure if you use G-SYNC, but if you do, I actually currently recommend LLM "On" if you intend to use a manual FPS limit. This is because Ultra sets an auto FPS limit (with G-SYNC enabled); if you're already using the recommended -3 RTSS or in-game FPS limit, then at higher refresh rates, Ultra will set its own limit below that, preventing the other two methods from kicking in (and in-game limiters typically have 1 frame or less input lag than external limiters).
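For illustration, here is how Ultra's G-SYNC auto cap can undercut the manual "-3" limit at higher refresh rates. The auto-cap formula below is a community-reported approximation of the observed behavior, not an official NVIDIA one:

```python
def manual_cap(refresh_hz):
    # the commonly recommended "-3 FPS" manual G-SYNC limit
    return refresh_hz - 3

def ultra_auto_cap(refresh_hz):
    # community-reported approximation of Ultra's automatic G-SYNC cap
    return refresh_hz - refresh_hz * refresh_hz / 3600.0

for hz in (60, 144, 240):
    auto, manual = ultra_auto_cap(hz), manual_cap(hz)
    first = "Ultra's auto cap" if auto < manual else "the manual cap"
    print(f"{hz} Hz: manual {manual} FPS vs auto ~{auto:.0f} FPS "
          f"-> {first} engages first")
```

At 60 Hz the manual limit is the lower of the two, but at 144 Hz and above the auto cap lands below it, which is why the manual limiter never engages there.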

However, if you don't use VRR, so long as what Battle(non)sense found in his fixed refresh ULLM test scenarios ends up being a fluke, I'd also assume "Ultra" is better to use than "On" in non-VRR, GPU-bound scenarios.

Ashun
Posts: 21
Joined: 06 Jan 2014, 21:12

Re: "ultra" setting in low latency mode. question.

Post by Ashun » 08 Mar 2020, 02:15

Hah. I didn't notice that Battle(non)sense wasn't using variable refresh. Why isn't he using VRR? :)

Well, this had me curious enough to pull out the light probe again. You're right about the V-Sync off measurements, which complicates things a bit for me. Since I'm using a light probe at the top of the screen, I need to take a lot of measurements to find the best response, so I went back to taking 255.

Apologies for the enormous image! Here are the responses:
https://www.aperturegrille.com/features ... ll-Big.png

The spread of those results will be exaggerated by how I test, but what we really care about are the best responses:
https://www.aperturegrille.com/features ... rt-Big.png

At close to no GPU load with an internal cap of 60, ULLM has slightly more lag, 5.2 ms vs 3.5 ms for the others. I don't know if that's real. At 90% GPU use (also capped at 60), ULLM is now slightly better than the others, 16.5 ms vs 17.7 ms, but again, not sure if that's meaningful. And at full GPU use (cap disabled), ULLM is one frame better than the others.

Doesn't look much different than the adaptive-sync results. Perhaps there's a real 1.7 ms penalty at low GPU use, but I can't say for certain right now.
