"ultra" setting in low latency mode. question.

cluelessgamer
Posts: 8
Joined: 04 Mar 2020, 04:12

Re: "ultra" setting in low latency mode. question.

Post by cluelessgamer » 08 Mar 2020, 07:19

So with all the tests, it seems that giving your GPU breathing room lowers input lag pretty substantially, at least in UE4, though this could vary from engine to engine and game to game.

To your guys' knowledge, the best option for the G-SYNC combo is typically: an FPS cap -3 under refresh, G-SYNC and V-SYNC on in NVCP, and V-SYNC off in-game, with Low Latency Mode set to "On" (which is MPRF 1) instead of "Ultra" so it doesn't override the in-game limiter, which is usually known to be the best option other than RTSS. If I understand correctly.
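
Here's a little sketch of how I think of the cap for that combo (Python, just to illustrate; the -3 is the minimum offset from the G-SYNC 101 recommendation):

# Sketch of the G-SYNC combo cap, per the "-3 under" rule of thumb
# (minimum recommended offset; illustrative only).

def gsync_fps_cap(refresh_hz: int, offset: int = 3) -> int:
    """Recommended in-game FPS cap: at least 3 FPS below refresh."""
    return refresh_hz - offset

for hz in (60, 144, 165, 240):
    print(f"{hz}Hz -> cap at {gsync_fps_cap(hz)} FPS")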

for "no sync" with either fluctuating framerates and especailly gpu bound low latency mode to "ultra" is the best option. also in ashuns test it seems if the gpu gets remotely on the higher end with syncs off "ultra" also is the better option. its interesting that potentially the lower the gpu utilization, that "ultra" could potentially slighty effect the latency making it higher. but that could be to what jorim was referring to when he was saying.
in non-G-SYNC scenarios, according to Nvidia, "Ultra" LLM has a "just-in-time" frame delivery mechanism, so in non-GPU-bound situations, it may occasionally time delivery of a frame too late, missing a fixed refresh window, causing the previous frame to repeat once over, instead of the next frame showing. If this even happens every few seconds, each of these repeat frame occurrences could cause a minor average input lag increase over LLM "On" or "Off" in non-GPU-bound situations.
(Sorry, I don't know how to highlight when quoting; hope that worked.)
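
To put a rough number on how minor that could be, here's my own back-of-the-envelope (the once-every-3-seconds repeat rate is a pure assumption):

# Back-of-the-envelope: average lag added by occasional repeat frames
# at a fixed 60Hz refresh (assumed numbers, not measured).

refresh_hz = 60
frame_repeat_penalty_ms = 1000 / refresh_hz   # one extra refresh ~16.7 ms
seconds_between_repeats = 3                   # assumption: one miss every 3 s

frames_between_repeats = refresh_hz * seconds_between_repeats
avg_added_lag_ms = frame_repeat_penalty_ms / frames_between_repeats
print(f"~{avg_added_lag_ms:.2f} ms added on average")  # ~0.09 ms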

So Low Latency Mode set to "On", regardless of sync or not, seems to be what you should set it to at the least.

set to "on" for using gsync so "ultra" doesnt override the ingame limiter which is usually the best option. though im confused on the difference of ullm overiding the in game limiter and the science behind that. im probably over thinking it.

At higher GPU utilization and fluctuating framerates, "Ultra" performs the best if you aren't using any syncs, though it could slightly increase latency at the lower end of GPU utilization.

Though as Ashun said, the times are so tight that it's hard to tell whether "Ultra" is truly having a consistent effect in either case outside of high GPU utilization. Which also makes sense, because in Battle(non)sense's video the difference between ULLM and LLM off at 75% GPU at 60 FPS was smaller, but very marginal; other factors could be affecting it.
As jorimt mentioned, the MPRF is effectively 0 until it needs to start queuing due to framerate fluctuation and/or being GPU-bound.
Though small numbers can add up, that amount of latency is so small that "no sync" gamers would seem to get the most latency benefit from using "Ultra" as a global setting, as you both said earlier.

Though I am curious what the sweet spot could be with LLM set to "Ultra" in correlation to GPU usage percentages, as it seems there's a point where the return becomes minimal, but it hits that point rather quickly in the upper half of percentages.

Assuming I did the math right, lol (even though this probably isn't the case, against my own wondering): with your test at 60 FPS on 165Hz no sync, "Ultra" would be beneficial down to around 20% GPU utilization, assuming everything is perfect in conjunction with your setup and the tests you ran, and that the scaling stayed consistent all the way down to 0%. But against my own question, this is probably not the case at all, as it seems "Ultra" potentially falls off or evens out at higher GPU percentages.

I wonder how much these would change on a per-engine or per-game basis, if hardly at all, but it seems this is just a good general set of guidelines for the settings regardless of how we choose to play.

Once again, guys, I'll never stop saying thanks for the info, the time you take out of your day, and the work you do helping clueless gamers like me understand. I'll try not to get in the way too much :D

jorimt
Posts: 2484
Joined: 04 Nov 2016, 10:44
Location: USA

Re: "ultra" setting in low latency mode. question.

Post by jorimt » 08 Mar 2020, 11:14

Ashun wrote:
08 Mar 2020, 02:15
Hah. I didn't notice that Battle(non)sense wasn't using variable refresh. Why isn't he using VRR? :)
To isolate ULLM differences only, I'd assume.
Ashun wrote:
08 Mar 2020, 02:15
You're right about the V-Sync off measurements, which complicates things a bit for me. Since I'm using a light probe at the top of the screen, I need to take a lot of measurements to find the best response, so I went back to taking 255.

At close to no GPU load with an internal cap of 60, ULLM has slightly more lag, 5.2 ms vs 3.5 ms for the others. I don't know if that's real. At 90% GPU use (also capped at 60), ULLM is now slightly better than the others, 16.5 ms vs 17.7 ms, but again, not sure if that's meaningful. And at full GPU use (cap disabled), ULLM is one frame better than the others.

Doesn't look much different than the adaptive-sync results. Perhaps there's a real 1.7 ms penalty at low GPU use, but I can't say for certain right now.
Perhaps there is a 1-2ms difference at the top of screen, or perhaps it's simply margin of error/noise; hard to say with such a small difference.
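
For what it's worth, naively drawing a straight line through your two capped data points puts the crossover somewhere mid-range; a toy extrapolation that assumes linear scaling, which it almost certainly isn't:

# Toy linear extrapolation from Ashun's two capped-60-FPS data points
# (assumes the ULLM penalty scales linearly with GPU utilization,
# which is a simplification, not a measured fact).

points = [(0, 5.2 - 3.5), (90, 16.5 - 17.7)]  # (GPU %, ULLM penalty in ms)

(x0, y0), (x1, y1) = points
slope = (y1 - y0) / (x1 - x0)
crossover = x0 - y0 / slope  # GPU % where the ULLM penalty hits zero
print(f"ULLM breaks even around {crossover:.0f}% GPU utilization")  # ~53%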

That said, with standalone V-SYNC off at 60 FPS 165Hz, there should only be a single upward rolling tearline. So with how your device works, I'm guessing it can't detect the tearline whenever it appears in the scanout anywhere below the probe, which are the very instances that may end up matching what Battle(non)sense was seeing when you also factor in the differences you were seeing at top of screen.
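
To illustrate with simple scanout math (idealized top-to-bottom scanout, ignoring the VBI):

# Where a V-SYNC OFF tearline lands at 60 FPS on a 165Hz scanout
# (idealized top-to-bottom scanout; VBI not accounted for).

fps, hz = 60, 165
frame_time = 1 / fps        # ~16.67 ms between buffer flips
scanout_time = 1 / hz       # ~6.06 ms per top-to-bottom sweep

# Fraction of screen height the tearline shifts per frame:
shift = (frame_time % scanout_time) / scanout_time
print(f"tearline shifts {shift:.0%} down per frame "
      f"(i.e. appears to roll {1 - shift:.0%} upward)")  # 75% / 25%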

Either way, if there is an input lag penalty in fixed refresh, non-GPU-bound scenarios with ULLM, with the information currently available to us, it appears to be minuscule.
cluelessgamer wrote:
08 Mar 2020, 07:19
To your guys' knowledge, the best option for the G-SYNC combo is typically: an FPS cap -3 under refresh, G-SYNC and V-SYNC on in NVCP, and V-SYNC off in-game, with Low Latency Mode set to "On" (which is MPRF 1) instead of "Ultra" so it doesn't override the in-game limiter, which is usually known to be the best option other than RTSS. If I understand correctly.
Yes.
cluelessgamer wrote:
08 Mar 2020, 07:19
So Low Latency Mode set to "On", regardless of sync or not, seems to be what you should set it to at the least.
At least with G-SYNC. Again, not 100% sure about fixed refresh; seems to be hard to fully isolate in tests.
cluelessgamer wrote:
08 Mar 2020, 07:19
set to "on" for using gsync so "ultra" doesnt override the ingame limiter which is usually the best option.
Typically, yes.
cluelessgamer wrote:
08 Mar 2020, 07:19
Though I'm confused about how ULLM overrides the in-game limiter and the science behind that; I'm probably overthinking it.
At 144Hz, when using G-SYNC, ULLM auto limits to ~138 FPS, so if you use that, and then set another external or in-game limiter to 141 FPS, the framerate is going to be limited by the lower cap, which is ULLM in this case, at which point you would have to set the other limiter below ~138 for it to take effect over the ULLM cap.
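In other words, whichever cap is lowest wins; a quick sketch using the ~138 figure from above:

# Whichever limiter is lowest takes effect (illustrative only; the
# ~138 FPS ULLM auto-cap at 144Hz is the figure mentioned above).

ullm_auto_cap = 138   # ULLM's G-SYNC auto-limit at 144Hz (approximate)
ingame_cap = 141      # example in-game/external limiter

effective_cap = min(ullm_auto_cap, ingame_cap)
print(f"effective cap: {effective_cap} FPS")  # 138; the in-game limiter
# must be set below ~138 to take effect over ULLM's cap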
cluelessgamer wrote:
08 Mar 2020, 07:19
At higher GPU utilization and fluctuating framerates, "Ultra" performs the best if you aren't using any syncs, though it could slightly increase latency at the lower end of GPU utilization.
ULLM appears to perform best in uncapped, fixed refresh, GPU-bound scenarios, sync or no sync, and can reduce input lag by about 1 frame vs. uncapped, GPU-bound LLM off. Those are the scenarios it's built for.
cluelessgamer wrote:
08 Mar 2020, 07:19
Though I am curious what the sweet spot could be with LLM set to "Ultra" in correlation to GPU usage percentages, as it seems there's a point where the return becomes minimal, but it hits that point rather quickly in the upper half of percentages.
Bottom line is, if you want the lowest input lag at all times, you don't want ULLM to kick in at all, which means preventing max GPU usage and fluctuating framerates with an appropriate FPS limit.
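
A sketch of that logic with hypothetical numbers (the margins are judgment calls, not measured values); the idea is just to cap below both your refresh-based ceiling and what your GPU can sustain:

# Picking an FPS cap that keeps ULLM from ever kicking in:
# below refresh (for G-SYNC) and below what the GPU can sustain.
# All numbers here are hypothetical, for illustration.

refresh_hz = 144
observed_low_fps = 150   # e.g. your typical worst-case framerate

cap = min(refresh_hz - 3, observed_low_fps - 5)
print(f"suggested cap: {cap} FPS")  # 141 here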
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

kurtextrem
Posts: 41
Joined: 05 Mar 2017, 03:35
Location: Munich, Germany

Re: "ultra" setting in low latency mode. question.

Post by kurtextrem » 01 Apr 2020, 16:21

What happens when you set Ultra Low Latency Mode on the global profile vs. per game in a scenario where the GPU has high usage (90%-97%)? Would the latter mean it pre-renders frames for all programs except the one where ULLM is set?

Also, in a scenario where the game has unlimited FPS that fluctuates between 200-300 FPS: why would I limit to 200 FPS? In some cases, when it reaches 300 FPS, frames render ~2 ms faster, which is a benefit over limiting, I'd say? (From a competitive standpoint.)
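
(The ~2 ms is just the frame time difference:)

# Frame time at 200 vs. 300 FPS
for fps in (200, 300):
    print(f"{fps} FPS -> {1000 / fps:.2f} ms per frame")
# 5.00 ms vs. 3.33 ms, so each frame renders ~1.7 ms faster at 300 FPS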
Acer XF250Q, R6 competitive player
