LLM On vs ultra

andrelip
Posts: 160
Joined: 21 Mar 2014, 17:50

Re: LLM On vs ultra

Post by andrelip » 13 Apr 2020, 08:51

bapt337 wrote:
13 Apr 2020, 05:09
andrelip wrote:
13 Apr 2020, 04:27
I think the author's main point is to measure the overhead of Low Latency Mode on Ultra in scenarios where it is not needed (bottlenecked by the limiter or the CPU). I didn't find any significant difference in my tests, so I just leave it on Ultra in all cases.

Some people claim it adds significant input lag (several ms), but their methodology is not reliable in my opinion, as they usually test input lag using events that depend on tickrate and the network, such as strafing or gunfire, instead of instantaneous engine events like camera rotation. Those tickrates are usually a bottleneck (7.8 ms per tick at a high-quality 128 tickrate) and have other side effects. You could see a huge difference in response if you cap at 127.5 or 128.5 fps, depending on how the engine handles cmd throttling.
Personally, I didn't notice any benefit from LLM Ultra compared to LLM On in terms of input lag, especially since in my scenario the CPU never bottlenecks (frames are limited) and the GPU is never used to the max, so I don't get the point of LLM Ultra compared to LLM = On.
And I don't know how LLM = Ultra can be useful in a CPU-bound scenario. I mean, LLM = Ultra means 0 pre-rendered frames, so no buffer, so it should be the hardest on the CPU; if you're already CPU-bottlenecked and you use LLM = Ultra, hmmm...
On the other hand, I don't understand why LLM = Ultra has to be used in a GPU-bound scenario. Why couldn't it work with low GPU usage? Because low GPU usage means it's CPU-limited? Or framerate-limited? In that case, does LLM = Ultra have to be used with unlocked fps? Depending on the game, uncapped fps will bind the GPU, but in CS:GO, for example, uncapped fps gives a CPU bottleneck, so not good for LLM = Ultra.
Sorry for all my odd questions; this has become a bit of an obsession for me. I really want to know how it works from A to Z.
ULL is a queue-control mechanism. The queue in question holds the frames the CPU has just prepared. If the GPU is faster than the CPU, you will never build a queue, as the GPU will finish its job before the CPU submits the next frame. It is the same scenario when limiting to a stable value, because your GPU time will be shorter than the CPU render time + sleep(). The queue builds up when the GPU is slower than the rate at which it receives new frames. With a large queue you can maximize GPU utilization and gain a small increase in fps, as the bottleneck never has to wait for a frame, but it also means those frames have their GPU processing delayed. ULL acts as if the value were 0, but in reality it's just-in-time delivery: it predicts when the GPU will finish its current work, then calculates the exact point at which the CPU should start preparing the frame so it is delivered just before the GPU is ready. GPU load near 100% means the GPU is never idle, so it should be slower than the CPU. Does that make sense to you?
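
To make that concrete, here is a toy discrete-event sketch of the render queue (my own illustration, not NVIDIA's actual driver logic; the millisecond figures and the simplified back-pressure rule are assumptions). The time a frame spends between CPU submit and GPU start is exactly the latency the queue adds:

def simulate(cpu_ms, gpu_ms, max_queue, frames=1000):
    """Average time (ms) a frame waits between CPU submit and GPU start."""
    gpu_free_at = 0.0   # when the GPU finishes its current frame
    cpu_time = 0.0      # running clock for the CPU thread
    waits = []
    for _ in range(frames):
        submit = cpu_time + cpu_ms          # CPU finishes preparing the frame
        start = max(submit, gpu_free_at)    # GPU begins as soon as it is free
        waits.append(start - submit)        # time spent sitting in the queue
        gpu_free_at = start + gpu_ms
        # crude back-pressure: the CPU may only run max_queue frames ahead,
        # so it stalls once the queue is full
        cpu_time = max(submit, gpu_free_at - max_queue * gpu_ms)
    return sum(waits) / len(waits)

# GPU-bound case: CPU needs 5 ms per frame, GPU needs 10 ms.
for q in (3, 1):
    print(f"max_queue={q}: avg queue wait = {simulate(5, 10, q):.1f} ms")
# CPU-bound case: the queue never fills, so the setting changes nothing.
print(f"CPU-bound, max_queue=3: avg queue wait = {simulate(10, 5, 3):.1f} ms")

In the GPU-bound case the wait shrinks with the allowed queue depth, and Ultra's just-in-time scheduling pushes it toward zero by delaying the CPU start itself; in the CPU-bound case the wait is already zero, which is why the setting does nothing there.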

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 13 Apr 2020, 08:58

jorimt wrote:
13 Apr 2020, 08:34
bapt337 wrote:
13 Apr 2020, 05:09
Personally, I didn't notice any benefit from LLM Ultra compared to LLM On in terms of input lag, especially since in my scenario the CPU never bottlenecks (frames are limited) and the GPU is never used to the max, so I don't get the point of LLM Ultra compared to LLM = On.
And I don't know how LLM = Ultra can be useful in a CPU-bound scenario. I mean, LLM = Ultra means 0 pre-rendered frames, so no buffer, so it should be the hardest on the CPU; if you're already CPU-bottlenecked and you use LLM = Ultra, hmmm...
On the other hand, I don't understand why LLM = Ultra has to be used in a GPU-bound scenario. Why couldn't it work with low GPU usage? Because low GPU usage means it's CPU-limited? Or framerate-limited? In that case, does LLM = Ultra have to be used with unlocked fps? Depending on the game, uncapped fps will bind the GPU, but in CS:GO, for example, uncapped fps gives a CPU bottleneck, so not good for LLM = Ultra.
You're continuing to overcomplicate it. LLM can reduce input lag, typically by only 1 frame, and only when you're GPU-bound. When you're not, it has little to no effect on input lag. That said, it should be safe to leave on when you're not GPU-bound; just don't expect it to do anything for input lag in that situation.

Also, the pre-rendered frames queue and LLM aren't always the same thing. The pre-rendered frames queue is always present, it's just not always filled. LLM only controls how much the PRF queue can be filled, and only in certain situations (the PRF queue simply stays filled more when the GPU is maxed, because the CPU has to wait longer for the GPU to be ready for the next frame).

And believe it or not, even some DX11 and DX9 games don't support LLM settings and use their own PRF values (many of which are already at "1"). And no, there's no easy way to tell.

And again, if the pre-rendered frames queue is set to "1," that's not a constant 1 frame; it's anywhere from 0-1 frames (or 0-3, or 0-5, depending on the setting). The "x" number is the maximum the PRF queue can reach, not a constant. That's why the setting used to be called "Maximum pre-rendered frames."

Anyway, we keep talking about input lag here, but the pre-rendered frames queue actually has more to do with how high or low an average framerate the CPU can output than it does with input lag; that's why setting PRF values too low can sometimes reduce average framerate. Just try any modern Battlefield game and set its "RenderAheadLimit" value to "1"; you'll see how an actual max pre-rendered frames of "1" can impact average framerate.
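
For reference, in the Frostbite games that expose it, that limit is a console variable you can put in a user.cfg file in the game's install folder; if memory serves, the line looks like the following (verify the exact variable name for your specific title before relying on it):

RenderDevice.RenderAheadLimit 1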
bapt337 wrote:
13 Apr 2020, 05:09
Sorry for all my odd questions; this has become a bit of an obsession for me. I really want to know how it works from A to Z.
Good luck, because it's the hardest setting to understand, the hardest setting to test for, and the hardest setting to see the effects of, especially on the user side. And even if you can, in situations where your settings can actually make a difference, you're more likely to see them affect average framerate first, as opposed to input lag directly.

I'll close with a couple other sources on LLM:
https://www.nvidia.com/en-us/geforce/ne ... dy-driver/
The NVIDIA Control Panel has -- for over 10 years -- enabled GeForce gamers to adjust the “Maximum Pre-Rendered Frames”, the number of frames buffered in the render queue. By reducing the number of frames in the render queue, new frames are sent to your GPU sooner, reducing latency and improving responsiveness.

With the release of our Gamescom Game Ready Driver, we’re introducing a new Ultra-Low Latency Mode that enables ‘just in time’ frame scheduling, submitting frames to be rendered just before the GPU needs them. This further reduces latency by up to 33%:

Low Latency modes have the most impact when your game is GPU bound, and framerates are between 60 and 100 FPS, enabling you to get the responsiveness of high-framerate gaming without having to decrease graphical fidelity.
And:
https://www.howtogeek.com/437761/how-to ... -graphics/
Graphics engines queue frames to be rendered by the GPU, the GPU renders them, and then they’re displayed on your PC. As NVIDIA explains, this feature builds on the “Maximum Pre-Rendered Frames” feature that’s been found in the NVIDIA Control Panel for over a decade. That allowed you to keep the number of frames in the render queue down.

With “Ultra-Low Latency” mode, frames are submitted into the render queue just before the GPU needs them. This is “just in time frame scheduling,” as NVIDIA calls it. NVIDIA says it will “further [reduce] latency by up to 33%” over just using the Maximum Pre-Rendered Frames option.

This works with all GPUs. However, it only works with DirectX 9 and DirectX 11 games. In DirectX 12 and Vulkan games, “the game decides when to queue the frame” and the NVIDIA graphics drivers have no control over this.

Here’s when NVIDIA says you might want to use this setting:

“Low Latency modes have the most impact when your game is GPU bound, and framerates are between 60 and 100 FPS, enabling you to get the responsiveness of high-framerate gaming without having to decrease graphical fidelity. “

In other words, if a game is CPU bound (limited by your CPU resources instead of your GPU) or you have very high or very low FPS, this won’t help too much. If you have input latency in games—mouse lag, for example—that’s often simply a result of low frames per second (FPS) and this setting won’t solve that problem.

Warning: This will potentially reduce your FPS. This mode is off by default, which NVIDIA says leads to “maximum render throughput.” For most people most of the time, that’s a better option. But, for competitive multiplayer gaming, you’ll want all the tiny edges you can get—and that includes lower latency.
Thanks again. That's why I don't like LLM = Ultra: input lag may fluctuate depending on GPU usage. And anyway I play at 60 fps, mostly not GPU-bound (1080 Ti), so it's useless in my case.

Yes, I'm playing Battlefield V right now. They added a setting called "Future Frame Rendering," and you're right: if I disable it, I get massive fps drops without the CPU or GPU being maxed out; it seems random.
I'm using Scanline Sync now and it's pretty nice, even though I sometimes see tearing at 55% GPU usage while seeing none in other areas at 70% GPU usage... I think the GPU can be bound even when GPU usage is low. SyncFlush = 1.

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 13 Apr 2020, 09:18

andrelip wrote:
13 Apr 2020, 08:51
bapt337 wrote:
13 Apr 2020, 05:09
andrelip wrote:
13 Apr 2020, 04:27
I think the author's main point is to measure the overhead of Low Latency Mode on Ultra in scenarios where it is not needed (bottlenecked by the limiter or the CPU). I didn't find any significant difference in my tests, so I just leave it on Ultra in all cases.

Some people claim it adds significant input lag (several ms), but their methodology is not reliable in my opinion, as they usually test input lag using events that depend on tickrate and the network, such as strafing or gunfire, instead of instantaneous engine events like camera rotation. Those tickrates are usually a bottleneck (7.8 ms per tick at a high-quality 128 tickrate) and have other side effects. You could see a huge difference in response if you cap at 127.5 or 128.5 fps, depending on how the engine handles cmd throttling.
Personally, I didn't notice any benefit from LLM Ultra compared to LLM On in terms of input lag, especially since in my scenario the CPU never bottlenecks (frames are limited) and the GPU is never used to the max, so I don't get the point of LLM Ultra compared to LLM = On.
And I don't know how LLM = Ultra can be useful in a CPU-bound scenario. I mean, LLM = Ultra means 0 pre-rendered frames, so no buffer, so it should be the hardest on the CPU; if you're already CPU-bottlenecked and you use LLM = Ultra, hmmm...
On the other hand, I don't understand why LLM = Ultra has to be used in a GPU-bound scenario. Why couldn't it work with low GPU usage? Because low GPU usage means it's CPU-limited? Or framerate-limited? In that case, does LLM = Ultra have to be used with unlocked fps? Depending on the game, uncapped fps will bind the GPU, but in CS:GO, for example, uncapped fps gives a CPU bottleneck, so not good for LLM = Ultra.
Sorry for all my odd questions; this has become a bit of an obsession for me. I really want to know how it works from A to Z.
ULL is a queue-control mechanism. The queue in question holds the frames the CPU has just prepared. If the GPU is faster than the CPU, you will never build a queue, as the GPU will finish its job before the CPU submits the next frame. It is the same scenario when limiting to a stable value, because your GPU time will be shorter than the CPU render time + sleep(). The queue builds up when the GPU is slower than the rate at which it receives new frames. With a large queue you can maximize GPU utilization and gain a small increase in fps, as the bottleneck never has to wait for a frame, but it also means those frames have their GPU processing delayed. ULL acts as if the value were 0, but in reality it's just-in-time delivery: it predicts when the GPU will finish its current work, then calculates the exact point at which the CPU should start preparing the frame so it is delivered just before the GPU is ready. GPU load near 100% means the GPU is never idle, so it should be slower than the CPU. Does that make sense to you?
This really makes sense to me. I remember a guy explaining this with pizzas, a cook, and a delivery guy.
The pizzas are the frames, the cook is the CPU, and the delivery guy is the GPU.
With no VSYNC, the cook turns out pizzas non-stop and the delivery guy picks one up at a regular frequency, so sometimes a pizza is handed over out of step with the pickup: tearing.
If the max buffer is 2, for instance, and the delivery guy is faster than the cook (GPU faster than CPU), the buffer never gets filled and the delivery guy gets each pizza on time as long as the cook keeps up, so the cook (CPU) has to work harder; if the CPU isn't fast enough, the delivery guy runs late (frame drops). When the CPU is the bottleneck, the cook cannot turn out pizzas as fast as the delivery guy's pickup interval.
If the cook (CPU) is faster than the delivery guy (GPU) and the max frame buffer is 2, the cook bakes pizzas until the buffer is full (2 pizzas), then bakes the next one only when a buffer slot frees up. The delivery guy always gets a pizza on time, but the pizza he delivers is a bit cold (input lag).
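
The analogy maps directly onto a bounded producer/consumer queue. Here's a toy sketch of it (my own illustration; the buffer size of 2 and the 10 ms GPU time are arbitrary numbers):

import queue, threading, time

pizzas = queue.Queue(maxsize=2)    # "max pre-rendered frames" = 2

def cook():                        # the CPU: prepares frames
    for n in range(8):
        baked_at = time.perf_counter()
        pizzas.put((n, baked_at))  # blocks while the buffer is full (back-pressure)

def deliver():                     # the GPU: consumes frames
    while True:
        n, baked_at = pizzas.get()
        time.sleep(0.010)          # the GPU needs 10 ms per frame (slower than the cook)
        age_ms = 1000 * (time.perf_counter() - baked_at)
        print(f"frame {n} delivered {age_ms:.0f} ms after it was cooked")
        pizzas.task_done()

threading.Thread(target=deliver, daemon=True).start()
cook()
pizzas.join()

After the first couple of frames, every pizza arrives two to three GPU frame-times after it was baked; that steady-state "coldness" is the queued input lag described above, and shrinking maxsize trims it at the cost of stalling the cook.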

jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: LLM On vs ultra

Post by jorimt » 13 Apr 2020, 11:38

bapt337 wrote:
13 Apr 2020, 08:58
I think the GPU can be bound even when GPU usage is low.
There are cases where the game itself can be CPU- or GPU-limited (this may even change from scene to scene), even when the actual CPU and/or GPU are not at full usage, in which case it's a limitation of the game engine, not the system.

Sometimes you can even have both at once, i.e., both the game and the hardware are CPU- or GPU-limited. This varies wildly with the specific game/system combo and performance at any given moment, and thus isn't always fully controllable from the user side.

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 13 Apr 2020, 12:31

jorimt wrote:
13 Apr 2020, 11:38
bapt337 wrote:
13 Apr 2020, 08:58
I think the GPU can be bound even when GPU usage is low.
There are cases where the game itself can be CPU- or GPU-limited (this may even change from scene to scene), even when the actual CPU and/or GPU are not at full usage, in which case it's a limitation of the game engine, not the system.

Sometimes you can even have both at once, i.e., both the game and the hardware are CPU- or GPU-limited. This varies wildly with the specific game/system combo and performance at any given moment, and thus isn't always fully controllable from the user side.
For sure. It's like some areas are less optimized than others; curiously, I've found this behavior mostly on new DLC maps, so maybe optimization isn't that good in some areas.

andrelip
Posts: 160
Joined: 21 Mar 2014, 17:50

Re: LLM On vs ultra

Post by andrelip » 13 Apr 2020, 21:24

It's certainly not the case for CS:GO if you play at low resolution and have any decent GPU.
Just leave the game uncapped. According to CapFrameX, I get a little less input lag (<1 ms) capping with NVIDIA's V3 limiter at 300 fps, but I don't know how they measure it. It does feel good and it's the setup I use (globally), but uncapping is very safe if you don't have thermal issues.

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 14 Apr 2020, 02:55

andrelip wrote:
13 Apr 2020, 21:24
It's certainly not the case for CS:GO if you play at low resolution and have any decent GPU.
Just leave the game uncapped. According to CapFrameX, I get a little less input lag (<1 ms) capping with NVIDIA's V3 limiter at 300 fps, but I don't know how they measure it. It does feel good and it's the setup I use (globally), but uncapping is very safe if you don't have thermal issues.
Yes, for CS:GO I'm CPU-limited; even at 2160p with uncapped fps, GPU usage never reaches its maximum.
The GPU is a watercooled 1080 Ti so temps are fine, and in this case the GPU is faster than the CPU, so I can guess the pre-rendered queue is never filled. So LLM = Ultra should be useless here, since it's more CPU-bound than GPU-bound and the CPU hands frames to the GPU without delay whatever setting I use, right? Because the GPU is faster than the CPU, the pre-rendered frame queue never fills, so even if I set pre-rendered frames to 2 or even 3 the queue never gets filled, and there's no chance it adds input lag. If it were GPU-bound and frames piled up in the queue, then yes, LLM Ultra could be useful, if I understand correctly.

andrelip
Posts: 160
Joined: 21 Mar 2014, 17:50

Re: LLM On vs ultra

Post by andrelip » 14 Apr 2020, 22:39

bapt337 wrote:
14 Apr 2020, 02:55
andrelip wrote:
13 Apr 2020, 21:24
It's certainly not the case for CS:GO if you play at low resolution and have any decent GPU.
Just leave the game uncapped. According to CapFrameX, I get a little less input lag (<1 ms) capping with NVIDIA's V3 limiter at 300 fps, but I don't know how they measure it. It does feel good and it's the setup I use (globally), but uncapping is very safe if you don't have thermal issues.
Yes, for CS:GO I'm CPU-limited; even at 2160p with uncapped fps, GPU usage never reaches its maximum.
The GPU is a watercooled 1080 Ti so temps are fine, and in this case the GPU is faster than the CPU, so I can guess the pre-rendered queue is never filled. So LLM = Ultra should be useless here, since it's more CPU-bound than GPU-bound and the CPU hands frames to the GPU without delay whatever setting I use, right? Because the GPU is faster than the CPU, the pre-rendered frame queue never fills, so even if I set pre-rendered frames to 2 or even 3 the queue never gets filled, and there's no chance it adds input lag. If it were GPU-bound and frames piled up in the queue, then yes, LLM Ultra could be useful, if I understand correctly.
Yes, LLM is useless in this scenario. The thermal issues I mentioned were on the CPU. On my desktop I can leave it uncapped without trouble, even at 5 GHz. On my Mac it's better to limit it to a lower value, like 160, that it can sustain in the long run; uncapped, it will reach 300 for a few seconds and then suddenly drop to 80.

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 15 Apr 2020, 03:59

andrelip wrote:
14 Apr 2020, 22:39
bapt337 wrote:
14 Apr 2020, 02:55
andrelip wrote:
13 Apr 2020, 21:24
It's certainly not the case for CS:GO if you play at low resolution and have any decent GPU.
Just leave the game uncapped. According to CapFrameX, I get a little less input lag (<1 ms) capping with NVIDIA's V3 limiter at 300 fps, but I don't know how they measure it. It does feel good and it's the setup I use (globally), but uncapping is very safe if you don't have thermal issues.
Yes, for CS:GO I'm CPU-limited; even at 2160p with uncapped fps, GPU usage never reaches its maximum.
The GPU is a watercooled 1080 Ti so temps are fine, and in this case the GPU is faster than the CPU, so I can guess the pre-rendered queue is never filled. So LLM = Ultra should be useless here, since it's more CPU-bound than GPU-bound and the CPU hands frames to the GPU without delay whatever setting I use, right? Because the GPU is faster than the CPU, the pre-rendered frame queue never fills, so even if I set pre-rendered frames to 2 or even 3 the queue never gets filled, and there's no chance it adds input lag. If it were GPU-bound and frames piled up in the queue, then yes, LLM Ultra could be useful, if I understand correctly.
Yes, LLM is useless in this scenario. The thermal issues I mentioned were on the CPU. On my desktop I can leave it uncapped without trouble, even at 5 GHz. On my Mac it's better to limit it to a lower value, like 160, that it can sustain in the long run; uncapped, it will reach 300 for a few seconds and then suddenly drop to 80.
Oh well, that sounds like a power-limit or thermal issue. I've got a Ryzen 2600 @ 4 GHz with B-die RAM, and I usually get around 250-350 fps in CS:GO at 2160p, with some CPU-demanding settings lowered to get maximum fps. If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't be maxed out at 40% overall usage... the new Ryzen should help with this.

EDIT: I've found this video; it shows that capping fps doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA

jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: LLM On vs ultra

Post by jorimt » 15 Apr 2020, 10:40

bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't be maxed out at 40% overall usage... the new Ryzen should help with this.
Yes, CS:GO is entirely CPU-limited, which means the better your CPU's single-core performance, the higher the potential FPS in that game.
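
As a side note on that "maxed out at 40%" observation: overall CPU usage is averaged across all logical cores, so one fully saturated game thread on a 6-core/12-thread chip can hide behind a modest total. A quick way to check for a single-core bottleneck, sketched here with the third-party psutil package (pip install psutil):

import psutil

# Overall usage averages across all logical cores...
total = psutil.cpu_percent(interval=1)
# ...so sample each core separately and look at the busiest one.
per_core = psutil.cpu_percent(interval=1, percpu=True)
print(f"total: {total:.0f}%  busiest core: {max(per_core):.0f}%")
# One core pinned near 100% while the total sits around 40% is the
# signature of a game thread limited by single-core performance.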
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows that capping fps doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.
