LLM On vs ultra

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 16 Apr 2020, 02:36

jorimt wrote:
15 Apr 2020, 10:40
bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't get maxed out at 40% usage ... the new Ryzen should fix this
Yes, CS:GO is entirely CPU-limited, which means the faster single-core performance your CPU has, the higher potential FPS for that game.
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows how capping FPS doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.

Alright, I was thinking the same; the results were a bit odd.
I've got a question: if you cap FPS at 61 on a 60 Hz display with V-SYNC on, and it's a GPU-bound situation (CPU faster than GPU), then the maximum pre-rendered frames value is effectively always 1, because I force it by allowing only one extra frame (the 61 cap), even if the buffer size is set to 2 or 3, right?
But if FPS drops to, say, 45 FPS, then the frame buffer is going to fill up, because I'm asking for 61 FPS (the cap) while the GPU can only render 45 FPS in a second, which leaves 16 extra frames (61 - 45); the buffer gets filled by those extra frames, and that's the moment the max pre-rendered frames setting takes effect, if I'm right?

andrelip
Posts: 160
Joined: 21 Mar 2014, 17:50

Re: LLM On vs ultra

Post by andrelip » 16 Apr 2020, 07:26

jorimt wrote:
15 Apr 2020, 10:40
bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't get maxed out at 40% usage ... the new Ryzen should fix this
Yes, CS:GO is entirely CPU-limited, which means the faster single-core performance your CPU has, the higher potential FPS for that game.
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows how capping FPS doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.
I like that they are counterpointing incomplete information from Battle(non)sense. There are 3 main points that increase input lag (as always, for V-SYNC off):

---

1- Queued frames:

The GPU receives frames faster than it can process them. The longer the queue and the faster frames arrive, the more input lag.

2- Render Time:

Even with 0 frames in the queue, you have CPU time + GPU time. So if your game takes 4 ms on the CPU and 3 ms on the GPU, each frame takes 7 ms to render. So even if your GPU is NOT the bottleneck, you can still improve input lag by reducing graphics settings. For CS:GO this quickly hits diminishing returns (< 0.5 ms per frame of GPU time) without much image degradation, but for some other games it matters a lot.

3- Blocks and Artificial Delay:

Input lag should be measured from the time the engine reads the external data (user input and network) to the presentation of the frame on the monitor. So if the frame cap software is stabilizing the frame time by placing its sleep() between:

a) CPU and GPU
b) GPU and present()

Then you can be sure the duration of that sleep is added directly to the input lag. If the sleep is instead placed between frames (between present() and the next frame's CPU work), then your game still receives the most recent data and renders without additional delay, since it retrieves the most recent input and renders it as fast as possible (see the sketch below).

You can verify this with GPUView. Some other tools like CapFrameX also seem to estimate it, but I don't know their methodology.
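
To make the placement difference concrete, here is a minimal single-threaded sketch (my own illustration, not taken from any particular limiter; sample_input(), simulate_and_render_cpu(), submit_to_gpu() and present() are empty stand-in stubs):

```cpp
#include <chrono>
#include <thread>

using frame_clock = std::chrono::steady_clock;

// Empty stubs standing in for whatever the engine actually does.
static void sample_input() {}            // read mouse/keyboard/network state
static void simulate_and_render_cpu() {} // CPU portion of the frame
static void submit_to_gpu() {}           // hand the draw calls to the GPU
static void present() {}                 // present the finished frame

// Case (b) above: the limiter's sleep sits between GPU submission and present().
// The input sampled at the top of the loop ages for the whole sleep, so the
// sleep duration is added directly to input lag.
static void capped_loop_adds_lag(std::chrono::microseconds frame_time) {
    auto next = frame_clock::now();
    for (int i = 0; i < 120; ++i) {
        sample_input();
        simulate_and_render_cpu();
        submit_to_gpu();
        next += frame_time;
        std::this_thread::sleep_until(next); // already-rendered frame waits here
        present();
    }
}

// Sleep placed between present() and the next frame's CPU work: the wait happens
// before input is read, so each presented frame still used the freshest input.
static void capped_loop_no_added_lag(std::chrono::microseconds frame_time) {
    auto next = frame_clock::now();
    for (int i = 0; i < 120; ++i) {
        next += frame_time;
        std::this_thread::sleep_until(next); // wait BEFORE sampling input
        sample_input();
        simulate_and_render_cpu();
        submit_to_gpu();
        present();
    }
}

int main() {
    const auto cap = std::chrono::microseconds(16667); // ~60 FPS cap
    capped_loop_adds_lag(cap);       // a couple of seconds of "bad" placement
    capped_loop_no_added_lag(cap);   // then the placement that adds no lag
}
```

Both loops hold the same average framerate; only where the wait sits relative to sample_input() and present() differs, which is the whole point of item 3.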

---

With that in mind, you can think of scenarios that are not so obvious, where one of those sources has more impact than the others.

User avatar
jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: LLM On vs ultra

Post by jorimt » 16 Apr 2020, 09:10

bapt337 wrote:
16 Apr 2020, 02:36
I've got a question: if you cap FPS at 61 on a 60 Hz display with V-SYNC on, and it's a GPU-bound situation (CPU faster than GPU), then the maximum pre-rendered frames value is effectively always 1, because I force it by allowing only one extra frame (the 61 cap), even if the buffer size is set to 2 or 3, right?
No.

The 1 FPS difference between 60 FPS and 61 FPS is not the same thing as 1 pre-rendered frame. With pre-rendered frames, it doesn't matter what the average framerate number is (it could be 61, 82, 99, 1003, 25, 42); what matters is how that average framerate is currently being generated.

If the average framerate is 61 because the framerate is uncapped and the GPU is maxed out, it means the system can only generate an average max framerate of 61 FPS, at which point the pre-rendered frames queue fills while the CPU waits for the GPU to be ready for the next frame(s).

However, if the average framerate is 61 because you're using an FPS limiter to keep it there, and the GPU isn't maxed out at that limit, the system could otherwise output a higher average framerate thanks to the remaining CPU/GPU headroom. In that case, since the CPU doesn't need to wait on the GPU, the pre-rendered frames queue shrinks, and the CPU can hand frame information over to the GPU without waiting (or with less waiting).
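
To put rough numbers on that, here's a toy back-of-the-envelope simulation (a sketch only, not a measurement; the millisecond figures, the queue depth Q = 3, and the rule "the CPU may run at most Q frames ahead of the GPU" are all simplifying assumptions):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Scenario {
    const char* name;
    double cpu_ms, gpu_ms, cap_interval_ms;  // cap_interval_ms <= 0 means uncapped
    int max_prerendered;                     // Q: CPU may run at most Q frames ahead
};

static void simulate(const Scenario& s, int frames = 200) {
    std::vector<double> cpu_start(frames), cpu_end(frames), gpu_end(frames);
    for (int i = 0; i < frames; ++i) {
        double start = (i > 0) ? cpu_end[i - 1] : 0.0;           // CPU busy until then
        if (s.cap_interval_ms > 0 && i > 0)                      // frame cap before input sampling
            start = std::max(start, cpu_start[i - 1] + s.cap_interval_ms);
        if (i >= s.max_prerendered)                              // back-pressure from the queue
            start = std::max(start, gpu_end[i - s.max_prerendered]);
        cpu_start[i] = start;
        cpu_end[i]   = start + s.cpu_ms;
        double gpu_start = std::max(cpu_end[i], (i > 0) ? gpu_end[i - 1] : 0.0);
        gpu_end[i]   = gpu_start + s.gpu_ms;
    }
    int last = frames - 1;
    double fps     = 1000.0 / (gpu_end[last] - gpu_end[last - 1]);
    double latency = gpu_end[last] - cpu_start[last];            // input-sample to frame-finish
    std::printf("%-45s ~%5.1f FPS, steady-state latency ~%5.1f ms\n", s.name, fps, latency);
}

int main() {
    // ~61 FPS average because the GPU itself needs 16.4 ms per frame (queue fills up):
    simulate({"uncapped, GPU maxed out (GPU 16.4 ms/frame)", 4.0, 16.4, 0.0, 3});
    // ~61 FPS average because of a 61 FPS cap while the GPU has headroom (10 ms/frame):
    simulate({"capped at 61 FPS, GPU not maxed (GPU 10 ms/frame)", 4.0, 10.0, 1000.0 / 61.0, 3});
}
```

Both rows report roughly 61 FPS, but in this toy model the GPU-bound scenario settles at roughly Q frames' worth of latency from the queue, while the capped scenario stays at just the CPU + GPU time of a single frame.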
bapt337 wrote:
16 Apr 2020, 02:36
But if FPS drops to, say, 45 FPS, then the frame buffer is going to fill up, because I'm asking for 61 FPS (the cap) while the GPU can only render 45 FPS in a second, which leaves 16 extra frames (61 - 45); the buffer gets filled by those extra frames, and that's the moment the max pre-rendered frames setting takes effect, if I'm right?
Again, the pre-rendered frames queue isn't directly related to the difference in average framerate between scenarios A and B; it's related to how that average framerate is being generated at any given point.

In your posed scenario, uncapped 45 FPS is going to have a higher pre-rendered frames queue because whatever scene is occurring is giving the GPU a harder time than the capped 61 FPS scenario; in the former scenario, the GPU is maxed out, and in the latter scenario, it is not. That's what makes the primary difference with the pre-rendered frames queue.
andrelip wrote:
16 Apr 2020, 07:26
With that in mind, you can think in scenarios that is not too obvious and that is caused when one of that sources have more impact than the others.
Yeah, I haven't watched it since it originally released, but I vaguely recall...

1. They may have been conflating different causes of input lag, which didn't muddy their results so much as their conclusions about those results.
2. They didn't seem to understand that external FPS limiters reduce input lag less than some in-game limiters, and further, that some in-game limiters reduce input lag about the same as external limiters.

The whole point was to measure the input lag difference incurred by GPU usage only, but instead, they may have conflated that with differences in FPS limiter input lag, and possibly even V-SYNC and framerate input lag.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

bapt337
Posts: 27
Joined: 10 Apr 2020, 12:54

Re: LLM On vs ultra

Post by bapt337 » 17 Apr 2020, 08:45

andrelip wrote:
16 Apr 2020, 07:26

With that in mind, you can think of scenarios that are not so obvious, where one of those sources has more impact than the others.
I think I finally understand. I was thinking the buffer held 60 frames in a second, but the frame generation process is applied to each frame one by one, not to all the frames in a second. So whatever the framerate is, the pre-rendered frames mechanism applies per frame, and the CPU hands frames to the GPU in "real time" as long as the CPU doesn't have to wait on the GPU. If the GPU is maxed out, the frames that cannot be handed over "in time" by the CPU queue up in the buffer, and that's where the LLM setting applies.

User avatar
axaro1
Posts: 627
Joined: 23 Apr 2020, 12:00
Location: Milan, Italy

Re: LLM On vs ultra

Post by axaro1 » 30 Sep 2020, 04:30

How does Radeon Anti-Lag compare to LLM On or Ultra? Should I keep it enabled if I'm CPU/engine-bound?
XL2566K* | XV252QF* | LG C1* | HP OMEN X 25 | XL2546K | VG259QM | XG2402 | LS24F350[RIP]
*= currently owned



MONITOR: XL2566K custom VT: https://i.imgur.com/ylYkuLf.png
CPU: 5800x3d 102mhz BCLK
GPU: 3080FE undervolted
RAM: https://i.imgur.com/iwmraZB.png
MOUSE: Endgame Gear OP1 8k
KEYBOARD: Wooting 60he

slaver01
Posts: 89
Joined: 21 Sep 2020, 01:48

Re: LLM On vs ultra

Post by slaver01 » 30 Sep 2020, 08:10

I play Fortnite at 1080p with low settings at 240 Hz. CPU maxes at 50-60%, GPU at 50-60%. Would I get an advantage with LLM ON or LLM OFF?

User avatar
jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: LLM On vs ultra

Post by jorimt » 30 Sep 2020, 08:34

axaro1 wrote:
30 Sep 2020, 04:30
How does Radeon Anti-Lag compare to LLM On or Ultra? Should I keep it enabled if I'm cpu/engine bound?
Anti-Lag is effectively the same as LLM Ultra.

See:

[embedded video]

slaver01 wrote:
30 Sep 2020, 08:10
I play Fortnite at 1080p with low settings at 240 Hz. CPU maxes at 50-60%, GPU at 50-60%. Would I get an advantage with LLM ON or LLM OFF?
1. Fortnite now has the "Reflex" setting that replaces LLM:
viewtopic.php?f=10&t=7522
2. Both Reflex and LLM only empty/reduce the render queue in GPU-bound scenarios. Since your system is not GPU-bound, your system's render queue is already reduced/empty.

That said, you could still enable Reflex and see if you feel any improvement, but any improvement it makes won't be as impactful as if you were GPU-bound.
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

User avatar
speancer
Posts: 241
Joined: 03 May 2020, 04:26
Location: EU

Re: LLM On vs ultra

Post by speancer » 30 Sep 2020, 09:08

jorimt wrote:
30 Sep 2020, 08:34
Both Reflex and LLM only empty/reduce the render queue in GPU-bound scenarios. Since your system is not GPU-bound, your system's render queue is already reduced/empty.
So what was the hype about with NVIDIA Reflex if it's basically yet another version of the "max pre-rendered frames" option, like NVIDIA LLM? If it only helps in GPU-bound scenarios, I dare say it's useless to basically all high-level professional players (sponsored rigs, powerful tournament PCs) and to any player with a high-end rig.
Main display (TV/PC monitor): LG 42C21LA (4K 120 Hz OLED / WBE panel)
Tested displays: ASUS VG259QM/VG279QM [favourite LCD FPS display] (280 Hz IPS) • Zowie XL2546K/XL2540K/XL2546 (240 Hz TN DyAc) • Dell S3222DGM [favourite LCD display for the best blacks, contrast and panel uniformity] (165 Hz VA) • Dell Alienware AW2521HFLA (240 Hz IPS) • HP Omen X 25f (240 Hz TN) • MSI MAG251RX (240 Hz IPS) • Gigabyte M27Q (170 Hz IPS) • Acer Predator XB273X (240 Hz IPS G-SYNC) • Acer Predator XB271HU (165 Hz IPS G-SYNC) • Acer Nitro XV272UKV (170 Hz IPS) • Acer Nitro XV252QF (390 Hz IPS) • LG 27GN800 (144 Hz IPS) • LG 27GL850 (144 Hz nanoIPS) • LG 27GP850 (180 Hz nanoIPS) • Samsung Odyssey G7 (240 Hz VA)

OS: Windows 11 Pro GPU: Palit GeForce RTX 4090 GameRock OC CPU: AMD Ryzen 7 7800X3D + be quiet! Dark Rock Pro 4 + Arctic MX-6 RAM: 32GB (2x16GB dual channel) DDR5 Kingston Fury Beast Black 6000 MHz CL30 (fully optimized primary and secondary timings by Buildzoid for SK Hynix die on AM5 platform) PSU: Corsair RM1200x SHIFT 1200W (ATX 3.0, PCIe 5.0 12VHPWR 600W) SSD1: Kingston KC3000 1TB NVMe PCIe 4.0 x4 SSD2: Corsair Force MP510 960GB PCIe 3.0 x4 MB: ASUS ROG STRIX X670E-A GAMING WIFI (GPU PCIe 5.0 x16, NVMe PCIe 5.0 x4) CASE: be quiet! Silent Base 802 Window White CASE FANS: be quiet! Silent Wings 4 140mm PWM (3x front, 1x rear, 1x top rear, positive pressure) MOUSE: Logitech G PRO X Superlight (white) Lightspeed wireless MOUSEPAD: ARTISAN FX HIEN (wine red, soft, XL) KEYBOARD: Logitech G915 TKL (white, GL Tactile) Lightspeed wireless HEADPHONES: Sennheiser Momentum 4 Wireless (white) 24-bit 96 KHz + Sennheiser BTD600 Bluetooth 5.2 aptX Adaptive CHAIR: Herman Miller Aeron (graphite, fully loaded, size C)

Meowchan
Posts: 40
Joined: 17 Jun 2020, 02:06

Re: LLM On vs ultra

Post by Meowchan » 30 Sep 2020, 09:15

speancer wrote:
30 Sep 2020, 09:08
So what was the hype about with NVIDIA Reflex if it's basically yet another version of the "max pre-rendered frames" option, like NVIDIA LLM? If it only helps in GPU-bound scenarios, I dare say it's useless to basically all high-level professional players and any player with a high-end rig.
Pretty much.
Now buy our $700-MSRP ($1,000 actual) GPU before AMD releases theirs.

RTX Voice seems nice, as does NVENC if you're streaming. But neither of those is, strictly speaking, required for 'gaming'.

User avatar
jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: LLM On vs ultra

Post by jorimt » 30 Sep 2020, 10:15

speancer wrote:
30 Sep 2020, 09:08
So what was the hype about with NVIDIA Reflex if it's basically yet another version of "max pre-rendered frames" option, like NVIDIA LLM? If that only helps with GPU-bound scenarios, I dare to say it's useless to basically all high-level professional players (sponsored rigs, powerful tournament PCs) and to any player with high-end rig.
Unlike LLM, Reflex is guaranteed to work in the games that support it, and it effectively eliminates the render queue. LLM only reduces it, and only if the game allows the override.

As for it being "useless" in non-GPU-bound scenarios, not necessarily. With Reflex disabled, the render queue still exists and can still fill at any given point in non-GPU-bound scenarios, just not as much as it would when the system is GPU-bound.

Reflex also ensures that if your system does become GPU-bound at any point (even for just a few frames in a more demanding scene), input lag/buffering won't increase due to the render queue.
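
One way to picture that last part is "just-in-time" frame pacing: delay the start of each frame's CPU work so it finishes roughly when the GPU frees up, rather than letting finished frames pile up in the queue. A purely conceptual sketch of the idea (not NVIDIA's actual implementation; PacerState and the millisecond figures are made up for illustration):

```cpp
#include <chrono>
#include <thread>

using frame_clock = std::chrono::steady_clock;
using ms = std::chrono::duration<double, std::milli>;

// Hypothetical state a pacer would have to track from frame to frame.
struct PacerState {
    frame_clock::time_point gpu_free_at; // when the GPU is expected to finish its current work
    ms estimated_cpu_time{4.0};          // rolling estimate of CPU frame time (assumed)
};

// Delay the start of CPU work so it ends just as the GPU becomes free, then the
// engine samples input as late as possible. sleep_until with a time already in
// the past returns immediately, so a CPU-bound frame is simply not delayed.
static void pace_frame_start(const PacerState& st) {
    auto target = st.gpu_free_at -
                  std::chrono::duration_cast<frame_clock::duration>(st.estimated_cpu_time);
    std::this_thread::sleep_until(target);
    // ...now sample input, run the CPU frame, and submit to the GPU.
}

int main() {
    // Pretend the GPU will be busy for another 10 ms: the pacer waits ~6 ms,
    // so the ~4 ms of CPU work lands right as the GPU frees up.
    PacerState st{frame_clock::now() + std::chrono::milliseconds(10), ms{4.0}};
    pace_frame_start(st);
}
```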
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

Post Reply