
Re: LLM On vs ultra

Posted: 16 Apr 2020, 02:36
by bapt337
jorimt wrote:
15 Apr 2020, 10:40
bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't get maxed out at 40% usage ... the new Ryzen should fix this
Yes, CS:GO is entirely CPU-limited, which means the faster your CPU's single-core performance, the higher the potential FPS for that game.
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows how capping FPS doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.

Alright, I was thinking the same; the results were a bit odd.
I've got a question: if you cap FPS at 61 for 60Hz with V-SYNC on, and it's a GPU-bound situation (CPU faster than GPU), then the maximum pre-rendered frames is always 1, because I force it by giving only one extra frame (cap of 61), even if the buffer size is 2 or 3, right?
But if FPS drops to 45 FPS, for example, then in this case the frame buffer is going to fill up, because I'm feeding it 61 FPS (the FPS cap). If the GPU can only render 45 frames in one second, that leaves 16 extra frames (61-45), so the buffer will be full and keep being filled by the extra frames, and that's the moment the max pre-rendered frames setting takes effect, if I'm right.

Re: LLM On vs ultra

Posted: 16 Apr 2020, 07:26
by andrelip
jorimt wrote:
15 Apr 2020, 10:40
bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't get maxed out at 40% usage ... the new Ryzen should fix this
Yes, CS:GO is entirely CPU-limited, which means the faster your CPU's single-core performance, the higher the potential FPS for that game.
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows how capping FPS doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.
I like that they're offering a counterpoint to incomplete information from Battle(non)sense. There are 3 main sources of increased input lag (as always, for V-SYNC off):

---

1- Queued frames:

The GPU receiving frames faster than it can process them. The longer the queue and the faster frames arrive, the more input lag.

2- Render Time:

Even with 0 frames in the queue, you still have CPU time + GPU time. So if your game spends 4 ms on the CPU and 3 ms on the GPU, it renders a frame in 7 ms. Even if your GPU is NOT the bottleneck, you can still improve input lag by reducing graphics settings. For CS:GO this usually hits diminishing returns (< 0.5 ms per frame of GPU time) without much image degradation, but for some other games it matters a lot.

3- Blocks and Artificial Delay:

Input lag should be measured from the moment the engine reads the external data (user inputs and network) to the presentation on the monitor. So if the frame cap software stabilizes the frametime by placing its sleep() between:

a) CPU and GPU
b) GPU and present()

then you can be sure that the duration of this sleep is directly adding to the input lag. If the sleep is placed between frames (between present() and the CPU), then your game is receiving the most recent data and rendering without additional delay, since it retrieves the most recent input and renders it as fast as possible.

You can check this with GPUView. Some other tools like CapFrameX also seem to estimate it, but I don't know their methodology.
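
As a rough illustration of point 3 (not from the video; the limiter, timings, and numbers below are all made up for the example), here is a minimal Python sketch of a single frame where a hypothetical limiter's wait is placed either in the middle of the frame (between the CPU and GPU work, case a above), or after present() and before the next input sample. Only the first placement ages the sampled input:

Code: Select all
import time

FRAME_CAP_S = 1 / 61   # hypothetical 61 FPS cap
CPU_TIME_S = 0.004     # pretend CPU work per frame (made-up number)
GPU_TIME_S = 0.003     # pretend GPU work per frame (made-up number)

def one_frame(wait_before_present):
    """Simulate one frame; return input-to-present latency in ms."""
    input_sampled = time.perf_counter()   # engine reads input/network here
    time.sleep(CPU_TIME_S)                # CPU builds the frame
    if wait_before_present:
        # Limiter waits between the CPU and GPU work (case a): the
        # already-sampled input ages here, so the wait becomes input lag.
        time.sleep(max(0.0, FRAME_CAP_S - CPU_TIME_S - GPU_TIME_S))
    time.sleep(GPU_TIME_S)                # GPU renders
    presented = time.perf_counter()       # present() to the display
    if not wait_before_present:
        # Limiter waits after present(), before the next input sample:
        # the next frame still starts from fresh input, so nothing is added.
        time.sleep(max(0.0, FRAME_CAP_S - (presented - input_sampled)))
    return (presented - input_sampled) * 1000

print("wait before present():", round(one_frame(True), 1), "ms input-to-present")
print("wait after present(): ", round(one_frame(False), 1), "ms input-to-present")

With those made-up numbers (and ignoring OS sleep granularity), the first placement reports roughly the full ~16 ms frame period as input-to-present latency, while the second stays around 7 ms (CPU time + GPU time), even though both pace the frame to ~61 FPS.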

---

With that in mind, you can think of scenarios that are not so obvious, which arise when one of those sources has more impact than the others.

Re: LLM On vs ultra

Posted: 16 Apr 2020, 09:10
by jorimt
bapt337 wrote:
16 Apr 2020, 02:36
I've got a question: if you cap FPS at 61 for 60Hz with V-SYNC on, and it's a GPU-bound situation (CPU faster than GPU), then the maximum pre-rendered frames is always 1, because I force it by giving only one extra frame (cap of 61), even if the buffer size is 2 or 3, right?
No.

The 1 frame difference between 60 FPS and 61 FPS is not the same thing as 1 pre-rendered frame. With pre-rendered frames, it doesn't matter what the average framerate number is (it could be 61, 82, 99, 1003, 25, 42); what matters is how that average framerate is currently being generated.

If the average framerate is 61 because the framerate is uncapped and the GPU is maxed out, it means the system can only generate an average max framerate of 61 FPS, at which point the pre-rendered frames queue fills while the CPU waits for the GPU to be ready for the next frame(s).

However, if the average framerate is 61 because you're using an FPS limiter to keep it there, and the GPU isn't maxed at this limit, then the system could otherwise output a higher average framerate thanks to the remaining CPU/GPU headroom. In that case, since the CPU doesn't need to wait on the GPU, the pre-rendered frames queue is decreased, and the CPU can hand frame information over to the GPU without wait (or with less wait).
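
To put numbers on that, here's a toy Python model (invented CPU/GPU times, not a measurement of any real game or driver) comparing the same ~61 FPS average reached two different ways: uncapped with the GPU maxed out, versus capped at ~61 while the GPU has headroom. Only the first case builds up a queue:

Code: Select all
def simulate(cpu_ms, gpu_ms, cap_ms=0.0, max_queue=3, n=500):
    """Toy render-queue model: returns (avg frames waiting ahead of the GPU,
    avg submit-to-display latency in ms). Illustrative numbers only."""
    submit, done = [], []   # per-frame CPU submit times and GPU finish times
    t_cpu = 0.0
    for i in range(n):
        # CPU work for this frame, plus any pacing from the FPS cap
        t_cpu = max(t_cpu + cpu_ms, (submit[-1] + cap_ms) if submit else 0.0)
        # If the queue is at its limit, the CPU stalls until the GPU drains a slot
        if i >= max_queue:
            t_cpu = max(t_cpu, done[i - max_queue])
        submit.append(t_cpu)
        gpu_start = max(t_cpu, done[-1] if done else 0.0)
        done.append(gpu_start + gpu_ms)
    queued = sum(sum(1 for j in range(i) if done[j] > submit[i]) for i in range(n)) / n
    latency = sum(done[i] - submit[i] for i in range(n)) / n
    return round(queued, 2), round(latency, 1)

# ~61 FPS because the GPU is maxed out (16.4 ms per frame), no cap:
print("GPU-bound, uncapped: ", simulate(cpu_ms=4.0, gpu_ms=16.4))
# ~61 FPS because of a 16.4 ms cap, while the GPU only needs 8 ms per frame:
print("capped with headroom:", simulate(cpu_ms=4.0, gpu_ms=8.0, cap_ms=16.4))

With these made-up numbers, the uncapped GPU-bound run settles at about 2 frames queued and roughly 49 ms of submit-to-display latency, while the capped run with headroom stays at 0 queued frames and about 8 ms, even though both average about 61 FPS.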
bapt337 wrote:
16 Apr 2020, 02:36
But if FPS drops to 45 FPS, for example, then in this case the frame buffer is going to fill up, because I'm feeding it 61 FPS (the FPS cap). If the GPU can only render 45 frames in one second, that leaves 16 extra frames (61-45), so the buffer will be full and keep being filled by the extra frames, and that's the moment the max pre-rendered frames setting takes effect, if I'm right.
Again, the pre-rendered frames queue isn't directly related to the difference in average framerate between scenario A and B; it's related to how that average framerate is being generated at any given point.

In your posed scenario, uncapped 45 FPS is going to have a higher pre-rendered frames queue because whatever scene is occurring is giving the GPU a harder time than the capped 61 FPS scenario; in the former scenario, the GPU is maxed out, and in the latter scenario, it is not. That's what makes the primary difference with the pre-rendered frames queue.
andrelip wrote:
16 Apr 2020, 07:26
With that in mind, you can think in scenarios that is not too obvious and that is caused when one of that sources have more impact than the others.
Yeah, I haven't watched it since it originally released, but I vaguely recall...

1. They may have been conflating different causes of input lag, which didn't muddy their results so much as their conclusions about those results.
2. They didn't seem to understand that external FPS limiters reduce input lag less than some in-game limiters, and further, that some in-game limiters reduce input lag about the same as external limiters.

The whole point was to measure the input lag difference incurred by GPU usage only, but instead, they may have conflated that with differences in FPS limiter input lag, and possibly even V-SYNC and framerate input lag.

Re: LLM On vs ultra

Posted: 17 Apr 2020, 08:45
by bapt337
andrelip wrote:
16 Apr 2020, 07:26
jorimt wrote:
15 Apr 2020, 10:40
bapt337 wrote:
15 Apr 2020, 03:59
If only the game could use multiple threads instead of relying on single-core performance, maybe the CPU wouldn't get maxed out at 40% usage ... the new Ryzen should fix this
Yes, CS:GO is entirely CPU-limited, which means the faster your CPU's single-core performance, the higher the potential FPS for that game.
bapt337 wrote:
15 Apr 2020, 03:59
EDIT: I've found this video; it shows how capping FPS doesn't always improve input lag, and can sometimes even make it worse:
https://youtu.be/VtSfjBfp1LA
I've seen this video before, and it was, in fact, discussed in another thread on this forum (can't recall which one at the moment). Their testing methodology is unclear (if not flawed) and doesn't align with any previous findings, so I'd take their results with a grain of salt.

I'm still not sure what they were trying to do there, or how they ultimately came to their results. The whole video was confusing.
I like that they're offering a counterpoint to incomplete information from Battle(non)sense. There are 3 main sources of increased input lag (as always, for V-SYNC off):

---

1- Queued frames:

The GPU receiving frames faster than it can process them. The longer the queue and the faster frames arrive, the more input lag.

2- Render Time:

Even with 0 frames in the queue, you still have CPU time + GPU time. So if your game spends 4 ms on the CPU and 3 ms on the GPU, it renders a frame in 7 ms. Even if your GPU is NOT the bottleneck, you can still improve input lag by reducing graphics settings. For CS:GO this usually hits diminishing returns (< 0.5 ms per frame of GPU time) without much image degradation, but for some other games it matters a lot.

3- Blocks and Artificial Delay:

Input lag should be measured from the moment the engine reads the external data (user inputs and network) to the presentation on the monitor. So if the frame cap software stabilizes the frametime by placing its sleep() between:

a) CPU and GPU
b) GPU and present()

then you can be sure that the duration of this sleep is directly adding to the input lag. If the sleep is placed between frames (between present() and the CPU), then your game is receiving the most recent data and rendering without additional delay, since it retrieves the most recent input and renders it as fast as possible.

You can check this with GPUView. Some other tools like CapFrameX also seem to estimate it, but I don't know their methodology.

---

With that in mind, you can think of scenarios that are not so obvious, which arise when one of those sources has more impact than the others.
I think I finally understand. I was thinking the buffer held 60 frames per second, but the frame generation process applies to each frame one by one, not to all the frames in a second. So whatever the framerate is, the pre-rendering method applies per frame, and the CPU hands frames to the GPU in "real time" as long as the CPU doesn't have to wait on the GPU. If the GPU is maxed out, all frames that can't be handed off "in time" by the CPU to the GPU are queued in the buffer, and that's where the LLM setting applies.

Re: LLM On vs ultra

Posted: 30 Sep 2020, 04:30
by axaro1
How does Radeon Anti-Lag compare to LLM On or Ultra? Should I keep it enabled if I'm cpu/engine bound?

Re: LLM On vs ultra

Posted: 30 Sep 2020, 08:10
by slaver01
I play Fortnite 1080p at low resolution, 240Hz. CPU max 50%/60%, GPU 50%/60%. How could I get an advantage with LLM ON or LLM OFF?

Re: LLM On vs ultra

Posted: 30 Sep 2020, 08:34
by jorimt
axaro1 wrote:
30 Sep 2020, 04:30
How does Radeon Anti-Lag compare to LLM On or Ultra? Should I keep it enabled if I'm cpu/engine bound?
Anti-Lag is effectively the same as LLM Ultra.

See:

[embedded video]

slaver01 wrote:
30 Sep 2020, 08:10
I play Fortnite 1080p at low resolution, 240Hz. CPU max 50%/60%, GPU 50%/60%. How could I get an advantage with LLM ON or LLM OFF?
1. Fortnite now has the "Reflex" setting that replaces LLM:
viewtopic.php?f=10&t=7522
2. Both Reflex and LLM only empty/reduce the render queue in GPU-bound scenarios. Since your system is not GPU-bound, your system's render queue is already reduced/empty.

That said, you could still enable Reflex and see if you feel any improvement, but any improvement it makes won't be as impactful as if you were GPU-bound.

Re: LLM On vs ultra

Posted: 30 Sep 2020, 09:08
by speancer
jorimt wrote:
30 Sep 2020, 08:34
Both Reflex and LLM only empty/reduce the render queue in GPU-bound scenarios. Since your system is not GPU-bound, your system's render queue is already reduced/empty.
So what was the hype about with NVIDIA Reflex if it's basically yet another version of the "max pre-rendered frames" option, like NVIDIA LLM? If that only helps in GPU-bound scenarios, I dare say it's useless to basically all high-level professional players (sponsored rigs, powerful tournament PCs) and to any player with a high-end rig.

Re: LLM On vs ultra

Posted: 30 Sep 2020, 09:15
by Meowchan
speancer wrote:
30 Sep 2020, 09:08
So what was the hype about with NVIDIA Reflex if it's basically yet another version of the "max pre-rendered frames" option, like NVIDIA LLM? If that only helps in GPU-bound scenarios, I dare say it's useless to basically all high-level professional players and any player with a high-end rig.
Pretty much.
Now buy our $700 MSRP, $1,000 actual GPU before AMD releases theirs.

RTX Voice seems nice, as does NVENC if you're streaming. But neither of those is, strictly speaking, required for 'gaming'.

Re: LLM On vs ultra

Posted: 30 Sep 2020, 10:15
by jorimt
speancer wrote:
30 Sep 2020, 09:08
So what was the hype about with NVIDIA Reflex if it's basically yet another version of "max pre-rendered frames" option, like NVIDIA LLM? If that only helps with GPU-bound scenarios, I dare to say it's useless to basically all high-level professional players (sponsored rigs, powerful tournament PCs) and to any player with high-end rig.
Unlike LLM, Reflex is guaranteed to work in the games that support it, and it eliminates the render queue. LLM only reduces it, and only if the game allows the override.

As for it being "useless" in non-GPU-bound scenarios, not necessarily. With Reflex disabled, the render queue still exists, and can still fill at any given point in non-GPU-bound scenarios, just not as much as it would when the system is GPU-bound.

Reflex also ensures that if your system does become GPU-bound at any point (even for just a few frames in a more demanding scene), input lag/buffering won't increase due to the render queue.
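
To see why eliminating the queue (rather than only reducing it) matters in those GPU-bound moments, here's some toy arithmetic (my own illustrative numbers; this is not NVIDIA's actual mechanism, just the steady-state math of a frame queue): each frame already in flight ahead of yours costs roughly one GPU frame time before yours can be displayed.

Code: Select all
# Toy arithmetic (illustrative only): steady-state latency from CPU submit to
# display when the GPU is the bottleneck, for different render-queue depths.
GPU_MS = 16.4                        # pretend GPU frame time (~61 FPS, GPU-bound)
for frames_in_flight in (3, 2, 1):   # 1 is roughly "queue kept empty"
    print(frames_in_flight, "frame(s) in flight ->",
          round(frames_in_flight * GPU_MS, 1), "ms of queue latency")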