Frame time differences between NVidia and AMD?

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Frame time differences between NVidia and AMD?

Post by Chief Blur Buster » 24 Nov 2020, 18:50

Throwing more healthy logs into the fire (instead of gasoline fuel)...

The reviewer art of measuring microseconds is still in its infancy. It'll eventually get better, and critiques are definitely deserved (though not blanket disdain), while still fully respecting the reviewers.

Some are just end users like many forum members, striving to do more, and sometimes using blunt hammers instead of the surgical tools of the trade. It still gets the job done (an erratic 400fps is still better than the smooth 5fps of the 386SX-16 days playing Test Drive, Flight Simulator or Wing Commander).

While brute fps is not useless, we remember the days of flawed framepacing, like what was seen during the CrossFire/SLI era. Even after that was mostly fixed, many smaller jitter problems remain. And framepacing becomes an increasingly bigger problem in the refresh rate race to retina refresh rates, because gametime:photontime jitter can be caused by so many things (programmer, OS, engine, drivers, etc.) and becomes more visible at higher Hz.
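For anyone who wants to put numbers on their own framepacing, here is a minimal sketch (my assumptions: Python, and a list of per-frame present timestamps you have already exported from a capture tool such as PresentMon) that reports how much the frametimes wander around the ideal frametime for the refresh rate:

Code:

from statistics import mean, stdev

def framepacing_stats(present_times_s, refresh_hz):
    # Frame-to-frame deltas in milliseconds.
    deltas_ms = [(b - a) * 1000.0 for a, b in zip(present_times_s, present_times_s[1:])]
    ideal_ms = 1000.0 / refresh_hz
    p99_ms = sorted(deltas_ms)[int(0.99 * (len(deltas_ms) - 1))]
    return {
        "avg_frametime_ms": mean(deltas_ms),
        "stdev_ms": stdev(deltas_ms),          # spread of the frametimes
        "p99_frametime_ms": p99_ms,            # worst ~1% of frames
        "avg_jitter_vs_ideal_ms": mean(abs(d - ideal_ms) for d in deltas_ms),
    }

# Hypothetical 360 Hz capture with a single 4 ms hiccup at frame 50.
timestamps = [i / 360 for i in range(100)]
timestamps[50:] = [t + 0.004 for t in timestamps[50:]]
print(framepacing_stats(timestamps, refresh_hz=360))

Note this only looks at the photontime side (present-to-present pacing); catching gametime:photontime divergence also needs the game's own simulation timestamps.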

Adding useful link:
The Amazing Human Visible Feats of the Millisecond, including a few situations where microseconds cascade to human-visible side effects.

By all means, it is not complete, but it is a masterclass of unintended consequences for the refresh rate race to retina refresh rates, where the Vicious Cycle Effect lifts the veil (higher Hz and resolutions reveal tinier temporal side effects more easily, including accumulated “deaths by millions of nanoseconds” combining in unforeseen and unintended ways to create human-perceptible consequences).
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations. If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

ffs_
Posts: 47
Joined: 24 Jul 2020, 00:57

Re: Frame time differences between NVidia and AMD?

Post by ffs_ » 25 Nov 2020, 06:46

schizobeyondpills wrote:
21 Nov 2020, 15:14
Note you can't get the full benefit of this card unless you run an optimized PC like me.
schizobeyondpills wrote:
21 Nov 2020, 15:14
specialized hardware - CLASSIFIED tier low latency stuff 👀)
schizobeyondpills wrote:
24 Nov 2020, 18:36
I chose not to reveal it. Simple as that. Why would I give out things for free when no one appreciates what I have to say?
Sounds like trolling, but ok.
mossfalt wrote:
24 Nov 2020, 12:15
I'm looking to buy new RAM, what do you consider good RAM?
https://www.techpowerup.com/274708/colo ... eries-cpus

MaxTendency
Posts: 59
Joined: 22 Jun 2020, 01:47

Re: Frame time differences between NVidia and AMD?

Post by MaxTendency » 28 Nov 2020, 18:49

deama wrote:
23 Nov 2020, 18:51
Use a 1000fps camera on at least a 240Hz monitor and record overall input lag, but switch the MHz of the GPUs, e.g. downclock the AMD down to 1500MHz (but keep fps above 240), and see what difference that will make.

With my limited experimentation with CPU clock speeds, higher clock speed will definitely increase input lag
1. Even at same fps, higher clocks = lower latency.
This feature overrides the power saving features in the GPU to allow the GPU clocks to stay high when heavily CPU-bound. Even when the game is CPU-bound, longer rendering times add latency. Keeping the clocks higher can reduce latency slightly when the GPU is significantly underutilized and the CPU submits the final rendering work in a large batch.
source
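To put toy numbers on point 1 (a sketch with invented render times, not a measurement): when the CPU dictates the frame cadence, the GPU's render time still decides how long after the input sample the finished frame can hit the screen, so lower clocks add latency even though the fps counter never moves.

Code:

# Toy model: CPU submits a frame every 1/240 s (input sampled at submit time),
# and with VSYNC OFF the frame becomes visible once the GPU finishes it.
# Render times below are invented purely for illustration.

def simulate(gpu_render_ms, frames=1000, cpu_interval_ms=1000.0 / 240.0):
    latencies = []
    for i in range(frames):
        t_submit = i * cpu_interval_ms           # input sampled here
        t_visible = t_submit + gpu_render_ms     # GPU done, frame reaches the screen
        latencies.append(t_visible - t_submit)
    fps = 1000.0 / cpu_interval_ms               # unchanged by GPU clock speed
    return fps, sum(latencies) / len(latencies)

for label, render_ms in (("high GPU clocks", 1.5), ("power-saving clocks", 3.5)):
    fps, lat_ms = simulate(render_ms)
    print(f"{label}: {fps:.0f} fps, ~{lat_ms:.1f} ms of render latency per frame")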

2. 1000fps cameras aren't a good measuring tool. Even with LDAT, which has far higher accuracy compared to a 1kfps camera, it's still click-to-photon and suffers from the same problems.
viewtopic.php?f=10&t=7622&p=59454#p58482
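As a footnote on why 1kfps footage only gets you so far: the camera samples the entire click-to-photon chain in 1 ms steps, so on top of lumping every stage together, each reading carries roughly a millisecond of quantization. A rough sketch of how such footage gets reduced to a number (the frame data here is hypothetical, not a real capture):

Code:

# Reduce hypothetical 1000 fps footage to a click-to-photon number.
# 'led' marks the camera frame where the click LED lights up, 'brightness'
# is the watched screen region per camera frame; frames are 1 ms apart.

FRAME_MS = 1000.0 / 1000   # one camera frame = 1 ms

def click_to_photon_ms(led, brightness, threshold=0.5):
    click_frame = led.index(True)                     # first frame with LED on
    for i in range(click_frame, len(brightness)):
        if brightness[i] >= threshold:                # first visible screen reaction
            return (i - click_frame) * FRAME_MS       # quantized to whole milliseconds
    return None

led = [i >= 10 for i in range(60)]                    # click lands at frame 10
brightness = [0.1] * 28 + [0.9] * 32                  # screen reacts at frame 28
print(click_to_photon_ms(led, brightness), "ms, +/- roughly a frame of quantization")

Everything the chain does internally (mouse, USB polling, engine, driver, display) is folded into that single number, which is the same limitation LDAT has, just at coarser resolution here.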
deama wrote:
23 Nov 2020, 18:51
With my limited experimentation with CPU clock speeds, higher clock speed will definitely increase input lag
3. I think you meant decrease?

deama
Posts: 370
Joined: 07 Aug 2019, 12:00

Re: Frame time differences between NVidia and AMD?

Post by deama » 30 Nov 2020, 12:57

MaxTendency wrote:
28 Nov 2020, 18:49
deama wrote:
23 Nov 2020, 18:51
Use a 1000fps camera on at least a 240Hz monitor and record overall input lag, but switch the MHz of the GPUs, e.g. downclock the AMD down to 1500MHz (but keep fps above 240), and see what difference that will make.

With my limited experimentation with CPU clock speeds, higher clock speed will definitely increase input lag
1. Even at same fps, higher clocks = lower latency.
This feature overrides the power saving features in the GPU to allow the GPU clocks to stay high when heavily CPU-bound. Even when the game is CPU-bound, longer rendering times add latency. Keeping the clocks higher can reduce latency slightly when the GPU is significantly underutilized and the CPU submits the final rendering work in a large batch.
source

2. 1000fps cameras aren't a good measuring tool. Even with LDAT, which has far higher accuracy compared to a 1kfps camera, it's still click-to-photon and suffers from the same problems.
https://forums.blurbusters.com/viewtopi ... 454#p58482
deama wrote:
23 Nov 2020, 18:51
With my limited experimentation with CPU clock speeds, higher clock speed will definitely increase input lag
3. I think you meant decrease?
Yeah, whoops, I meant decrease.

Hotdog Man
Posts: 6
Joined: 01 Dec 2020, 13:35

Re: Frame time differences between NVidia and AMD?

Post by Hotdog Man » 01 Dec 2020, 14:21

Chief Blur Buster wrote:
23 Nov 2020, 03:55

It's true AMD performs better than NVIDIA in some metrics important to certain people/industries.
To make a statement like that you must have some insight into which aspects AMD is superior in. And of course I'm not talking about the FPS/frametime benchmarks that every reviewer and their mother can give you. Can you explain what exactly you meant by this (what metrics)?

lizardpeter
Posts: 208
Joined: 01 Dec 2020, 14:41

Re: Frame time differences between NVidia and AMD?

Post by lizardpeter » 01 Dec 2020, 14:46

schizobeyondpills wrote:
21 Nov 2020, 15:14
Frame times don't reveal anything about frame delivery on the display. Yes, AMD and Nvidia have different approaches to tiled rendering, and the AMD 5700 XT is superior; idk about RDNA2 yet, waiting for the 6900. It's about bandwidth vs latency.

Also, AMD has at least 10x more DPCs than Nvidia with at least 3x lower latency, which might be why it's more responsive and immediate, and not due to different rendering modes, though that also helps a lot.

Testing the 5700 XT vs 1080 Ti vs 2080 Ti, I can safely say the 5700 XT is a few levels of responsiveness above Nvidia and that the 2080 Ti is the worst of all 3.

Note you can't get the full benefit of this card unless you run an optimized PC like me, especially CR1 RAM on a proper signal-path board for that. It's another 40Hz of responsiveness improvement (as well as 10K++ € worth of specialized hardware - CLASSIFIED tier low latency stuff 👀).

Frame times don't reflect the full picture. It's just the time to have your pizza done. Delivering the pizza to your doorstep ON TIME is the hardest and most impactful aspect of latency.
How is the 5700xt more responsive than the 2080 Ti? Can you clarify? Are you sure this has nothing to do with something like the default pre-rendered frame value being different on each card (something that can be changed)?
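For reference on the pre-rendered frames angle, the usual back-of-the-envelope model is that when the GPU is the bottleneck, every frame allowed to queue ahead of the newest one adds roughly one GPU frame-time of latency. A hypothetical sketch of that arithmetic (the frame time and queue depths are made up; not a claim about either vendor's defaults):

Code:

# Back-of-the-envelope latency from the CPU->GPU render queue when GPU-bound.
# Frame time and queue depths are hypothetical, purely for illustration.

def queued_latency_ms(gpu_frame_ms, max_queued_frames):
    # The newest frame waits behind (max_queued_frames - 1) older frames,
    # then spends one more frame-time on its own render.
    return max_queued_frames * gpu_frame_ms

gpu_frame_ms = 1000.0 / 240.0      # GPU-bound at ~240 fps
for depth in (1, 2, 3):            # e.g. different "max pre-rendered frames" style values
    print(f"queue depth {depth}: ~{queued_latency_ms(gpu_frame_ms, depth):.1f} ms of render + queue latency")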
i9 9900k | RTX 2080 Ti | 32 GB 4x8GB B-Die 3600 MT/s CL16 | XV252QF 390 Hz 1080p | AW2518H 240 Hz 1080p | PG279Q 144 Hz 1440p

Razer Viper 8K | Artisan Zero Mid XL | Apex Pro TKL | 1 gbps FiOS (Fiber)

Alpha
Posts: 133
Joined: 09 Jul 2020, 17:58

Re: Frame time differences between NVidia and AMD?

Post by Alpha » 02 Dec 2020, 07:22

lizardpeter wrote:
01 Dec 2020, 14:46

How is the 5700xt more responsive than the 2080 Ti? Can you clarify? Are you sure this has nothing to do with something like the default pre-rendered frame value being different on each card (something that can be changed)?
schizobeyondpills, I enjoy your post. Thank you for taking the time.

Chief, you manage a lot here lol. You've managed to gather such a unique group of skilled humans, it's fascinating. I remember when the internet used to be like this. It's almost like a time gap to the early 2000s around here, before everyone's cups turned full and they became closed-minded dicks on the web. I appreciate you.

lizardpeter, this won't help, but I thought I'd throw this in. I have an Omen 25F on an X470, 16GB of 3600 CL16 memory, and a Ryzen 3600 on this machine. TCP prioritization is optimized on the NIC and it's on its own isolated, dedicated network. My main rig is on the same network scenario (though I recently added a second machine for streaming, so two devices on the subnet): 360Hz display, Ryzen 5900X, X570, 32GB 3800 CL14, and a 2080 Ti @ +175 core / +1100 mem (waiting to see how the 6900XT does, and I couldn't score a 3090), and it doesn't feel as fast as my 5700. Two other of my builds, one X570, the other a 550, both with Nvidia GPUs: same thing.

My response times are typically in the 130s on average and my stupid eyes are extremely sensitive to motion. I don't get motion sick, but I definitely see frames and I can feel minute latency. I don't resolve the blur like others do, so I don't know what it's called or how it's possible, but I see the frames. If you look at my post history you'll see me mention, multiple times, feeling a difference in speed between OD and ULMB on the VG259QM thread before reviews ever hit. I believe the difference was under 1ms or something. It's a curse! I have to try and turn it off just to watch a movie.

Anyway, I have rebuilt, gone over control panel settings like you wouldn't believe, and I play in essentially esports modes with my games of choice fully optimized down to config files, you name it. I am curious if someone can explain it. I don't know enough and am extremely curious. I have not done any testing and my "feeling" could be completely wrong. Maybe I just need to swap gamer chairs (for the judgement record, I don't own any). :D

Curious to see if this gets a quantifiable explanation.

lizardpeter
Posts: 208
Joined: 01 Dec 2020, 14:41

Re: Frame time differences between NVidia and AMD?

Post by lizardpeter » 02 Dec 2020, 22:16

Alpha wrote:
02 Dec 2020, 07:22

lizardpeter, this won't help, but I thought I'd throw this in. I have an Omen 25F on an X470, 16GB of 3600 CL16 memory, and a Ryzen 3600 on this machine. TCP prioritization is optimized on the NIC and it's on its own isolated, dedicated network. My main rig is on the same network scenario (though I recently added a second machine for streaming, so two devices on the subnet): 360Hz display, Ryzen 5900X, X570, 32GB 3800 CL14, and a 2080 Ti @ +175 core / +1100 mem (waiting to see how the 6900XT does, and I couldn't score a 3090), and it doesn't feel as fast as my 5700. Two other of my builds, one X570, the other a 550, both with Nvidia GPUs: same thing.

My response times are typically in the 130s on average and my stupid eyes are extremely sensitive to motion. I don't get motion sick, but I definitely see frames and I can feel minute latency. I don't resolve the blur like others do, so I don't know what it's called or how it's possible, but I see the frames. If you look at my post history you'll see me mention, multiple times, feeling a difference in speed between OD and ULMB on the VG259QM thread before reviews ever hit. I believe the difference was under 1ms or something. It's a curse! I have to try and turn it off just to watch a movie.

Anyway, I have rebuilt, gone over control panel settings like you wouldn't believe, and I play in essentially esports modes with my games of choice fully optimized down to config files, you name it. I am curious if someone can explain it. I don't know enough and am extremely curious. I have not done any testing and my "feeling" could be completely wrong. Maybe I just need to swap gamer chairs (for the judgement record, I don't own any). :D

Curious to see if this gets a quantifiable explanation.
That's really interesting. It sounds like you've really done a lot of optimizing. Can you explain what you've done with the TCP prioritization optimization?

I'm interested to know why you're on Ryzen if you value the latency so much and are that sensitive to it. Hasn't it been proven that even on the end-user experience side Intel has slightly less input lag (even at equal frame rates)? I might be mistaken, but I thought this was documented behavior (only about a 1-3 ms change, however). Of course, we all know that the actual memory latency on Intel is obviously better, but perhaps it doesn't translate to any real-world impact in terms of overall responsiveness (it will be very noticeable in some programs that require faster memory access).

I'm actually about to upgrade my RAM to some B-die to hopefully see some kind of improvement (was looking to upgrade to 32 GB anyway). I have only used Intel CPUs and NVIDIA GPUs, so I have nothing to compare my 2080 Ti to. That's very interesting that you thought you noticed a difference, with the AMD GPU being more responsive. I would think the more powerful GPU would be more responsive and that all inputs registered prior to the frame being rendered would be the same on both, with added responsiveness on the more powerful GPU due to a greater effective "sampling" of all the inputs per second (measured by the frame interval being lower on the card producing a greater FPS). It is also interesting that NVIDIA has been pushing their latency marketing a lot more and developing technologies like Reflex to lower latency. Also, most top players do actually play on NVIDIA GPUs, and I would have thought that they would have switched to AMD GPUs by now if some advantage was found.
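To put rough numbers on the "greater effective sampling" idea (frame-interval arithmetic only, ignoring every other link in the lag chain): an input that lands at a random moment inside a frame interval waits on average about half a frame before the next frame picks it up, so the expected gain from a higher frame rate on this term alone is fairly small.

Code:

# Average wait for an input landing at a random moment inside a frame interval:
# roughly half the frame time. Every other link in the lag chain is ignored here.

def avg_sampling_wait_ms(fps):
    return (1000.0 / fps) / 2.0

for fps in (144, 240, 360):
    print(f"{fps} fps: frametime {1000.0 / fps:.2f} ms, avg sampling wait ~{avg_sampling_wait_ms(fps):.2f} ms")

saving = avg_sampling_wait_ms(240) - avg_sampling_wait_ms(360)
print(f"expected saving from 240 to 360 fps on this term alone: ~{saving:.2f} ms")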

Are there any other optimizations, other than the TCP prioritization, that you would recommend?
i9 9900k | RTX 2080 Ti | 32 GB 4x8GB B-Die 3600 MT/s CL16 | XV252QF 390 Hz 1080p | AW2518H 240 Hz 1080p | PG279Q 144 Hz 1440p

Razer Viper 8K | Artisan Zero Mid XL | Apex Pro TKL | 1 gbps FiOS (Fiber)

howiec
Posts: 183
Joined: 17 Jun 2014, 15:36

Re: Frame time differences between NVidia and AMD?

Post by howiec » 03 Dec 2020, 03:02

Chief Blur Buster wrote:
24 Nov 2020, 18:50
Adding useful link:
The Amazing Human Visible Feats of the Millisecond, including a few situations where microseconds cascade to human-visible side effects.
There are a lot of good threads/posts like this one. Is there a "main post / thread" that contains links to various posts/threads such as this so that they don't get lost?

Alpha
Posts: 133
Joined: 09 Jul 2020, 17:58

Re: Frame time differences between NVidia and AMD?

Post by Alpha » 04 Dec 2020, 18:58

lizardpeter wrote:
02 Dec 2020, 22:16
Alpha wrote:
02 Dec 2020, 07:22

lizardpeter, this won't help, but I thought I'd throw this in. I have an Omen 25F on an X470, 16GB of 3600 CL16 memory, and a Ryzen 3600 on this machine. TCP prioritization is optimized on the NIC and it's on its own isolated, dedicated network. My main rig is on the same network scenario (though I recently added a second machine for streaming, so two devices on the subnet): 360Hz display, Ryzen 5900X, X570, 32GB 3800 CL14, and a 2080 Ti @ +175 core / +1100 mem (waiting to see how the 6900XT does, and I couldn't score a 3090), and it doesn't feel as fast as my 5700. Two other of my builds, one X570, the other a 550, both with Nvidia GPUs: same thing.

My response times are typically in the 130s on average and my stupid eyes are extremely sensitive to motion. I don't get motion sick, but I definitely see frames and I can feel minute latency. I don't resolve the blur like others do, so I don't know what it's called or how it's possible, but I see the frames. If you look at my post history you'll see me mention, multiple times, feeling a difference in speed between OD and ULMB on the VG259QM thread before reviews ever hit. I believe the difference was under 1ms or something. It's a curse! I have to try and turn it off just to watch a movie.

Anyway, I have rebuilt, gone over control panel settings like you wouldn't believe, and I play in essentially esports modes with my games of choice fully optimized down to config files, you name it. I am curious if someone can explain it. I don't know enough and am extremely curious. I have not done any testing and my "feeling" could be completely wrong. Maybe I just need to swap gamer chairs (for the judgement record, I don't own any). :D

Curious to see if this gets a quantifiable explanation.
That's really interesting. It sounds like you've really done a lot of optimizing. Can you explain what you've done with the TCP prioritization optimization?

I'm interested to know why you're on Ryzen if you value the latency so much and are that sensitive to it. Hasn't it been proven that even on the end-user experience side Intel has slightly less input lag (even at equal frame rates)? I might be mistaken, but I thought this was documented behavior (only about a 1-3 ms change, however). Of course, we all know that the actual memory latency on Intel is obviously better, but perhaps it doesn't translate to any real-world impact in terms of overall responsiveness (it will be very noticeable in some programs that require faster memory access).

I'm actually about to upgrade my RAM to some B-die to hopefully see some kind of improvement (was looking to upgrade to 32 GB anyway). I have only used Intel CPUs and NVIDIA GPUs, so I have nothing to compare my 2080 Ti to. That's very interesting that you thought you noticed a difference, with the AMD GPU being more responsive. I would think the more powerful GPU would be more responsive and that all inputs registered prior to the frame being rendered would be the same on both, with added responsiveness on the more powerful GPU due to a greater effective "sampling" of all the inputs per second (measured by the frame interval being lower on the card producing a greater FPS). It is also interesting that NVIDIA has been pushing their latency marketing a lot more and developing technologies like Reflex to lower latency. Also, most top players do actually play on NVIDIA GPUs, and I would have thought that they would have switched to AMD GPUs by now if some advantage was found.

Are there any other optimizations, other than the TCP prioritization, that you would recommend?
TCP and network optimizations are by and large some of the best things a person can do for a competitive edge, easily. I do a lot of professional tournaments, so competing for money makes it a big deal and is why I have a commercial-grade network. For other optimizations, I won't sacrifice image quality down to a potato like some will, but all my OS deployments are custom built by yours truly using Microsoft's framework. These are essentially completely stripped of everything with the exception of security protocols. However, until we see something more serious, like an artificial-intelligence-based solution that doesn't do signature scanning, I live with Defender. Hoping to see some commercial solutions roll out soon. I have my CEH, so I am delicately paranoid about the risk associated with those doors being open. It's unreal how easy it is to hit systems and own them. I can handle the security at the firewall level, but running deep packet inspection, geo filtering, and other intrusion prevention methods costs the microseconds we fight for, and that can cost (and has cost) big money (to me).

I'm a career IT guy by trade, though this year I have seriously considered retiring due to some dumb luck and gaming (more dumb luck, but I am pretty fast). I piggyback off some of the big brains out in the world, especially here on these boards of all places, because there is a legit collection of awesome people and Chief has masterfully managed to keep the community amazing. I implement whatever changes make sense or where I can tell a difference by feel, even if I can't quantify it with tests. It could all be complete BS, and maybe some light being reflected off something hits the eye just right without even being noticed, helping focus or whatever, and bam, fragging out lol. No idea here, truth be told.

The issue with AMD GPUs is simply that they won't get the frames (excluding the 6000 series, which may make sense with the rasterization muscle @ 1080p). In addition, when you're sponsored, you're sponsored. I wouldn't run with a mouse that isn't what I prefer. It probably wouldn't make a difference, but it was no good in my hands. I don't need the income and am already established career-wise, but if you're living in a team house and Logitech's bringing G Pro X Wireless Superlights, that's what you're fragging with (just ordered mine while I wait for Razer to drop the 8000Hz mouse). Chief or someone would have to explain why something feels "ahead", because I clearly don't fully understand, but on the 5700 system, with an easily 60+ fps difference against a moving target, it's almost like being ahead of the 2080 Ti face to face (from network tests I know it's not packets landing first, so it's something else). I am hoping to hear some reasons on this. My theory was that the pipe was organized in a way that prioritizes inputs. Literally no idea if that's even possible.

I really want to get my hands on the 6000 series for just a few minutes. The 6900 XT hits in a few days, and if the 3090 is faster that'd be my choice, but if the 6900 XT feels like the 5700 I'd go that direction and buy a 3080 Ti as a backup or throw it in my other machine. I'd do this even if the 6900 XT was a bit slower. No issues with the 5700, but it'll take a while to get comfortable and confident in AMD's GPU drivers.


My experience with Boost is not so good. Maybe my brain is fired up at times and less at others, but with Boost enabled it can feel worse at times and OK at others. I turn it off now and stick to the older recommended NCP settings.
