Frame time differences between NVidia and AMD?

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
Meowchan
Posts: 40
Joined: 17 Jun 2020, 02:06

Frame time differences between NVidia and AMD?

Post by Meowchan » 20 Nov 2020, 17:43

I am interested in the difference between the RTX 3080 and the 6800 XT from an eSports perspective. This image is taken from Gamers Nexus' review of the 6800 XT:

[Image: chart from Gamers Nexus' RX 6800 XT review]

It's not representative of competitive shooter settings: it's 4K / Ultra / ~50 fps instead of 1080p / lowest settings / ~300-500 fps. But it raises the question of what the same comparison graph would look like at 1080p lowest. So far I haven't seen such an image online; if someone knows of one, please share, or even make one if you have the means. Perhaps we could even get someone with an NVidia card and someone with an AMD card to make two charts and use those to compare.

Basically, what I'm trying to figure out is whether AMD has an edge in eSports titles.
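
If anyone wants to try making such a chart, here is roughly how I imagine it could be done: log frame times on each card with something like PresentMon and summarize the two captures side by side. Just a sketch; the CSV column name (MsBetweenPresents) and the file names are my assumptions, so adjust them to whatever your capture tool actually writes.

Code: Select all

# Sketch: compare frame-time captures from two GPUs (e.g. PresentMon CSV logs).
# Assumes a "MsBetweenPresents" column; the file names are placeholders.
import csv
import statistics

def load_frame_times(path, column="MsBetweenPresents"):
    with open(path, newline="") as f:
        return [float(row[column]) for row in csv.DictReader(f) if row.get(column)]

def summarize(label, frame_times_ms):
    ordered = sorted(frame_times_ms)
    p99 = ordered[int(0.99 * (len(ordered) - 1))]
    print(f"{label}: avg {statistics.mean(frame_times_ms):.2f} ms, "
          f"median {statistics.median(frame_times_ms):.2f} ms, "
          f"99th percentile {p99:.2f} ms, "
          f"stdev {statistics.pstdev(frame_times_ms):.3f} ms")

summarize("RTX 3080  ", load_frame_times("3080_1080p_low.csv"))
summarize("RX 6800 XT", load_frame_times("6800xt_1080p_low.csv"))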

schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Frame time differences between NVidia and AMD?

Post by schizobeyondpills » 21 Nov 2020, 15:14

Frame times don't reveal anything about frame delivery on the display. Yes, AMD and Nvidia take different approaches to tiled rendering, and the AMD 5700 XT is superior; I don't know about RDNA2 yet, I'm waiting for the 6900. It's about bandwidth vs latency.

https://youtu.be/Nc6R1hwXhL8

Also, AMD has at least 10x more DPCs than Nvidia with at least 3x lower latency, which might be why it's more responsive and immediate, rather than because of the different rendering modes, though those also help a lot.

Testing the 5700 XT vs 1080 Ti vs 2080 Ti, I can safely say the 5700 XT is a few levels of responsiveness above Nvidia, and the 2080 Ti is the worst of all three.

Note you can't get the full benefit of this card unless you run an optimized PC like mine, especially CR1 RAM on a board with a proper signal path; that's another 40 Hz worth of responsiveness improvement (as well as 10K++ € of specialized hardware, CLASSIFIED-tier low latency stuff πŸ‘€).

Frame times don't reflect the full picture. They're just the time it takes to have your pizza made; delivering the pizza to your doorstep ON TIME is the hardest and most impactful part of latency.

Meowchan
Posts: 40
Joined: 17 Jun 2020, 02:06

Re: Frame time differences between NVidia and AMD?

Post by Meowchan » 22 Nov 2020, 04:04

What a nice video. Shame the creator hasn't done more.

How does the latency difference manifest itself? Would there be a measurable difference between an AMD and an NVidia card at the same fps? What about the two at 100% utilization in GPU-limited games? Even if AMD only shines on an ultra-optimized system, I would expect that someone out there has such a system in place and has run two tests with AMD/NVidia, replacing only the GPU.

Looking around, people have done tests of AMD vs Intel for input lag, such as this one [video], but I can't find similar videos for video cards.

Would we see greater fluctuations in frame times on NVidia compared to AMD in non-GPU-limited scenarios? If we did, that would suggest AMD has an edge in consistency, and thus in competitive gaming.
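
To make "fluctuation" concrete, I would compare something like frame-to-frame jitter and 1% lows between the two cards. A rough sketch with made-up numbers, purely to show what I mean by consistency:

Code: Select all

# Sketch: quantify frame-time "fluctuation" for two hypothetical captures.
# The sample numbers are invented for illustration, not measured data.
import statistics

def fluctuation_stats(frame_times_ms):
    # absolute change between consecutive frame times = frame-to-frame jitter
    jitter = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    worst_1pct = sorted(frame_times_ms)[int(0.99 * (len(frame_times_ms) - 1)):]
    return {
        "avg_frame_time_ms": round(statistics.mean(frame_times_ms), 3),
        "avg_jitter_ms": round(statistics.mean(jitter), 3),
        "1pct_low_fps": round(1000.0 / statistics.mean(worst_1pct), 1),
    }

card_a = [2.0, 2.1, 2.0, 2.2, 2.0, 2.1]   # hypothetical: steady frame times
card_b = [1.8, 2.6, 1.7, 2.9, 1.8, 2.4]   # hypothetical: similar average, more jitter
print("Card A:", fluctuation_stats(card_a))
print("Card B:", fluctuation_stats(card_b))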

schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Frame time differences between NVidia and AMD?

Post by schizobeyondpills » 22 Nov 2020, 16:21

FPS is a useless metric. The more frames you have, the more you need to deliver on time. One millisecond of latency is worth 1000 fps.

Even if you can produce 1000 fps, you still have to deliver them to your monitor. Modern systems are heavily crippled in latency due to so many error correction layers and bus-saving features like coalescing and write combining.

It's like when you are hungry: do you order 1000 pizzas that will take hours to be made and delivered, or do you want one pizza right now?

Sure, the 3090 has 50% higher performance in fps measures, but my 5700 XT at 2.4 GHz clocks is like 500 Hz more responsive with 50% less fps.

People got lost somewhere and think about frames per second without the context of how and when those frames get sent out and delivered, which is what actually matters, since that's the hard part of being real-time: beating the speed of light.

I'm sorry to shatter your bubble, but all the reviewers are useless; they exist for mass appeal across all ages from 5 to 50, meaning they don't produce accurate or useful results for people who need low input lag and real-time simulation responsiveness. Such tests require at least 500+ variables to be tuned (to remove any jitter or noise) and listed.

Especially since all consumer products cover a huge range of edge cases with a lot of power-saving features / undocumented settings / tweaks; stripping those away is what turns this family-car-tier, handles-every-edge-case thing into a low-latency real-time formula.

The GN video can't even show the Windows version or use proper RAM.

Actual tests comparing GPU latency do exist, because I have done them, but I don't share these things publicly, and especially not for free. If you can sign an NDA and pay at least $50k for my time, then you will have all your answers.


The best advice I can give you is to do your own research and not trust what people or reviewers say, because they appeal to mass reach and marketing goals rather than objective truth that would sever half of their audience into a negative outburst of tears.

Not to mention latency is ungraspable to 99.9999% of mere mortals, so they chase FPS. "Per second" is a perspective of measuring the past (fps = frames / (t2 - t1)), which is as wrong as it can be. We want real-timeness, aka low latency, a measure of propagation delay.

Why would you nerf 8 billion transistors working at sub-nanosecond clock cycles into something measured per second, which hides 9 decimal digits of information, and expect to see anything about how that system performs?
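
To spell out the fps-to-frame-time arithmetic behind that (illustrative numbers only, not measurements of any card):

Code: Select all

# Sketch: fps is just the reciprocal of frame time, so small millisecond
# differences vanish inside large fps numbers.
def frame_time_ms(fps):
    return 1000.0 / fps

for fps in (60, 144, 240, 360, 500, 1000):
    print(f"{fps:>5} fps = {frame_time_ms(fps):6.3f} ms per frame")

# Shaving 1 ms off a 2 ms frame is the entire jump from 500 fps to 1000 fps:
print(1000.0 / (frame_time_ms(500) - 1.0))   # -> 1000.0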

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Frame time differences between NVidia and AMD?

Post by Chief Blur Buster » 23 Nov 2020, 03:55

It's true that FPS is a useless metric in many industries.

It's true AMD performs better than NVIDIA in some metrics important to certain people/industries.

But... around here, it is best not to diss mainstream reviewers -- they are useful to the audiences they cater to -- even if they are not of use to military-precision systems, real-time operating system use cases, or big-budget ultralow-latency systems. Being snobbish and dissing reviewers isn't what Blur Busters is about. There is still use in the FPS metric even if the details matter (better-framepaced lower FPS can be superior, as an example). So, can the blanket disdain, please, although it's okay to critique reviewers in a productive way (like my complaints about improper pursuit camera processes, etc). Thanks.

Much of the mainstream doesn't grasp the advanced concepts, and reviewers still have their raison d'être, even if incremental improvements are needed.

Meowchan
Posts: 40
Joined: 17 Jun 2020, 02:06

Re: Frame time differences between NVidia and AMD?

Post by Meowchan » 23 Nov 2020, 04:24

schizobeyondpills, all that is well and good. But at the end of the day, if the difference between AMD and NVidia cannot be measured in milliseconds, then I don't see why I should care about it latency-wise for gaming. Can it be shown or can't it? The last thing I want is another Roach, with discussions turning into feelorycraft and bedtime stories.

Nobody's paying you anything. Share, or don't. So far I remain unconvinced.

Chief Blur Buster
Site Admin
Posts: 11653
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada

Re: Frame time differences between NVidia and AMD?

Post by Chief Blur Buster » 23 Nov 2020, 04:34

I do admit -- reviewers' tools need to continue to improve!

deama
Posts: 370
Joined: 07 Aug 2019, 12:00

Re: Frame time differences between NVidia and AMD?

Post by deama » 23 Nov 2020, 18:51

It would be an interesting idea to take an AMD GPU and an NVidia GPU, install them on a system with a clean OS (Windows 7?), then use a 1000 fps camera on at least a 240 Hz monitor and record overall input lag while varying the clock speeds of the GPUs: e.g. downclock the AMD card to 1500 MHz (but keep fps above 240) and see what difference that makes, then do something similar with the NVidia card, but maybe overclock it instead.

With my limited experimentation with CPU clock speeds, a higher clock speed will definitely decrease input lag; the question is by how much.
I'm guessing it's a giant pipeline: increasing CPU clock speed shrinks the CPU portion of the input lag chain, increasing GPU clock speed shrinks the GPU portion, same with RAM, etc.
Problem is, I don't really know what the whole chain looks like per se, as in, what the percentages are: how much does CPU clock speed represent in the chain? What about the GPU?

So many questions, but not enough money to measure it all...
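
If someone does run that camera test, converting the footage into numbers is simple enough. A minimal sketch, assuming you mark the camera frame where the click happens (or where an LED wired to the button lights up) and the frame where the screen first reacts; the frame numbers below are made up:

Code: Select all

# Sketch: turn high-speed camera footage into an input-lag figure.
# At 1000 fps, each camera frame is 1 ms.
CAMERA_FPS = 1000

def input_lag_ms(click_frame, first_reaction_frame, camera_fps=CAMERA_FPS):
    return (first_reaction_frame - click_frame) * 1000.0 / camera_fps

# Hypothetical runs for one GPU at two clock speeds: (click frame, reaction frame)
runs_stock = [(120, 145), (300, 322), (510, 534)]
runs_downclocked = [(100, 131), (280, 309), (450, 483)]

for label, runs in (("stock clocks", runs_stock), ("1500 MHz", runs_downclocked)):
    lags = [input_lag_ms(c, r) for c, r in runs]
    print(f"{label}: {sum(lags) / len(lags):.1f} ms average over {len(lags)} clicks")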

mossfalt
Posts: 37
Joined: 23 Nov 2020, 08:43

Re: Frame time differences between NVidia and AMD?

Post by mossfalt » 24 Nov 2020, 12:15

schizobeyondpills wrote: ↑
21 Nov 2020, 15:14
Frame times don't reveal anything about frame delivery on the display. Yes, AMD and Nvidia take different approaches to tiled rendering, and the AMD 5700 XT is superior; I don't know about RDNA2 yet, I'm waiting for the 6900. It's about bandwidth vs latency.

https://youtu.be/Nc6R1hwXhL8

Also, AMD has at least 10x more DPCs than Nvidia with at least 3x lower latency, which might be why it's more responsive and immediate, rather than because of the different rendering modes, though those also help a lot.

Testing the 5700 XT vs 1080 Ti vs 2080 Ti, I can safely say the 5700 XT is a few levels of responsiveness above Nvidia, and the 2080 Ti is the worst of all three.

Note you can't get the full benefit of this card unless you run an optimized PC like mine, especially CR1 RAM on a board with a proper signal path; that's another 40 Hz worth of responsiveness improvement (as well as 10K++ € of specialized hardware, CLASSIFIED-tier low latency stuff πŸ‘€).

Frame times don't reflect the full picture. They're just the time it takes to have your pizza made; delivering the pizza to your doorstep ON TIME is the hardest and most impactful part of latency.


I'm looking to buy new RAM; what do you consider good RAM?

schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Frame time differences between NVidia and AMD?

Post by schizobeyondpills » 24 Nov 2020, 18:36

Meowchan wrote: ↑
23 Nov 2020, 04:24
schizobeyondpills, all that is well and good. But at the end of the day, if the difference between AMD and NVidia cannot be measured in milliseconds, then I don't see why I should care about it latency-wise for gaming. Can it be shown or can't it? The last thing I want is another Roach, with discussions turning into feelorycraft and bedtime stories.

Nobody's paying you anything. Share, or don't. So far I remain unconvinced.
Evolution is not a religion; it doesn't need believers.

Did I ever say it can't be shown? Absolutely it can. You just need proper, accurate, and precise tools, as well as knowing where and how to apply them, like that GitHub tool above to compare different rasterization and tiling modes between GPUs.

What all these reviewers don't understand, or choose not to because of their mass-appeal goals for revenue, is how to do an in-depth, accurate benchmark rather than a click-to-photon joke of a single pixel. Last time I checked, video games are a continuous stream of frames onto the display and don't require clicking to have a frame updated. But every one of these reviewers keeps circlejerking with click-to-photon, and then once they see a small difference, say 0.05 ms, they call it an error. But you see, if you take 240 fps and multiply by 0.05 ms, you get 12 ms of difference per second. These are just made-up numbers, but it still shows how no one is testing actual frame latency; they fail at both the proper test and how to reason about it. There's also consistency, skew, heat problems, and testing under a real-life scenario rather than on a local server without network load or with low rendering load, etc.

I choose not to reveal it, simple as that. Why would I give things out for free when no one appreciates what I have to say?

There are DisplayPort and HDMI analyzers, oscilloscopes, frame logging, the OS, etc. If you aren't paying me at least six figures, then feel free to set all of it up yourself. I don't need mass appeal for ad revenue to nerf science and falsify untuned bench environments with invalid results that favor both sides of A vs B for affiliate sales.

It's an insult to mathematics itself when I look at these benchmarks that use per-second values. CPUs operate at sub-nanosecond cycles, meaning
5,000,000,000 cycles per second
vs
"5 GHz".

People have been brainwashed into a wrong perception of numbers by marketing companies. A perfectly stable 100.0000 "fps" is far, far smoother and more responsive than an unstable 400 fps reported as 400.291, for example. When you reason about time, its measures are different from those of space, meaning 10.5 ms SHOULD UNDER NO CIRCUMSTANCES EVER BE INTERPRETED THE SAME AS 10.5 m (meters):
the 0.5 ms part is interpreted as jitter, while the 10 ms part is interpreted as frequency/latency.

So how do you expect to see anything about a system with MULTIPLE SUB-NANOSECOND COMPONENTS being "benchmarked" using milliseconds or per-second values, when you strip away 9 digits of the measured value?
And no, those digits all have huge meaning and importance, because the perspective and context is the CPU, not your tick-tock mind and its perception of time.

If one pixel on AMD takes (for example) 10.0008 ms and on Nvidia it takes 10.0009851 ms, then what will using milliseconds rounded up show? Oh wait, we multiply that "small" difference by the number of pixels, by the number of frames per second, and by the speed of the raster line, and... omg?!

Anyone can do click-to-photon tests for under $100. Do they have any value? Sure, if you measure 60 Hz vs 144 Hz, where the gap is so huge even a blind man can tell the difference. For decimal-level or smaller margins you need 10x to 100x the resolution, and more.
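
Spelling out that multiplication with the same made-up numbers (pure arithmetic, not measured data):

Code: Select all

# Sketch: how a "tiny" per-frame difference adds up over a second, using the
# made-up 0.05 ms figure from the post above.
per_frame_diff_ms = 0.05   # the difference a reviewer might dismiss as "error"
fps = 240

accumulated_ms_per_second = per_frame_diff_ms * fps
print(f"{per_frame_diff_ms} ms per frame x {fps} fps = "
      f"{accumulated_ms_per_second} ms of accumulated difference per second")  # 12.0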
