Frame time differences between NVidia and AMD?

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
lizardpeter
Posts: 208
Joined: 01 Dec 2020, 14:41

Re: Frame time differences between NVidia and AMD?

Post by lizardpeter » 06 Dec 2020, 04:04

Alpha wrote:
02 Dec 2020, 07:22

TCP and network optimizations are by far some of the best things a person can do for a competitive edge. I play a lot of professional tournaments, so competing for money makes it a big deal, and it's why I run a commercial-grade network. For other optimizations, I won't sacrifice image quality down to a potato like some will, but all my OS deployments are custom-built by yours truly using Microsoft's framework. They are essentially stripped of everything except the security protocols.

Until we see something more serious, like an AI-based solution that doesn't rely on signature scanning, I live with Defender. Hoping to see some commercial solutions roll out soon. I have my CEH, so I'm delicately paranoid about the risk of leaving those doors open; it's unreal how easy it is to hit systems and own them. I can handle the security at the firewall level, but running deep packet inspection, geo-filtering, and other intrusion-prevention methods costs the microseconds we fight for, and that has cost me big money.

I'm a career IT guy by trade, though this year I have seriously considered retiring due to some dumb luck and gaming (more dumb luck, but I am pretty fast). I piggyback off some of the big brains out in the world, especially here on these boards of all places, because there is a legit collection of awesome people and Chief has masterfully kept the community amazing. I implement whatever changes make sense or that I can tell a difference with by feel, even if I can't quantify it with a test. It could all be complete BS; maybe some light reflecting off something hits the eye just right without even being noticed, helping focus or whatever, and bam, fragging out, lol. No idea here, truth be told.

The issue with AMD GPUs is simply that they won't get the frames (excluding the 6000 series, which may make sense with its rasterization muscle at 1080p). In addition, when you're sponsored, you're sponsored. I wouldn't run with a certain mouse because it wasn't what I prefer; it probably wouldn't have made a difference, but it was no good in my hands. I don't need the income and I'm already established career-wise, but if you're living in a team house and Logitech is bringing G Pro X Superlights, that's what you're fragging with (I just ordered mine while I wait for Razer to drop the 8000 Hz mouse). Chief or someone would have to explain why something feels "ahead," because I clearly don't fully understand it, but on the 5700 system, with easily a 60+ fps difference that's a moving target, it's almost like being ahead of the 2080 Ti face to face (thanks to network tests I know it's not my packets landing first, so it's something else). I am hoping to hear some reasons for this. My theory was that the pipeline was organized in a way that prioritizes inputs. I have literally no idea if that's even possible.

I really want to get my hands on the 6000 series for just a few minutes. The 6900 XT hits in a few days, and if the 3090 is faster that would be my choice, but if the 6900 XT feels like the 5700 I'd go that direction and buy a 3080 Ti as a backup or throw it in my other machine. I'd do this even if the 6900 XT were a bit slower. No issues with the 5700, but it'll take a while to get comfortable with, and confident in, AMD's GPU drivers.

My experience with Boost is not so good. Maybe my brain is fired up at times and less so at others, but with Boost enabled it can hurt or feel okay depending on the moment. I turn it off now and stick to the older recommended NCP settings.
Wow, that’s a lot of information. Thanks. I’ve been trying to target latency as much as possible because I’ve been thinking about playing more competitively; I’ve always been a fairly top-tier COD player.

For the Windows deployments, are there any pre-made ones you could recommend with everything stripped out of them?

What do you mean by “boost”? I saw another one of your comments on the 360 Hz monitor review thread. You were saying your reaction time test was 20 ms slower with Reflex. What do you mean by that? I thought Reflex was something that had to be enabled on a per-game basis in the game’s settings. There’s also ultra low latency mode in the control panel.

I actually have also been looking at all of these optimizations because when I first got my 240 Hz AW2518H, my reaction time tests were around 145 ms and now I struggle to get much under 160 ms. I’m not sure if it is something with Windows or Chrome or just me. I’ve tried using a C++ version running without full screen optimizations (no DWM) and I can easily get in the 130 ms range.
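For anyone curious what the C++ route looks like, a console reaction-time tester can be tiny. The sketch below is just an illustration of the idea (not the exact program I've been using, and the 1.5–4 s delay range is arbitrary); it times from printing a prompt to receiving Enter, so it still includes display and input-device latency.

Code: Select all

// Minimal console reaction-time tester (sketch only). It measures
// prompt-to-keypress time as seen by the program; display lag, keyboard
// lag, and OS scheduling are all still included, so only compare numbers
// taken on the same setup.
#include <chrono>
#include <iostream>
#include <random>
#include <thread>

int main() {
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> wait_ms(1500, 4000); // random delay before the prompt

    for (int trial = 0; trial < 5; ++trial) {
        std::cout << "Wait for it...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(wait_ms(rng)));

        std::cout << "GO! Press Enter." << std::endl;  // endl flushes so the prompt is visible
        auto start = std::chrono::steady_clock::now();

        std::cin.get();                                // blocks until Enter (early presses get buffered)
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();

        std::cout << "Reaction: " << ms << " ms\n";
    }
}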

Do you recommend any of the popular changes to the platform clock and disabledynamictick? Basically, any of the settings you can find in bcdedit /enum. Also, is there any tool you use to see if the changes you’re making are actually doing anything? I’ve used LatencyMon, but I know DPC latency is not directly related to input lag. Also, my DPC latency is usually 7-15 us. I managed to get it down a lot by switching GPU and audio drivers to MSI mode.
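For what it's worth, one crude way I've found to at least verify the platform-clock change took effect is to read the QueryPerformanceCounter frequency: on most systems it reports differently when the QPC source is the TSC versus the HPET/platform timer. That only proves the bcdedit setting is active, not that it helps input lag. Rough sketch, assuming Windows:

Code: Select all

// Prints the QueryPerformanceCounter frequency (Windows only).
// Sanity check for "bcdedit /set useplatformclock": the reported frequency
// usually differs between the TSC source (often 10 MHz on Windows 10) and
// the HPET/platform timer (~14.318 MHz), so you can tell which is active.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq;
    if (!QueryPerformanceFrequency(&freq)) {
        std::printf("QueryPerformanceFrequency failed\n");
        return 1;
    }
    std::printf("QPC frequency: %lld Hz\n", static_cast<long long>(freq.QuadPart));
    // Rough interpretation (varies by system -- treat as a hint, not proof):
    if (freq.QuadPart > 14000000 && freq.QuadPart < 15000000)
        std::printf("Looks like the HPET/platform clock (useplatformclock likely active)\n");
    else
        std::printf("Looks like a TSC-backed source\n");
    return 0;
}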
i9 9900k | RTX 2080 Ti | 32 GB 4x8GB B-Die 3600 MT/s CL16 | XV252QF 390 Hz 1080p | AW2518H 240 Hz 1080p | PG279Q 144 Hz 1440p

Razer Viper 8K | Artisan Zero Mid XL | Apex Pro TKL | 1 gbps FiOS (Fiber)

Hotdog Man
Posts: 6
Joined: 01 Dec 2020, 13:35

Re: Frame time differences between NVidia and AMD?

Post by Hotdog Man » 09 Dec 2020, 21:14

For input-lag-minded people wondering whether they should go Team Green or Team Red this gen, here is an interesting article on the Ampere architecture vs. the RDNA 2 architecture:

https://www.techspot.com/article/2151-n ... amd-rdna2/
[EDIT by Moderator: Linkified your link.]

In short, RDNA 2's centrally located cache/controllers and its Infinity Cache (on-die SRAM will always provide lower latencies) lend themselves to better latencies than Ampere. Nvidia has to dedicate die space to AI acceleration, professional graphics work, etc., whereas AMD's chip is built mostly for gaming. Whether that translates into a real-world latency effect for competitive gaming is a mystery. I'm also not sure which GPU has a rendering pipeline better optimized to deliver frames ASAP.

User avatar
Chief Blur Buster
Site Admin
Posts: 11721
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Frame time differences between NVidia and AMD?

Post by Chief Blur Buster » 09 Dec 2020, 23:20

P.S. Linkified your link. Sorry about the antispam bot blocking your link; it won't bother you once you have made several posts. It's unfortunate that the antispam settings now have to block new users' URLs to save the workload of manually deleting up to 100 spam posts a day.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

Hotdog Man
Posts: 6
Joined: 01 Dec 2020, 13:35

Re: Frame time differences between NVidia and AMD?

Post by Hotdog Man » 10 Dec 2020, 21:26

Chief Blur Buster wrote:
09 Dec 2020, 23:20
P.S. Linkified your link. Sorry about the antispam bot blocking your link; it won't bother you once you have made several posts. It's unfortunate that the antispam settings now have to block new users' URLs to save the workload of manually deleting up to 100 spam posts a day.
Thanks, Chief. I don't mean to nag, so I'll only bring it up once more: could you please elaborate on what you meant in your comment, "It's true AMD performs better than NVIDIA in some metrics important to certain people/industries"?

That point in particular piqued my interest, and the omission of detail afterwards has made me very curious. I would appreciate it greatly.

deama
Posts: 370
Joined: 07 Aug 2019, 12:00

Re: Frame time differences between NVidia and AMD?

Post by deama » 13 Dec 2020, 11:46

schizobeyondpills wrote:
24 Nov 2020, 18:36
Meowchan wrote:
23 Nov 2020, 04:24
schizobeyondpills, all that is well and good. But at the end of the day, if the difference between AMD and NVidia cannot be measured in milliseconds, then I don't see why I should care about it for gaming, latency-wise. Can it be shown or can't it? The last thing I want is another Roach situation, with discussions turning into feelorycraft and bedtime stories.

Nobody's paying you anything. Share, or don't. So far I remain unconvinced.
Evolution is not a religion; it doesn't need believers.

Did I ever say it can't be shown? Absolutely it can. You just need properly accurate and precise tools, as well as knowing where and how to apply them, like that GitHub tool above for comparing different rasterization and tiling modes between GPUs.

What all these reviewers don't understand, or choose not to because of their mass-appeal revenue goals, is how to do an in-depth, accurate benchmark rather than a click-to-photon joke of a single pixel. Last time I checked, video games are a continuous stream of frames onto a display and don't require clicking to have a frame updated. But every one of these reviewers keeps circling back to click-to-photon, and then once they see a small difference, say 0.05 ms, they call it an error. But you see, if you take 240 fps and multiply by 0.05 ms, you get a 12 ms per second difference. These are made-up numbers, but it still shows how nobody is testing actual frame latency; they fail both at doing a proper test and at how to reason about it. There's also consistency, skew, heat problems, testing under real-life scenarios rather than on a local server with no network load or low rendering load, etc.

I chose not to reveal it. Simple as that. Why would I give things out for free when no one appreciates what I have to say?

There are DisplayPort and HDMI analyzers, oscilloscopes, frame logging, the OS, etc. If you aren't paying me at least six figures, feel free to set all of it up yourself. I don't need mass appeal for ad revenue to nerf science and falsify untuned bench environments with invalid results that favor both sides of A vs. B for affiliate sales.

It's an insult to mathematics itself when I look at these benchmarks that use per-second values. CPUs operate at sub-nanosecond cycle times, meaning 5,000,000,000 cycles every second gets compressed into just "5 GHz".

People have been brainwashed into a wrong perception of numbers by marketing companies. A rock-stable 100.0000 "fps" is far, far smoother and more responsive than an unstable 400 fps that averages 400.291, for example. When you reason about time, its measures are different from those of space: 10.5 ms should UNDER NO CIRCUMSTANCES EVER be interpreted the same way as 10.5 m (meters). The 0.5 ms is an interpretation of jitter, while the 10 ms is an interpretation of frequency/latency.

So how do you expect to see anything about your system, with MULTIPLE SUB-NANOSECOND COMPONENTS, when it is "benchmarked" in milliseconds or per-second values, stripping away nine digits of the measured value? And no, those digits all have huge meaning and importance, because the perspective and context is the CPU, not your tick-tock mind and its perception of time.

If one pixel on AMD takes (for example) 10.0008 ms and on Nvidia it takes 10.0009851 ms, then what will rounded milliseconds show you? Oh wait, multiply that "small" difference by the number of pixels, by the number of frames per second, and by the speed of the raster line, and... oh my.

Anyone can do click-to-photon tests for under $100. Do they have any value? Sure, if you measure 60 Hz vs. 144 Hz, where the gap is so huge even a blind man can tell the difference. For decimals or smaller margins you need 10x to 100x the resolution, and more.
But there's a guy who did some tests on Nvidia RTX cards vs. AMD's 5700 XT and found that the difference is negligible:
[embedded phpBB video]

User avatar
schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Frame time differences between NVidia and AMD?

Post by schizobeyondpills » 17 Dec 2020, 22:34

1. Perception of time is continuous motion, not an action inside time, i.e. a click. That means the 0.03 ms difference per frame that he measured on his nowhere-near-accurate-or-precise setup, and then labeled as margin of error, is what tells the actual story. Are you only perceiving your game when you click the mouse, or is it a constantly ongoing experience? So why are all these "latency" gamers reasoning about it per frame rather than per continuous motion?

So if at 144 Hz it's ~7 ms per frame and he measured 7.05 ms, it's called "margin of error" and the flawed results are preached to the masses. Now take a one-second span, which is 144 × 7.05 ms = 1015.2 ms; that's a TIME DRIFT of 15.2 ms per second from a mere 0.05 ms. Now multiply that drift by every second of every minute of your gameplay and see where you end up.
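To spell out the arithmetic (and where the 15.2 ms actually comes from: 1000 ms / 144 ≈ 6.94 ms, so about 8 ms of it is just the rounding of 6.94 up to 7, while 144 × 0.05 ms ≈ 7.2 ms is the measured difference itself), here is a trivial snippet that reproduces the numbers:

Code: Select all

// Reproduces the quoted per-second arithmetic and splits it into its parts.
#include <cstdio>

int main() {
    const double hz            = 144.0;
    const double true_frame_ms = 1000.0 / hz;   // ~6.944 ms
    const double rounded_ms    = 7.0;           // the "~7 ms" in the post
    const double measured_ms   = 7.05;          // the measured value in the post

    std::printf("144 x 7.05 ms            = %.1f ms per 144 frames\n", hz * measured_ms);
    std::printf("  from rounding 6.94->7  = %.1f ms\n", hz * (rounded_ms - true_frame_ms));
    std::printf("  from the extra 0.05 ms = %.1f ms\n", hz * (measured_ms - rounded_ms));
    return 0;
}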


All of these YouTubers lack the technical knowledge, the equipment, and the awareness of time and how to reason about it that is needed to measure multi-gigahertz devices running on ~0.2 ns clocks.

The reason is that they are trying to make a video that appeals to an audience rather than doing proper research, which gives them mass marketing (like you just did by citing a random video as evidence about a simulation of reality), video views, and subscribers. Nothing wrong with that if done right, but when push comes to shove they have to make a choice: either appeal to the masses by casualizing and stripping away information (easy) or educate them properly (hard). So they end up with the simplest possible test they can put on video, because nobody cares about even the first digit behind the decimal separator, so screw it, it's just an integer.

Also, his video doesn't include a full list of all the settings: OS, BIOS, game, hardware temps, ambient temps, etc. (Did I mention driver loading order, or KASLR/ASLR, which randomizes the virtual addresses where the OS, drivers, the game, and every single library of every single process get loaded? Did he disable those? Can you verify?)

Also, there's nothing more idiotic (I'm sorry, but someone had to say it out loud) than trying to measure a system's output frame latency and then walling a monitor into that measurement pipeline, when it's common knowledge that even a small 0.5 °C temperature difference affects the response times of every display. So if you want actual reviews and latency measurements, get a display protocol analyzer and measure at the CPU/main system where the signal goes out, at the GPU, not at the display per click.

Any click-to-photon or latency-measurement benchmark in general is useless if:
1. It doesn't provide the 2000+ system variables (full BIOS, full OS, full kernel info dump of drivers, apps, and libraries loaded and running) and environment variables (power, ambient air, humidity, temps).
2. It doesn't measure to at least 4 decimal places at the resolution of a single action (a frame is single-digit milliseconds, so you need x.yyyy ms resolution); a minimal logging sketch follows below.
3. You don't have very good accuracy and precision (not the same thing) for #2.
4. You don't have consistency in your tests across the board.
5. You don't reason about your measurements correctly (a 0.1 ms per-frame latency difference is nothing on its own, but summed over the frames in every second it is huge).
Despite looking like plastic black boxes, these devices are built at nanometer scale with billions of transistors and VARIABLES that change minute to minute from usage, ambient/environment, and other factors. So stop trying to nerf them into black boxes, nerf your egos instead, and learn how the hardware actually works so you can measure the truth, not strip away decimals as "margin of error" for views, for clicks, or to reinforce your own beliefs.
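On point 2, the logging side is the easy part; std::chrono::steady_clock is typically nanosecond-granular, so getting x.yyyy ms numbers out of software costs nothing. Everything else on the list (accuracy, environment, measuring photons instead of Present() calls) is the hard part. A minimal sketch of just the logging:

Code: Select all

// Logs per-frame intervals with sub-microsecond timestamp resolution.
// This only measures CPU-side frame pacing (Present-to-Present style timing);
// it says nothing about photon latency at the display.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> frame_ms;
    frame_ms.reserve(1000);

    auto prev = clock::now();
    for (int frame = 0; frame < 1000; ++frame) {
        // Stand-in for "render one frame": here just a ~6.94 ms sleep (144 Hz pace).
        std::this_thread::sleep_for(std::chrono::microseconds(6944));

        auto now = clock::now();
        std::chrono::duration<double, std::milli> dt = now - prev;
        frame_ms.push_back(dt.count());
        prev = now;
    }

    double sum = 0.0, worst = 0.0;
    for (double ms : frame_ms) { sum += ms; if (ms > worst) worst = ms; }
    std::printf("avg frame time: %.4f ms, worst: %.4f ms over %zu frames\n",
                sum / frame_ms.size(), worst, frame_ms.size());
    return 0;
}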

Oh yeah, one more thing.

A SPACE MEASURE IS NOT THE SAME AS A MEASURE OF FREQUENCY. TIME CONTROLS THE THREE SPATIAL DIMENSIONS. SPACE MEASURES ARE STATIC WHILE TIME MEASURES ARE CONTINUOUS.
wtf?
Reality perception 101 for a space/time value of 10 (m or ms) with 0.5 (m or ms) of noise:

Space measure: 10.5 meters. The 0.5 m here is just extra space.
Time measure: a 10 ms interval (the frequency) with 0.5 ms of jitter carried forward in time toward "infinity". Two ticks at 10.5 ms already delay a true 10 ms cadence by 1 ms, in JUST TWO TICKS.

It's continuous motion until "infinity".


I could write two bibles about time and the perception of it with respect to latency, but what's the point if this species substitutes ignorance for knowledge and thinks the two have the same value?

User avatar
MaxTendency
Posts: 59
Joined: 22 Jun 2020, 01:47

Re: Frame time differences between NVidia and AMD?

Post by MaxTendency » 18 Dec 2020, 00:26

I'll just leave this here.

[Image: latency test results]
(Source)

Note: fps was locked at 240 (same as monitor hz) in all the tests.

User avatar
schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Frame time differences between NVidia and AMD?

Post by schizobeyondpills » 18 Dec 2020, 01:52

MaxTendency wrote:
18 Dec 2020, 00:26
www.twitter.com/josefspjut/status/1323611501182226433

Josef @josefspjut
·
Nov 3
This is an excellent video. While 200 samples sounds like a lot, I highly recommend going to at least 2000 to get a better distribution of latencies. Up to 360 Hz the frame rate and refresh rate dominate the distribution as you can see if you look closely at the results.

Josef
@josefspjut
·
Nov 3
Early on when I took 1000-2000 samples (automated clicking), I still found significant sample correlation from the automation bias. If a person clicks individually the 1000-2000 times, then you might get good enough results, but at 20-200 clicks, there's still too much error IMO.

Computers research at NVIDIA.


Please stop chasing nanoseconds with an understanding that operates in seconds. Calypto's measurements are fine for a few milliseconds of accuracy, definitely not for sub-ms. Chasing the raster line of hsync, and measuring anything past a few ms of resolution with a monitor, is a joke. (No, that doesn't mean the small sub-ms changes are meaningless; those changes vary for every single pixel of every frame, as continuous motion.) You can ask him why those 5700 XT results are meaningless; it's related to his overclocking at the time.

You know how, when you're in-game at 240 fps, your temps rise and then every trace of copper slows down? And how your monitor heats up both from rising ambient temps AND from actually displaying those frames? That's why you take 2000 samples and make sure they are valid (click-to-photon is still an insult to mathematics and electrical engineering even at 20k samples). Why can't latency chasers get past the monitor and look deeper? Is it fear or arrogance? Or both?
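To illustrate the sample-count part with made-up numbers: if the latency in a click-to-photon test were, say, a 15 ms base plus a roughly uniform 0–6.94 ms of scanout/sampling noise, then a mean estimated from 20 clicks wanders by several tenths of a millisecond from run to run, while 2000 samples pin it down about ten times tighter. A toy simulation (my own invented distribution, not anyone's measurements):

Code: Select all

// Toy illustration of sample-size effects on click-to-photon style measurements.
// Model (made up): latency = 15 ms base + uniform(0, 6.94 ms) scanout/sampling noise.
#include <cstdio>
#include <random>

static double estimate_mean(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> noise(0.0, 6.94);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += 15.0 + noise(rng);
    return sum / n;
}

int main() {
    std::mt19937 rng(12345);
    std::printf("run   mean(n=20)   mean(n=2000)\n");
    for (int run = 0; run < 5; ++run) {
        std::printf("%3d   %8.3f ms   %10.3f ms\n",
                    run, estimate_mean(20, rng), estimate_mean(2000, rng));
    }
    // The n=20 column bounces around by several tenths of a ms run to run,
    // while the n=2000 column stays close to the true mean (~18.47 ms).
    return 0;
}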


For things in motion, the importance of the decimal digits scales exponentially in value, the opposite of standing-still (spatial) measures of space, because continuous motion adds up and skews the clock of the frequency.

User avatar
Chief Blur Buster
Site Admin
Posts: 11721
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Frame time differences between NVidia and AMD?

Post by Chief Blur Buster » 18 Dec 2020, 02:43

@josefspjut is definitely correct; I've known this for years. Especially for VSYNC OFF frame rates, where latency varies a lot from sample to sample because of the time differential between the scanout position on screen and the fixed location of the measured pixel (not all pixels have the same lag).
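A back-of-envelope way to see the size of that effect (a simplified model, not a measurement): assume VSYNC OFF frame presents land at effectively random raster positions, and the photodiode's pixel only changes when the scanout next sweeps past its scanline. The waiting time at a fixed sensor spot then spans almost a full refresh period from sample to sample:

Code: Select all

// Toy model of the scanout component of VSYNC OFF click-to-photon lag.
// Assumption: frame presents land at a uniformly random raster position, and the
// measured pixel only changes when the raster next sweeps past its scanline.
#include <cstdio>
#include <random>

int main() {
    const double refresh_ms = 1000.0 / 240.0;  // 240 Hz panel => ~4.17 ms per scanout pass
    const double sensor_y   = 0.5;             // photodiode at mid-screen (fraction of height)

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> raster_pos(0.0, 1.0);

    double min_ms = 1e9, max_ms = 0.0, sum_ms = 0.0;
    const int samples = 10000;
    for (int i = 0; i < samples; ++i) {
        double r = raster_pos(rng);            // where scanout is when the new frame splices in
        double frac = sensor_y - r;            // how far the raster still has to travel
        if (frac < 0.0) frac += 1.0;           // wrap around to the next refresh pass
        double lag = frac * refresh_ms;        // scanout wait for this sample
        if (lag < min_ms) min_ms = lag;
        if (lag > max_ms) max_ms = lag;
        sum_ms += lag;
    }
    std::printf("scanout wait at mid-screen: min %.2f ms, max %.2f ms, mean %.2f ms\n",
                min_ms, max_ms, sum_ms / samples); // spread spans nearly a full refresh
    return 0;
}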

If you're using fully backpressured VSYNC ON, then latency can be really consistent from sample to sample, but driver issues, frame-release jitter, power-management shenanigans, and tons of other factors can still cause VSYNC ON latency to vary behind the scenes, especially since frame release (the return from a Present() call) can jitter the next input read relative to the next VSYNC.
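Sketching the loop shape I mean (the stub functions below are placeholders, not any real engine's API): under backpressured VSYNC ON, present() blocks until the driver frees a queue slot, so whenever its return time jitters, the timing of the next read_input() relative to the next VSYNC jitters with it.

Code: Select all

// Shape of a backpressured VSYNC ON game loop (all functions are stubs/placeholders).
// The blocking present() is where the driver and power management decide when the
// next iteration starts, which shifts when read_input() runs relative to the next VSYNC.
#include <chrono>
#include <cstdio>

struct InputState { /* buttons, mouse deltas, ... */ };

InputState read_input() { return {}; }  // stub: would poll the input device
void simulate(const InputState&) {}     // stub: would advance the game state
void render() {}                        // stub: would issue draw calls
void present() {}                       // stub: would swap buffers; blocks when the frame queue is full

int main() {
    auto prev = std::chrono::steady_clock::now();
    for (int frame = 0; frame < 3; ++frame) { // a real loop runs until the game exits
        InputState in = read_input();          // sampled right after the previous present() returned
        simulate(in);
        render();
        present();                             // returns whenever the driver releases a queue slot

        auto now = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> dt = now - prev;
        std::printf("frame interval: %.3f ms\n", dt.count()); // jitter here = input-read jitter
        prev = now;
    }
}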

Also, indeed, many reviewers show more decimal digits than their lag-measurement method is accurate to; take the SMTT method, for example, with decimal digits shown after the milliseconds.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Post Reply