Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Everything about latency. Tips, testing methods, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
nursejoy
Posts: 22
Joined: 02 Jun 2020, 17:01

Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by nursejoy » 05 Jun 2020, 01:11

I've seen all over the forums and in TechYesCity's comparison that Ryzen CPUs have more input lag than Intel CPUs. Is it true, is there data, and lastly is there evidence as to why this happens? I know lag automatically drops when games run at higher framerates, but in TechYesCity's old comparison an older Intel even beat the Ryzen in terms of lag.

goa604
Posts: 16
Joined: 15 Aug 2019, 16:32

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by goa604 » 08 Jun 2020, 07:21

Yes, you can easily see it using the LatencyMon program.
Intel systems usually get around 40 µs and the newest Ryzens run at around 80 µs. Those numbers are from memory, but I believe they are accurate.
If I don't forget, I can post a few sources when I'm back home on my PC.
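For anyone who wants a rough feel for this kind of number without installing anything, here is a minimal user-mode sketch (assuming Windows and a C++ compiler) that spins on the high-resolution timer and records the worst gap between consecutive reads. It is not what LatencyMon measures (LatencyMon uses kernel event tracing to attribute time to specific drivers); long gaps in this probe only hint that an interrupt, DPC, SMI, or preemption stalled the thread. The 5-second duration and the core/priority choices are arbitrary illustration values.

```cpp
// Crude scheduling/interrupt jitter probe, NOT a LatencyMon replacement:
// spin on QueryPerformanceCounter and record the largest gap between reads.
#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, prev, now;
    QueryPerformanceFrequency(&freq);

    // Pin to one core and raise priority so most long gaps come from
    // interrupts/DPCs/SMIs rather than ordinary time-slicing.
    SetThreadAffinityMask(GetCurrentThread(), 1);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    QueryPerformanceCounter(&prev);
    LARGE_INTEGER start = prev;
    long long worstTicks = 0;
    const long long spinTicks = 5 * freq.QuadPart;  // spin for ~5 seconds

    while (true) {
        QueryPerformanceCounter(&now);
        long long gap = now.QuadPart - prev.QuadPart;
        if (gap > worstTicks) worstTicks = gap;
        prev = now;
        if (now.QuadPart - start.QuadPart > spinTicks) break;
    }

    double worstUs = 1e6 * worstTicks / freq.QuadPart;
    printf("worst gap between counter reads: %.1f us\n", worstUs);
    return 0;
}
```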

senny22
Posts: 94
Joined: 03 May 2019, 17:40

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by senny22 » 08 Jun 2020, 08:27

goa604 wrote:
08 Jun 2020, 07:21
Yes, you can easily see it using the LatencyMon program.
Intel systems usually get around 40 µs and the newest Ryzens run at around 80 µs. Those numbers are from memory, but I believe they are accurate.
If I don't forget, I can post a few sources when I'm back home on my PC.
Since the difference is a matter of microseconds (µs), am I right to assume that it won't matter in terms of performance when gaming?

User avatar
dervu
Posts: 249
Joined: 17 Apr 2020, 18:09

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by dervu » 08 Jun 2020, 08:56

I managed to get my Ryzen 2600 down to 20-40 µs in LatencyMon. Would an Intel get 20 µs under the same conditions?
Ryzen 7950X3D / MSI GeForce RTX 4090 Gaming X Trio / ASUS TUF GAMING X670E-PLUS / 2x16GB DDR5@6000 G.Skill Trident Z5 RGB / ASUS ROG SWIFT PG279QM / Logitech G PRO X SUPERLIGHT / SkyPAD Glass 3.0 / Wooting 60HE / DT 700 PRO X || EMI Input lag issue survivor

1000WATT
Posts: 391
Joined: 22 Jul 2018, 05:44

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by 1000WATT » 08 Jun 2020, 10:16

dervu wrote:
08 Jun 2020, 08:56
I managed to get my Ryzen 2600 down to 20-40 µs in LatencyMon. Would an Intel get 20 µs under the same conditions?
20 µs is a good result. And yes, an Intel would.
I often do not clearly state my thoughts. Google Translate is far from perfect, and in addition to the translator, I myself make mistakes. Do not take me seriously.

User avatar
schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by schizobeyondpills » 08 Jun 2020, 10:55

senny22 wrote:
08 Jun 2020, 08:27
goa604 wrote:
08 Jun 2020, 07:21
Yes, you can easily see it using the LatencyMon program.
Intel systems usually get around 40 µs and the newest Ryzens run at around 80 µs. Those numbers are from memory, but I believe they are accurate.
If I don't forget, I can post a few sources when I'm back home on my PC.
Since the difference is a matter of microseconds (µs), am I right to assume that it won't matter in terms of performance when gaming?
Your CPU works in nanoseconds: billions of clock cycles per second, per core, so 5 GHz = 5 billion cycles a second. Your RAM usually has 35-55 ns latency (on Intel). Those nanoseconds are so important to the CPU that billions of dollars are invested into reducing that latency, which is evident from CPUs having not one, not two, but three levels of caching just to hide it. See how important it is? Since your CPU works in nanoseconds, microseconds to it are what minutes are to humans. Those microseconds matter far, far more than FPS.

And yes, AMD Ryzen has a lot higher input lag and a lot worse latency overall, both in RAM latency and in core-to-core latency. If you want a rough way to gauge how much nanosecond latency matters, think of 1 ns of RAM or core-to-core latency as roughly 10 FPS worth of responsiveness/input lag reduction.

[image: RAM latency screenshot]


That's default RAM; you can tighten the timings a lot and easily hit 37 ns, or even 35 ns, on Intel. That isn't possible on Ryzen, which is why it suffers for anything requiring user input/interaction.
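For scale, here is the same arithmetic spelled out as a tiny C++ program. The 5 GHz clock and the 40/70 ns and 40/80 µs figures are simply the numbers quoted earlier in this thread, not measurements, and the program makes no claim about the "1 ns ~ 10 FPS" rule of thumb; it just converts the quoted latencies into clock cycles so you can compare them to a 60 FPS frame time.

```cpp
// Back-of-the-envelope conversion of the latencies discussed above into
// CPU clock cycles. Pure arithmetic; inputs are the figures quoted in
// this thread, not measurements.
#include <cstdio>

int main() {
    const double clockHz = 5.0e9;              // 5 GHz core, as quoted above
    const double nsPerCycle = 1e9 / clockHz;   // 0.2 ns per cycle

    auto cycles = [&](double seconds) { return seconds * clockHz; };

    printf("one cycle           : %.2f ns\n", nsPerCycle);
    printf("40 ns RAM latency   : %.0f cycles\n", cycles(40e-9));
    printf("70 ns RAM latency   : %.0f cycles\n", cycles(70e-9));
    printf("40 us DPC latency   : %.0f cycles\n", cycles(40e-6));
    printf("80 us DPC latency   : %.0f cycles\n", cycles(80e-6));
    printf("16.7 ms frame (60 FPS): %.0f cycles\n", cycles(16.7e-3));
    return 0;
}
```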

ashrr
Posts: 50
Joined: 21 Jun 2019, 10:12

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by ashrr » 08 Jun 2020, 10:58

senny22 wrote:
08 Jun 2020, 08:27
goa604 wrote:
08 Jun 2020, 07:21
Yes, you can easily see it using the LatencyMon program.
Intel systems usually get around 40 µs and the newest Ryzens run at around 80 µs. Those numbers are from memory, but I believe they are accurate.
If I don't forget, I can post a few sources when I'm back home on my PC.
Since the difference is a matter of microseconds (µs), am I right to assume that it won't matter in terms of performance when gaming?
Yeah, an increase of 40 µs or whatever wouldn't make a difference. The problem is probably somewhere else.
[embedded video]

User avatar
schizobeyondpills
Posts: 103
Joined: 06 Jun 2020, 04:00

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by schizobeyondpills » 08 Jun 2020, 11:13

ashrr wrote:
08 Jun 2020, 10:58
senny22 wrote:
08 Jun 2020, 08:27
goa604 wrote:
08 Jun 2020, 07:21
Yes, you can easily see it using the LatencyMon program.
Intel systems usually get around 40 µs and the newest Ryzens run at around 80 µs. Those numbers are from memory, but I believe they are accurate.
If I don't forget, I can post a few sources when I'm back home on my PC.
Since the difference is a matter of microseconds (µs), am I right to assume that it won't matter in terms of performance when gaming?
Yeah, an increase of 40 µs or whatever wouldn't make a difference. The problem is probably somewhere else.


Do you know how much that is relative to the CPU clock?

[image]

Yeah, an increase of 40 µs or whatever wouldn't make a difference.
40 µs is 200,000 CPU clock periods and that doesn't make a difference? Scaled to a human having one thought per second, that's 200,000 seconds, i.e. 3,333 minutes or about 55 hours. Now imagine one of your thoughts being delayed by 55 hours. Still doesn't make a difference, right? ABSOLUTELY NO DIFFERENCE WHATSOEVER, IT'S MINUSCULE, RIGHT?

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by Chief Blur Buster » 08 Jun 2020, 12:41

Nanoseconds on their own certainly don't make a meaningful difference.

However, alas, there are many Rube Goldberg mechanisms where nanoseconds can theoretically build into milliseconds. (Example: the Meltdown/Spectre fixes only add nanoseconds per call, but for some applications the modified code is executed millions of times, building those nanoseconds into meaningful milliseconds.)

I don't think this is one of them, but much of the worry is mainly about the general "overall CPU thread performance over a whole humanscale time period" type of thing. Historically, Threadripper cores have often been somewhat slower than Intel cores on a per-core basis, slowing frame rates down in CPU-thread-limited games. You get more total performance for the money, though (total threads plus performance per thread, combined). The problem is that many games are single threaded, so it's important not to be distracted by wild goose chases and red herrings; single threaded games are often why fewer-core-but-faster-thread CPUs outperform massively-cored CPUs with slower threads.

Things are gradually changing with newer multithreaded games, but sometimes the whole picture is simply brute thread performance, and we're distracted by molehills when the mountain is the per-thread performance (and how well that thread performance holds up when other cores light up too)...

That said, there are many complex factors. Critical loops that are executed millions or billions of times per second are the kind of situations where nanoseconds build up into more meaningful milliseconds. A single input read delayed by a few nanoseconds definitely won't matter, but a critical loop executed millions of times before an input read (like the operations of a graphics render) can actually make nanoseconds affect input lag (e.g. a driver with an unoptimized security patch that slows frame rate down noticeably on a particular chip). Little things like that can matter; the TL;DR is "The devil is in the details".
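To make the "nanoseconds build into milliseconds" point concrete, here is a toy calculation (a hypothetical 5 ns per-call overhead and made-up call counts) showing how the same cost is invisible for a single input read but significant inside a loop that runs millions of times per frame:

```cpp
// Toy arithmetic only: fixed per-call overhead times calls per frame.
#include <cstdio>

int main() {
    const double overheadNs = 5.0;  // hypothetical extra cost per call
    const long long callsPerFrame[] = {1, 1000, 1000000, 10000000};

    for (long long calls : callsPerFrame) {
        double addedMsPerFrame = overheadNs * calls / 1e6;
        printf("%10lld calls/frame -> +%.4f ms per frame\n",
               calls, addedMsPerFrame);
    }
    // One delayed input read (1 call) is invisible; the same 5 ns overhead
    // inside a loop executed 10 million times per frame adds 50 ms.
    return 0;
}
```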

A good fast Threadripper will still tend to outperform a lesser-performing model of an Intel chip, so definitely don't dismiss Threadrippers. There are underlying problems, however, as diverse as budget-system skimping (it seems more common with Threadrippers than Intel, with cheaper system components attracting cheaper chips) as well as the availability of some problematic budget motherboards (which also affects Intel to an extent), creating other weak links. Now, there can be underlying issues that make specific Threadripper systems perform badly, and it's all hard to diagnose (including problems big enough to light up LatencyMon like a Christmas tree). But mind you, a computer has many simultaneous problems it must fight against; even mediocre USB drivers and mediocre chipset drivers can end up being the bigger cause of lag, amid the distraction of nanoseconds versus microseconds versus milliseconds all happening simultaneously. Even though nanoseconds can build up to milliseconds in certain cases, it is all too easy to blame the wrong latency causes. The Red Herring Index is extremely high, and the Wild Goose Chase Meter is in the red zone.

It might be that it's easier to build a clean-performing Intel system, but I've seen Threadripper systems outperform Intel for competitive gaming as well. There are esports players on prosettings.net who play on Threadrippers, too, and some really decent players use them with no problems.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Calypto
Posts: 17
Joined: 16 Sep 2019, 20:58

Re: Is it true Ryzen has higher input lag than Intel? Is there conclusive data to prove this?

Post by Calypto » 08 Jun 2020, 20:33

We can keep our heads stuck in the sand and pretend that latency doesn't exist, or we can acknowledge the problem and finally begin to address it. As latency becomes more of a problem now than ever (Windows 10, Ryzen, game developers buffering frames to boost FPS instead of properly optimizing their games, bloated Electron garbage "software," etc.), more and more people are waking up to the latency question and are asking why their old, less powerful systems were noticeably more responsive than their brand new ones (quad-core Intel owners upgrading to Ryzen systems being a notable example).

To say that nanoseconds don't matter is ignorant (or even disingenuous, as is the case with AMD fanatics who know full well that Ryzen is heavily bottlenecked by its latency). Again, with the advent of AMD's Zen architecture, even casual gamers are aware of latency affecting performance, as with inter-CCX latency penalties and the benefits of RAM overclocking, both of which are nanosecond-scale effects. By disabling a CCX, Ryzen users can see a very large reduction in average DPC latency. I had a Pinnacle Ridge-based system and was able to decrease my average interrupt-to-DPC latency from 0.7 µs to 0.48 µs, which is about 220 ns shaved off per average DPC simply by disabling a CCX. The gain in smoothness and reduction in input lag was massive. Of course FPS increased as the latency bottleneck was reduced. I no longer have that system, as I upgraded to a 9700K. Even capped at 200 FPS (uncapped FPS exceeded 200 on both systems), the Intel system was vastly more responsive in-game.

[image: DPC latency comparison]

Recently the R3 3300X was released: a single-CCX Matisse CPU. When compared to a multi-CCX CPU like the R3 3100 (both are quad cores), the 3100 is not even in the same league as the 3300X. Again, we begin with nanoseconds and end up with milliseconds felt by the user. This is a perfect example of the butterfly effect.

[image: 3300X vs 3100 benchmark]

Memory latency is also measured in nanoseconds, and it too has a profound effect on the system's latency. Keep in mind these tests were run at high settings (a GPU bottleneck in a CPU benchmark; nice job, reviewers).

[image: memory latency benchmark]
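For readers curious how memory latency numbers like these are typically obtained, below is a minimal pointer-chasing sketch in C++ (buffer size, hop count, and RNG seed are arbitrary illustration values): it walks a randomly shuffled chain larger than the caches so each step is roughly one dependent DRAM access. It is a rough user-mode estimate, not a substitute for AIDA64 or the benchmarks shown in the images.

```cpp
// Minimal pointer-chasing latency sketch: average time per dependent load
// through a ~64 MB randomly permuted chain (bigger than typical L3).
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const size_t n = 64 * 1024 * 1024 / sizeof(size_t);  // ~64 MB of indices
    std::vector<size_t> next(n);

    // Build a random cyclic permutation so the hardware prefetcher can't help.
    std::vector<size_t> order(n);
    std::iota(order.begin(), order.end(), size_t{0});
    std::mt19937_64 rng(42);
    std::shuffle(order.begin(), order.end(), rng);
    for (size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
    next[order[n - 1]] = order[0];

    // Chase the chain; each load depends on the previous one.
    const size_t hops = 20000000;
    size_t p = order[0];
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < hops; ++i) p = next[p];
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / hops;
    // Print p so the compiler can't optimize the chase away.
    printf("~%.1f ns per dependent load (p=%zu)\n", ns, p);
    return 0;
}
```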

Here is a benchmark that shows how memory latency matters in CPU-bound games like Overwatch.

[image: Overwatch memory scaling benchmark]

Another example I really like is the 5775C vs. the 6700K. Despite clocking much lower, the 5775C is capable of beating the 6700K in average FPS as well as percentile lows, thanks to its 128 MB of eDRAM acting as an L4 cache. Imagine if Intel replaced the useless iGPU with eDRAM, or AMD fitted an eDRAM die instead of gluing on a second CCD with even more mediocre cores. I would gladly shell out a few hundred more if it meant better latency.

[image: 5775C vs 6700K benchmark]

Click on the images for sources.
Last edited by Calypto on 04 Jul 2020, 14:36, edited 1 time in total.
