Ryzen vs Intel Input Lag/Latency

Everything about latency. This section is mainly user/consumer discussion. (Peer-reviewed scientific discussion should go in the Laboratory section.) Tips, mouse lag, display lag, game engine lag, network lag, the whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
bbbe
Posts: 3
Joined: 23 Oct 2025, 02:14

Re: Ryzen vs Intel Input Lag/Latency

Post by bbbe » 19 Nov 2025, 19:39

kyube wrote:
19 Nov 2025, 15:10
bbbe wrote:
04 Nov 2025, 10:27
Coming back with updates: just tried a setup with an M.2-to-PCIe riser + VIA VL805. Omg, I can definitely feel the difference even vs the USB 2 CPU port. About 2 ms+ less, I think.
You're free to share an ETW trace of your VIA VL805-based PCIe AIC.
Considering that I own an AIC for every USB xHCI controller released in the past decade (planning on hopefully releasing my testing data by the end of the year), the value you've thrown out is complete nonsense. :)
Clearly you haven't read my earlier messages, where I plainly stated that I haven't measured it precisely yet. So yes, what I'm describing is currently anecdotal. But I stand by what I said. It's the same rhetoric you'd hear 8-10 years ago, when a few internet personas with apparent engineering backgrounds claimed there was no perceivable difference in panels above 60 Hz because "your eyes can't see more than that," and a bunch of snake oil like that. People could say the same about the 400 Hz monitor I'm typing on being diminishing returns over 240 Hz, yet there is a clear, distinct timing difference. Latency comes with the territory, and there are numerous pillars that can affect end-to-end feel and, potentially, measured latency. I can't get into a technical debate with you until I have actual technical measurements.

At the same time, while you're spending time chirping, consider that I'm not the only person claiming to have had stupid, unreal mouse-feel issues on new Ryzen systems out of the box, unlike what you seemingly get on Intel. And hey, listen: whether it's true or not, nobody actually knows where the bulk of it comes from or how to prove or disprove it. It could be as mundane as missing hardware/kernel-level optimizations on AMD's end, to do with... Windows? The x86 instruction set? I/O-ecosystem standards compliance? Even something as simple as the drivers/firmware for these USB cards could matter a lot to how something feels. Or it could be a wider packaging problem in manufacturing in the age where everything on that end is accelerated. It's also a fact that Intel's I/O ecosystem has had better maturity across the whole stack (hardware + software) over the years (Intel was among the first to ship high-performance Optane SSDs that were unmatched in the server space, Intel developed Thunderbolt years before USB4 even appeared, Intel has built its own NICs for decades, and Intel to this day has better memory controllers than Ryzen...). By maturity I mainly mean performance characteristics that have been predictable and consistent for years, not whether something is faster or slower. Can you establish metrics for that and measure it? Of course. Is it realistic for a bunch of people complaining in a thread to suddenly pull those receipts out? Debatable. I'm not claiming to be an expert; I'm someone who has clearly experienced an issue, subjectively and anecdotally, and I still stand by that. Computer systems are large, complex systems, and you can only try to fish a couple of needles out of the haystack of potential pillars behind Ryzen's performance. Hey, what if it's the whole chiplet/TSMC thing? And I don't even know what people (if any) are saying about the new Intel CPUs.

At the end of the day, whatever the real bulk of the Ryzen platform's input issues is, I don't really care. I just want to try every option available instead of giving up, accepting the status quo, and waiting for them or someone adjacent to one day post an analysis. (And no shade to people who just plug the machine in and call it a day; I just happen to be annoyed enough to want to find out why.) It could even be NVIDIA's driver stack and how it fits into the bigger picture with Ryzen. It could literally be anything. But until someone with more expertise actually participates and furthers the conversation, I'll keep trying all the stupid options and keep the ones that do seem to affect something.

Conveniently enough, I am setting up a measuring rig for simple click-to-photon. Will see if I can get results comparing the VL805 vs the USB 3 ports on the mobo (which are coincidentally ASMedia) and the USB 2 ports, which I believe also go through ASMedia (I could be wrong).
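
Once the rig is logging, the comparison script is the easy part. Here's a minimal sketch of what I plan to run over the logs, assuming a hypothetical CSV per port config with click_us,photon_us columns (the filenames are made up). The point is to track stdev and p99, not just the mean, since consistency is the whole argument here:

Code: Select all

# Hypothetical analysis half of the rig: assumes the logger dumps one CSV
# per port config with "click_us,photon_us" rows (both in microseconds).
import csv
import statistics

def summarize(path):
    lat_ms = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Latency = photodiode trigger minus button press, in ms.
            lat_ms.append((int(row["photon_us"]) - int(row["click_us"])) / 1000.0)
    lat_ms.sort()
    return {
        "n": len(lat_ms),
        "mean_ms": round(statistics.mean(lat_ms), 2),
        "median_ms": round(statistics.median(lat_ms), 2),
        "stdev_ms": round(statistics.pstdev(lat_ms), 2),  # the "consistency" part
        "p99_ms": round(lat_ms[int(0.99 * (len(lat_ms) - 1))], 2),
    }

for cfg in ("vl805.csv", "asm_usb3.csv", "asm_usb2.csv"):  # hypothetical files
    print(cfg, summarize(cfg))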

Also, another thing that's nowhere close to being discussed publicly is jitter, and how systems react to acceleration in input change. There are so many behaviors that take so much effort to quantify that it makes no sense for someone to then come around and say something like "I've tested all these microcontrollers and they are all identical." Care to show receipts?
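
To be concrete about what I mean by jitter, here's a toy sketch with made-up timestamps: two event streams with the identical average rate can have wildly different interval spread, and the spread is the part nobody publishes:

Code: Select all

# Sketch of the jitter metric I mean: not average latency, but how unevenly
# events arrive vs. the nominal cadence (timestamps in nanoseconds).
import statistics

def jitter_stats(ts_ns, nominal_hz=1000):
    nominal_ns = 1e9 / nominal_hz
    deltas = [b - a for a, b in zip(ts_ns, ts_ns[1:])]
    return {
        "mean_interval_us": statistics.mean(deltas) / 1000,
        "stdev_us": statistics.pstdev(deltas) / 1000,
        "worst_excursion_us": max(abs(d - nominal_ns) for d in deltas) / 1000,
    }

# Two streams with the same average rate but very different "feel":
steady   = [i * 1_000_000 for i in range(8)]
unsteady = [0, 900_000, 2_200_000, 2_900_000, 4_300_000, 4_800_000, 6_100_000, 7_000_000]
print(jitter_stats(steady))
print(jitter_stats(unsteady))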

P.S.:

As a fun exercise, feel free to search for "Windows Timer Resolution: The Great Rule Change". Sometimes things change; sometimes they have impact in places you don't expect, and sometimes they change and don't really affect anything.
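
If you want to see the timer-resolution thing on your own machine, a rough Windows-only sketch (plain Win32 calls via ctypes, nothing exotic) is to time Sleep(1) before and after requesting 1 ms resolution; on builds after the rule change the request is scoped per process, which is exactly what that article covers:

Code: Select all

# Measure what Sleep(1) actually delivers before and after requesting
# 1 ms timer resolution. Windows-only.
import ctypes, time

winmm = ctypes.windll.winmm
kernel32 = ctypes.windll.kernel32

def avg_sleep1_ms(n=200):
    t0 = time.perf_counter()
    for _ in range(n):
        kernel32.Sleep(1)
    return (time.perf_counter() - t0) / n * 1000

print("default timer:     ", round(avg_sleep1_ms(), 2), "ms per Sleep(1)")
winmm.timeBeginPeriod(1)   # request 1 ms resolution (per-process post-2004)
print("timeBeginPeriod(1):", round(avg_sleep1_ms(), 2), "ms per Sleep(1)")
winmm.timeEndPeriod(1)
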
Last edited by bbbe on 19 Nov 2025, 20:03, edited 8 times in total.

gster
Posts: 6
Joined: 09 Nov 2025, 13:56

Re: Ryzen vs Intel Input Lag/Latency

Post by gster » 20 Nov 2025, 00:42

bbbe wrote:
18 Nov 2025, 01:12
gster wrote:
11 Nov 2025, 13:59
bbbe wrote:
04 Nov 2025, 10:27
Coming back with updates: just tried a setup with an M.2-to-PCIe riser + VIA VL805. Omg, I can definitely feel the difference even vs the USB 2 CPU port. About 2 ms+ less, I think. At the same time it just feels more consistent, idk why. I'm gonna get a photo sensor to really measure it and make sure it's not placebo, but so far I do think there's a real difference. In the compositor I can tell that everything is much tighter, timing-wise just on point. Same goes for games.

Idk how I would measure consistency, though... I think there are even more gains there than in raw latency. It just feels closer to what I had on my Intel systems.

It might really be something to do with a bunch of AMD mobos and their USB implementations, I have no clue. But I'm definitely feeling like I'm getting much more consistent clocking in the chain, or something, and overall lower latency.
Please check out my last post. Was the PCIe riser with the USB card a clear difference?
Hey, no, yeah. For me it does feel much more consistent. I do have the VIA VL805, though, and afaik there are differences between USB chips, with ASMedia specifically not being much of a benefit. You can pick up a VL805 card for pennies on AliExpress.

Other things that might have indirectly helped: tuning the power profile precisely with PowerSettingsExplorer on top of Bitsum's Process Lasso base profile. And of course all the usual things, like making sure the cores don't enter sleep states, minimum processor state maxed out, and so on. The one obscure setting that I think is significant is "Processor performance time check interval". I have it set to the max of 5000. It's basically a tick-based profiling mechanism, and there's a guy on YouTube (search "THIS WILL DRAMATICALLY REDUCE YOUR LATENCY!", can't post YT links here) who found that it burns plenty of kernel time, especially on the performance profiles, which have this frequency increased dramatically compared to balanced.
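
For anyone who'd rather script it than click through PowerSettingsExplorer, this is roughly the same tweak via powercfg (run from an elevated prompt). The setting GUID is the one I believe "Processor performance time check interval" uses; verify it against `powercfg /q` output on your own machine before trusting my sketch:

Code: Select all

# Hedged sketch: push "Processor performance time check interval" to 5000 ms
# via powercfg instead of the GUI.
import subprocess

SUB_PROCESSOR = "54533251-82be-4824-96c1-47b60b740d00"  # processor settings subgroup
TIME_CHECK    = "4d2b0152-7d5c-498b-88e2-34345392a2c5"  # assumed GUID, verify with powercfg /q

def run(*args):
    subprocess.run(["powercfg", *args], check=True)

run("-attributes", SUB_PROCESSOR, TIME_CHECK, "-ATTRIB_HIDE")  # unhide the setting
run("/setacvalueindex", "scheme_current", SUB_PROCESSOR, TIME_CHECK, "5000")
run("/setactive", "scheme_current")  # reapply the active scheme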

Another thing I tried playing with is the MSI utility, and I feel like MSI mode on definitely gave a slightly more consistent feel. I still set my PCIe device to high interrupt priority, though.

Also, in BIOS I did turn off all the dynamic clock garbage, power states and such. Fixed 5.2 GHz clock; I don't care about maxing out the setup, I just need consistency. Also set the 6000 MHz XMP profile from my kit. At first I tried maxing out the timings, but the issue is that if you push VSOC voltage high, the system becomes unstable real quick. I'm not joking. It's better to run the sticks at 1.35 V; I just left VSOC on auto on my board, which spits out 1.272 V, which is f- high, but if I set it lower manually it also gets unstable pretty quickly.

Anyways, yeah, basically from what I can tell the AMD platform is a load of garbage in terms of out-of-the-box stability, in my opinion. And indeed, all these little stupid tweaks do get it to a much better state. At the same time they don't fully eliminate everything. For example, on boot and Windows logon my cursor will still consistently freeze once for about 0.5 s. And to some degree there are still moments of funny jitter.

But also, I think with a lot of tuning I did get the system down to the point where, in Valorant, if I'm missing a pixel aim, it's likely me. I also think there's some truth to the idea that on today's systems, and maybe especially Ryzen, running peripherals above 1000 Hz is basically a performance downgrade. The issue is that USB 2 full-speed packet timing caps polling at 1000 Hz (1 ms frames). So when manufacturers advertise 1 kHz+ polling and you plug the mouse into a port where it enumerates at full speed, it's basically just tricks and not true 1 kHz+ polling. And, something I haven't verified myself yet: even when you plug peripherals into a USB 3 port, it doesn't necessarily mean they'll run in a faster mode. On high-speed and USB 3 links, packet timings can be as short as 125 µs, which fits up to 8000 Hz. But yeah, that's another thing too. So far I've noticed that the mice I run at 1 kHz give a more stable experience overall, and on Intel I think the issue might be less pronounced, but it's not immune either.
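
The 1000 Hz ceiling falls out of how the endpoint's bInterval is interpreted per bus speed, which you can sanity-check with a few lines (my sketch of the USB spec math, not verified against any specific mouse):

Code: Select all

# Max interrupt polling rate from a HID endpoint's bInterval, per bus speed.
def max_poll_hz(b_interval, speed):
    if speed == "full":
        # USB 1.1/2.0 full speed: bInterval is in 1 ms frames -> 1 kHz ceiling
        return 1000 / b_interval
    # USB 2.0 high speed / USB 3: 2**(bInterval-1) microframes of 125 us
    return 8000 / (2 ** (b_interval - 1))

print(max_poll_hz(1, "full"))  # 1000.0 -> the classic 1 kHz cap most mice live under
print(max_poll_hz(1, "high"))  # 8000.0 -> how 4k/8k-capable mice actually do it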

So tl;dr: good luck, this nightmare of a platform will have you pulling your hair out. But yeah, I do think with all these tweaks you can get it to a better place. And frankly, even though the percentile lows in some games are a disaster, the fact that I can have my PC run at a fixed 5 GHz+ clock with fixed fan speeds, saturate 2.5 GbE noticeably faster, and generally push higher fps than 14th-gen Intel is enough for me to stay on this jitter fest. Also, I couldn't quickly get my hands on the best 14th-gen mobo in my area. And gosh, are the mid-tier Intel boards bad. If you can't get to 7200 on Intel, then idk what the point of it is.

Also, some tweaks from here did help (specifically the network & USB ones in Device Manager):

viewtopic.php?t=11799
Thank you so much for sharing this info! Using the PCIe M.2 adapter for the USB hub has been the biggest difference; it is much more consistent now. It makes sense, because the motherboard schematics show that the only way to get direct USB access to the CPU is through the M.2_1 slot on my X870 Tomahawk.

I just tried the PowerSettingsExplorer tip and am testing it now. Need to look into Process Lasso, getting a stable 5.2 GHz clock on this, and those other tweaks you mentioned. Thank you so much again for sharing.

Feel free to add me on Discord (same username as here) and let me know you came from this thread. Would love to talk more with you about this.

gster
Posts: 6
Joined: 09 Nov 2025, 13:56

Re: Ryzen vs Intel Input Lag/Latency

Post by gster » 24 Nov 2025, 13:14

kyube wrote:
19 Nov 2025, 15:10
bbbe wrote:
04 Nov 2025, 10:27
Coming back with updates: just tried a setup with an M.2-to-PCIe riser + VIA VL805. Omg, I can definitely feel the difference even vs the USB 2 CPU port. About 2 ms+ less, I think.
You're free to share an ETW trace of your VIA VL805-based PCIe AIC.
Considering that I own an AIC for every USB xHCI controller released in the past decade (planning on hopefully releasing my testing data by the end of the year), the value you've thrown out is complete nonsense. :)
What do you think is the best PCIe-to-USB card, based on price to performance?

kyube
Posts: 697
Joined: 29 Jan 2018, 12:03

Re: Ryzen vs Intel Input Lag/Latency

Post by kyube » 24 Nov 2025, 16:09

bbbe wrote:
19 Nov 2025, 19:39
It's the same rhetoric you'd hear 8-10 years ago, when a few internet personas with apparent engineering backgrounds claimed there was no perceivable difference in panels above 60 Hz because "your eyes can't see more than that," and a bunch of snake oil like that.
That's absolutely not the same rhetoric...
I was only interested in raw data, not speculation.
bbbe wrote:
19 Nov 2025, 19:39
At the same time, while you're spending time chirping, consider that I'm not the only person claiming to have had stupid, unreal mouse-feel issues on new Ryzen systems out of the box, unlike what you seemingly get on Intel. And hey, listen: whether it's true or not, nobody actually knows where the bulk of it comes from or how to prove or disprove it.
Actually, there are people who know. Most users here like to sell snake oil instead of solving their problems, though.
There's a finite set of 'problems', each of which has a solution. :)
bbbe wrote:
19 Nov 2025, 19:39
It could be as mundane as missing hardware/kernel-level optimizations on AMD's end, to do with... Windows? The x86 instruction set? I/O-ecosystem standards compliance? Even something as simple as the drivers/firmware for these USB cards could matter a lot to how something feels. Or it could be a wider packaging problem in manufacturing in the age where everything on that end is accelerated. It's also a fact that Intel's I/O ecosystem has had better maturity across the whole stack (hardware + software) over the years (Intel was among the first to ship high-performance Optane SSDs that were unmatched in the server space, Intel developed Thunderbolt years before USB4 even appeared, Intel has built its own NICs for decades, and Intel to this day has better memory controllers than Ryzen...).
Intel has a better memory controller than Ryzen? An interesting take... :D
bbbe wrote:
19 Nov 2025, 19:39
Can you possibly establish metrics for that and measure it? Of course.
This is what I've been attempting a few times on these forums in a few other threads... but to no avail sadly :/
bbbe wrote:
19 Nov 2025, 19:39
I'm not claiming to be an expert; I'm someone who has clearly experienced an issue, subjectively and anecdotally. I still stand by that.
I'm not denying that you had an “issue”, I just think that it can be quantifiably measured.
Psychological bias towards a feel also exists. What you find “snappy” may feel “floaty” to someone else.
bbbe wrote:
19 Nov 2025, 19:39
Computer systems are large complex systems and you can only try to fish out a couple of needles in the haystack
I agree, but they're a countable, discrete set of quantifiable problems. Not a continuous set. Not something abstract.
I also disagree that there is an inherent architectural flaw on either side, as concrete evidence is non-existent in that regard.
bbbe wrote:
19 Nov 2025, 19:39
Hey, what if it's all the chiplet tsmc thing?
There are a myriad of larger fish to catch.
CPU architectural differences aren't even close to being the main culprit, if you ask me. :)
bbbe wrote:
19 Nov 2025, 19:39
At the end of the day, whatever the real bulk of the Ryzen platform's input issues is, I don't really care.
I agree, a common user shouldn't care about the majority of these things. :D
bbbe wrote:
19 Nov 2025, 19:39
I just want to try every option available instead of giving up, accepting the status quo, and waiting for them or someone adjacent to one day post an analysis.
I also agree with this, pragmatism is a virtue.
bbbe wrote:
19 Nov 2025, 19:39
(And no shade to people who just plug the machine in and call it a day; I just happen to be annoyed enough to want to find out why.) It could even be NVIDIA's driver stack and how it fits into the bigger picture with Ryzen. It could literally be anything. But until someone with more expertise actually participates and furthers the conversation, I'll keep trying all the stupid options and keep the ones that do seem to affect something.
That's definitely one way to tackle these (abstract-seeming) “problems”, albeit the most time-consuming one.
I see no issue with this approach, though, so long as the person specifically notes that it was a fix for their issue, on their system, which may or may not apply to every possible HW combination.
bbbe wrote:
19 Nov 2025, 19:39
Conveniently enough, I am setting up a measuring rig for simple click-to-photon. Will see if I can get results comparing the VL805 vs the USB 3 ports on the mobo (which are coincidentally ASMedia) and the USB 2 ports, which I believe also go through ASMedia (I could be wrong).
Also, another thing that's nowhere close to being discussed publicly is jitter, and how systems react to acceleration in input change. There are so many behaviors that take so much effort to quantify that it makes no sense for someone to then come around and say something like "I've tested all these microcontrollers and they are all identical." Care to show receipts?
I understand, I might've come across a bit rash & insensitive over text.
I understand where you're coming from.
This might be somewhat of a misunderstanding, as I'm not here to bash you or your methods :p
My goal was to try & quantify your “feel” difference in the form of data, which other users can then look at via the generated .etl file.
It was in no way meant to devalue your experience & your perception.
gster wrote:
24 Nov 2025, 13:14
What do you think the best PCIe to USB is, based on price to performance?
I'm unable to give you a concrete answer to this question, as I haven't had the time to devise a testing methodology for the USB PCIe AICs that I own.
I will try to provide some data by the end of the year, hopefully.
I have ordered an MCS9990-based AIC (a USB EHCI controller) as well, to add into the mix :D
I've also seen reports of Vanguard (Valorant's kernel anti-cheat) disallowing 3rd-party USB PCIe AICs such as ASMedia ones, even with the stock MS driver used...

bbbe
Posts: 3
Joined: 23 Oct 2025, 02:14

Re: Ryzen vs Intel Input Lag/Latency

Post by bbbe » 27 Nov 2025, 16:19

kyube wrote:
24 Nov 2025, 16:09
That's absolutely not the same rhetoric... I was only interested in raw data, not speculation. […]
Omg, I agree with everything you're saying. But I have to say one thing, to everyone in this thread. I just cooked my mobo by experimenting too much, so I had to go back to my Intel laptop. OH MY GOODNESS. I'm getting literal goosebumps. My eyes are rolling up on their own. Mouse feels good. Keyboard feels amazing. All the same setup. I just can't, like. Whatever things we're trying to do here to fix the AMD mess... God d**n does it just work, bruh. Butter smooth, predictable, I feel the mouse moving, so delicate, so precise, I don't even have to work against the computer. My laptop (Intel + NVIDIA) pumps out around 360-400 frames in Valo. The AMD system (AMD + NVIDIA) was doing 800-900. Yes, I miss the extra frames and the slightly lower avg click-to-photon latency, but damn, the input is PERFECT. I mean what I say - PERFECT.

Please, someone explain this. I just can't. I can't. I fire up an Intel system and its input just works as it should. WHY????

Again, I already tested two 9800X3Ds, the PCIe USB card (which did help), Win10, Win11. Same install types on both the AMD build and the Intel. BUT DAMN.

Idk, maybe it could also be NVIDIA interop, I really don't know. Or just the type of driver implementation; I don't even want to care. WHY DOES IT FEEL SO RIGHT, PLEASE EXPLAIN, I'M LOSING MY MIND.

I'm now seriously thinking of just maining Valo on my laptop. It's like... how can I put it. When I aim in Valo on my Intel laptop, I just don't have to put effort into thinking about how to move the mouse to one-tap smoothly. With AMD, hey, I definitely got good with practice and could narrow down the consistency, but sh****, aiming on Intel (+ NVIDIA) is so much more effortless, I just can't. I'm crashing out...

So I'm gonna get a replacement of the same mobo model, but idk, I'm lowkey not even sure I care about the build anymore. Not that I was happy with the Intel PC either, with the mobo I had: couldn't hit 7200 stable on that one, and it was definitely its own room heater.

AMD's gotta do something to their input pipeline, it's just so trash.

Yes, the fps is good; yes, you do feel the snappiness of that cache, in how fast and effortlessly everything pops. How easily AMD handles 2.5 GbE downstream. I love it. But DAMN. THE MOUSE & KB!

Again, I don't know how to describe it. I sometimes catch myself thinking I might just be too used to Intel, but idk, I doubt it. There is something in the latency characteristics, in the timing & pacing.

Say the total latency on the laptop is higher than on the AMD system - idk, I didn't measure it - but say it is. Now take a sample of 1-2 s of frames on a timeline, and for each frame within the sample compute some metric like output pixel displacement vs mouse displacement for that frame. Imagine we had that kind of data.

Now let's plot these hypotheticals, in terms of how they feel to me atm:

Laptop (intel 13th gen + rtx 4070 mobile):

Code: Select all

|-----------|-----------|-----------|-----------|-----------|-----------
-^----------- ^ -----------^-----------^-----------^-----------^--------
System (amd 9800x3d + rtx 5070 ti):

Code: Select all

|------|------|------|------|------|------|------|------|------|------
-^----------^ -------^ ----^----^---------^---^-------^------------^--
(Yes, I intentionally skipped one frame, as I do believe there's something with event fineness/resolution too at times.)

(Keep in mind, I tried all the options: NVIDIA Reflex off/on/ultra, frame limiters on/off... blah blah.)

And how it feels is that mouse inputs on Intel just land more on point. They just do, or they happen at a tighter latency offset per sample, or whatever it is. Now just imagine which of the two plots shown is easier for a brain to predict and adapt to in realtime scenarios.

So again: on the laptop, 400 fps avg, 4 ms render latency in Valo. On the desktop, 800-900 fps, 1 ms render latency. And I would take the former any day with the way the input feels.

Yes, I can kind of tell I'm seeing fewer frames, and maybe the system takes a bit longer to catch up, but d**n, I move my mouse in game or in Windows and it lands where I want it every time, on the right unit of time that makes sense for a human like me. And idk, but this kind of discrepancy is absolutely f****** with my brain.

However, there's a catch here: if I'd never had these two side by side, this downgrade would likely have been much harder to perceive, as one would just take it as a given. But once you feel the difference, it's hard to let go. It's like switching from 60 Hz to 240 Hz+, except arguably worse, since Hz is cycles per second at roughly even intervals, while what I'm describing is more like an average cycles-per-second with a perceivably high jitter coefficient (i.e., high variance from sample window to sample window, or a high max/min).
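
To make the jitter-coefficient idea concrete, here's a toy sketch of the metric I described above: per-frame on-screen displacement divided by mouse displacement, with made-up numbers. Both streams have the same totals, but the coefficient of variation separates "on point" from what I'm feeling:

Code: Select all

# Toy metric: per-frame on-screen displacement / mouse displacement.
# A flat ratio is "on point"; a noisy ratio is the jitter I'm describing.
import statistics

def consistency(mouse_counts, pixels_moved):
    ratios = [p / m for m, p in zip(mouse_counts, pixels_moved) if m]
    # Coefficient of variation: lower = tighter coupling of input to output.
    return statistics.pstdev(ratios) / statistics.mean(ratios)

steady  = consistency([10] * 8, [20, 20, 21, 20, 20, 19, 20, 20])
jittery = consistency([10] * 8, [26, 13, 22, 18, 29, 11, 24, 17])
print(round(steady, 3), round(jittery, 3))  # same totals, very different feel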

It's just indescribable. Someone, please. Get a master's. Get a PhD. Find out what the hell is going on. Claude gives a compelling answer like:
The Windows input stack treats all x86 CPUs identically—the differences emerge from hardware
Windows processes mouse and keyboard input through a standardized kernel pipeline: the HID class driver (hidclass.sys) receives USB interrupts, queues Deferred Procedure Calls (DPCs), and routes input through Win32k.sys to application message queues. Since Windows NT moved to asynchronous per-thread input queues, there's been no CPU-specific "tick-based" synchronous mode. However, the timing precision of this pipeline varies dramatically based on underlying hardware characteristics.
The critical difference lies in how quickly and consistently the CPU can service input interrupts. Intel's 13th-gen processors integrate all I/O controllers on the same die as the CPU cores, meaning a USB interrupt travels through a single clock domain to reach application code. AMD Ryzen processors separate compute (CCDs) from I/O (IOD) across different chiplets connected via Infinity Fabric, introducing mandatory clock-domain crossings that add both latency and—crucially—latency variance.
Meanwhile, I'm saying it: today, Intel (maybe + NVIDIA) >> AMD (maybe + NVIDIA) for mouse + keyboard input feel, period. Even if you can get AMD down to a lower click-to-photon latency. Absolutely superior.

Update:

I also asked Claude to do more comprehensive research, and with AMD you sometimes land on quite comical finds. Unfortunately I'm not allowed to post links, but there's one post Claude managed to pull off archive.org or wherever. It goes like this:
Chipset Driver Bug: USB ports have different polling rates (affects all ryzen systems)

When using standard WinUSB drivers on Windows 10 Ryzen systems, only USB 3.0/3.1/3.2 Gen 1 ports are polled every 8 ms. USB Gen 2 ports (and USB 2.0 ports) are polled at 16 ms, which causes unnecessary issues in certain use cases where a timeout occurs waiting on the new input. This also affects internal USB headers.

This was found using a gamecube controller adapter by Nintendo with standard WinUSB drivers (installed via Zadig). However, I have reproduced the polling rate issue with other hardware.

This occurs across all Ryzen platforms (including laptops).

It can be worked around by using a 3.0/3.1 Gen1/3.2 Gen1 port directly on the motherboard, but not everyone has these available if more hardware needs to be polled at 8ms. Also, laptop users might not have any ports that work properly.

Can the chipset drivers be changed so that the polling rate is consistent (and minimized) across ALL USB ports on Ryzen platforms?
This, bizarrely, could be one of the reasons (maybe?) why the VL805 works well: it's a USB 3.0 (5 Gbps) controller. I would still lean toward VIA (VL805) over ASMedia, though I'd be careful making a claim like that, as I'm not sure whether USB HID events are processed through the chipset when you plug the PCIe USB card into CPU PCIe lanes.
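
If anyone wants to sanity-check the quoted 8 ms vs 16 ms claim on their own ports, a crude option is to timestamp raw mouse-move callbacks and histogram the gaps. Caveats: this goes through a low-level hook (pynput), which adds its own overhead, and a regular HID mouse isn't the WinUSB path the quoted post is about, so treat it as a cadence check only:

Code: Select all

# Crude cadence check: timestamp raw mouse-move callbacks and histogram
# the gaps in whole milliseconds.
import time
from collections import Counter
from pynput import mouse

gaps_ms = Counter()
last = None

def on_move(x, y):
    global last
    now = time.perf_counter_ns()
    if last is not None:
        gaps_ms[round((now - last) / 1e6)] += 1
    last = now

with mouse.Listener(on_move=on_move) as listener:
    time.sleep(5)          # wiggle the mouse continuously for 5 seconds
    listener.stop()

print(gaps_ms.most_common(5))  # dominant gap ~1 ms, ~8 ms, or ~16 ms?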

This was also an interesting part of the report that Claude generated:
## Timer resolution and interrupt handling show measurable AMD disadvantages

Three timing subsystems affect input "feel," and all three behave differently on AMD:

**QueryPerformanceCounter (QPC) latency** varies by platform. On Intel Core i7-6700K, QPC calls complete in approximately **11 nanoseconds**. Testing on AMD Ryzen 7 1700X showed certain configurations requiring **2,491 nanoseconds per call**—220× slower—when Windows falls back to platform timers instead of the CPU's Time Stamp Counter (TSC). While modern Ryzen processors have largely resolved this through invariant TSC support, motherboard BIOS configurations can still trigger fallback behavior.

**HPET frequency differs** between platforms: Intel implementations run at **24.00 MHz** (~41.67ns resolution) while AMD runs at **14.32 MHz** (~69.83ns resolution). This 40% resolution difference affects any application or driver relying on HPET for timing.

**DPC latency measurements consistently favor Intel**. Community benchmarks show Intel systems typically achieving ~40μs average DPC latency versus AMD systems at ~80μs. One overclocker reported that even disabling an entire CCX on a Ryzen 5950X couldn't bring DPC latency down to Intel levels, suggesting the IOD architecture itself creates irreducible overhead.

## AMD's Infinity Fabric architecture explains the "early/late input" phenomenon

The user's description of inputs feeling like they "skip" or happen "early/late" aligns precisely with how AMD's chiplet architecture handles I/O under load. Chips and Cheese's technical analysis revealed that Zen 4 I/O latency can spike from **82ns baseline to 700ns+ under contention**—an 8.5× variance when CPU cores generate heavy memory traffic through the same Infinity Fabric that handles USB data.

The mechanism works like this: USB controller receives mouse movement → IOD processes interrupt → data crosses IFOP (Infinity Fabric On-Package) link to reach CCD → interrupt triggers DPC on CPU core → DPC drains through kernel to application. Each clock-domain crossing adds synchronization overhead, and if the Infinity Fabric is handling memory requests from gaming threads simultaneously, **I/O requests experience queuing delays**.

This explains why higher FPS doesn't fix the problem—frame rate measures GPU output, not input timing consistency. A game running at 300 FPS still receives mouse updates through the same contention-prone I/O path. Intel's ring bus architecture keeps I/O and compute in the same clock domain, eliminating this class of timing variance entirely.
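
For what it's worth, the QPC-cost claim in that report is easy to sanity-check yourself. A rough ctypes sketch follows; the Python call overhead inflates the absolute number, so read it as an upper bound and a relative check between machines rather than the bare QPC cost:

Code: Select all

# Time a burst of raw QueryPerformanceCounter calls via ctypes.
import ctypes, time

qpc = ctypes.windll.kernel32.QueryPerformanceCounter
val = ctypes.c_int64()

N = 1_000_000
t0 = time.perf_counter_ns()
for _ in range(N):
    qpc(ctypes.byref(val))
t1 = time.perf_counter_ns()

# ctypes adds a few hundred ns of its own. A TSC-backed QPC should still
# land far below a platform-timer (HPET/ACPI) fallback on the same box.
print((t1 - t0) / N, "ns per call (incl. Python overhead)")
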
Last edited by bbbe on 28 Nov 2025, 08:11, edited 15 times in total.

bbbe
Posts: 3
Joined: 23 Oct 2025, 02:14

Re: Ryzen vs Intel Input Lag/Latency

Post by bbbe » 28 Nov 2025, 07:15

Okay, now that I'm done crashing out and am willing to think a little.

Having given it some thought, instead of hyperfocusing on click-to-photon comparisons (though I might still do those, just to see the absolute system latencies between my laptop and the PC), I'm going to start thinking of a test that's possible without specialized equipment and that targets displacement accuracy per sample window.

I do have an Arduino Leonardo or whatever it's called, the one that can do HID.

I'm probably going to start with some game like Valo and a sample scene with essentially the same sequence of high-acceleration mouse movements. I'm not sure yet what a decent way of sampling results would be, though, beyond checking that the final landing position is within a certain error margin.

I might write a simple Unity game that samples the camera's vector in 3D space with timestamps, and somehow map that onto the times of the mouse events fired by the microcontroller. Not sure yet of the proper way of getting a reference clock.

The Arduino Leonardo will fire the events and serve as the timestamp source, that's fine. But the Unity game, I assume, will start counting from the first event processed, which is maybe fine: it just means we don't get an accurate initial latency, but we do get the overall trend and the latency deltas. Maybe I'll flash a little rectangle on each processed event that a light sensor can pick up, though idk how accurate that will be unless I cap HID events at 400 Hz max (the max refresh rate of my monitor). Plus I'm not even sure of the light sensor's impulse response yet; I did get good readings for click-to-photon, so I know it can measure at least a 15 ms delta. Maybe even 60 Hz will be enough to test the difference in mouse-to-camera displacement change; I'd say I could perceive the mouse-feel difference even at 144 Hz on AMD.
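
The host-side half of that is simple enough that I already have a sketch, assuming the Leonardo firmware prints one "seq,counts" line per injected move over serial (the port name and line format are my assumptions). Stamp each line on arrival with perf_counter_ns (QPC-backed on Windows), then align the game-side camera log against it by the first event, as described above:

Code: Select all

# Log each synthetic move the Leonardo reports, stamped with the host clock.
import time
import serial  # pyserial

with serial.Serial("COM5", 115200, timeout=1) as port, \
        open("mouse_events.csv", "w") as log:
    log.write("host_ns,seq,counts\n")
    while True:
        line = port.readline().decode(errors="replace").strip()
        if not line:  # read timeout -> the run is over
            break
        log.write(f"{time.perf_counter_ns()},{line}\n")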

gster
Posts: 6
Joined: 09 Nov 2025, 13:56

Re: Ryzen vs Intel Input Lag/Latency

Post by gster » 08 Dec 2025, 00:18

bbbe wrote:
28 Nov 2025, 07:15
Okay, now that I'm done crashing out and am willing to think a little. […]
Like I said as well: I came from my 12900K to this 9800X3D, and the mouse feel is night and day. The 12900K was absolutely flawless, one-to-one, like an artist with a paintbrush in their hand. 100% PRECISION.

gster
Posts: 6
Joined: 09 Nov 2025, 13:56

Re: Ryzen vs Intel Input Lag/Latency

Post by gster » 08 Dec 2025, 00:21

bbbe wrote:
27 Nov 2025, 16:19
Omg, I agree with everything you're saying. But I have to say one thing, to everyone in this thread. I just cooked my mobo by experimenting too much, so I had to go back to my Intel laptop. […]
How did you cook the mobo?? lol

Maelstrom
Posts: 29
Joined: 18 Nov 2019, 14:38

Re: Ryzen vs Intel Input Lag/Latency

Post by Maelstrom » 11 Dec 2025, 16:23

bbbe wrote:
27 Nov 2025, 16:19
kyube wrote:
24 Nov 2025, 16:09
bbbe wrote:
19 Nov 2025, 19:39
It's just the same rhetoric that you would hear just about 8-10y ago how a few seemingly engineering background internet personas would claim how there is no perceivable difference in panels higher than 60hz because your eyes can't see higher than that and a bunch of snake oil like that.
That's absolutely not the same rhetoric...
I was only interested in raw data, not speculation.
bbbe wrote:
19 Nov 2025, 19:39
At the same time, while you are going to spend time chirping consider the fact that I'm not a single person claiming that they seemed to have had stupid and unreal mouse feel issues on new ryzen systems oob, unlike something you seemingly get on intel. And hey, listen. Whether it's true or not, nobody actually knows where the bulk of it is coming from and how to actually prove or disprove that fact.
Actually, there are people that know. Most users here like to sell snake oil instead of solving their problems, though.
There's a finite set of 'problems', each of which has a solution. :)
bbbe wrote:
19 Nov 2025, 19:39
It could be as stupid as lack of optimizations around hw-kernel level on the amd's end to do with .. windows? x86 instruction set? io eco system standards compliance? Even something as stupid as a drivers/firmware for these usb cards could mean a lot in how something can feel. Or it could be a wider packaging problem in manufacturing in the age of everything accelerated on that end. Also it's a fact that intel io eco system overall had better maturity across the whole stack (hw + sw) over these years. (Intel were one of the first to introduce high performance optane ssds that were unmatched in server space, intel developed thunderbolt standard years ago before usb4 just onset, intel's developed their own nics for decades, intel to date has better memory controllers than ryzen...).
Intel has a better memory controller than Ryzen? An interesting take... :D
bbbe wrote:
19 Nov 2025, 19:39
Can you possibly establish metrics for that and measure it? Of course.
This is what I've been attempting a few times on these forums in a few other threads... but to no avail sadly :/
bbbe wrote:
19 Nov 2025, 19:39
I'm not claiming to be an expert. I'm someone who has clearly experienced a clear issue, subjectively, anecdotally. I still do stand by that fact.
I'm not denying that you had an “issue”, I just think that it can be quantifiably measured.
Psychological bias towards a feel also exists. What you find “snappy” may feel “floaty” to someone else.
bbbe wrote:
19 Nov 2025, 19:39
Computer systems are large complex systems and you can only try to fish out a couple of needles in the haystack
I agree, but they're a countable, discrete set of quantifiable problems. Not a continuous set. Not something abstract.
I also disagree that there is an inherent architectural flaw on either side, as concrete evidence is non-existent in that regard.
bbbe wrote:
19 Nov 2025, 19:39
Hey, what if it's all the chiplet tsmc thing?
There are a myriad of larger fish to catch.
CPU architectural differences aren't even close to being the main culprit, if you ask me. :)
bbbe wrote:
19 Nov 2025, 19:39
At the end of the day whatever the real bulk of issues with ryzen platform are in terms of input I don't really care.
I agree, a common user shouldn't care about the majority of these things. :D
bbbe wrote:
19 Nov 2025, 19:39
I just care to try all possible options available without having to just give up and submit oneself to whatever is part of the status quo and wait for them or someone adjacent to one day just post an analysis.
I also agree with this, pragmatism is a virtue.
bbbe wrote:
19 Nov 2025, 19:39
Which, btw, no shade to people who just plug the machine in and call it a day: I just happen to be annoyed enough to find out why. It could even be nvidia's driver stack and how it fits in the bigger picture with ryzen. It could literally be anything. But hey, until there is someone with better expertise who may actually further the conversation by participating, I will keep trying all the stupid options and keep those that do seem to have affected something.
That's definitely one way to tackle these (abstract-like) “problems”, albeit the most time-consuming one.
I see no issue in this approach though, so long as the person specifically discerns that it was a fix for his issue, on his system, which may or may not be applicable to every possible HW combination.
bbbe wrote:
19 Nov 2025, 19:39
Conveniently enough, I am setting up a measuring rig for simple click to photon. Will see if I can get results on that with via805 vs usb3 on mobo (which are coincidentally ASM) and usb2 ports which I believe also go thru ASM (I could be wrong).
Also, another thing that's not even close to being discussed publicly is jitter and how systems react to acceleration in input change. There are so many behaviors that take so much effort to quantify that it makes no sense for someone to then come around and say something along the lines of "I've tested all these microcontrollers and they are all identical." Care to show receipts?
I understand, I might've come across a bit rash & insensitive over text.
I understand where you're coming from.
This might be somewhat of a misunderstanding, as I'm not here to bash you or your methods :p
My goal was to try & quantify your “feel” difference in the form of data, which other users can take a look at using the generated .etl file.
It was by no means a way to devalue your experience & your perception.
gster wrote:
24 Nov 2025, 13:14
What do you think the best PCIe to USB is, based on price to performance?
I'm unable to give you a concrete answer to this question, as I haven't had the time to devise a testing methodology for the USB PCIe AICs that I own.
I will try to provide some data by the end of the year, hopefully.
I have ordered a MCS9990-based (USB EHCI controller) AIC as well to add into the mix :D
I've also seen reports of Vanguard (Valorant's kernel anti-cheat software) disallowing 3rd-party USB PCIe AICs such as ASMedia ones, even with the stock MS driver used...
Omg I agree with everything you are saying. I do have to say one thing. To everyone in this thread. I just cooked my mobo by experimenting too much and so I had to go back to my intel laptop. OH MY GOODNESS. I'm getting literal goosebumps. My eyes are rolling up on their own. Mouse feel good. Keyboard feel amazing. All the same setup. I just can't like. Whatever things we are here tryna do to fix the amd mess. God d**n does it just work like bruh. Butter smooth, predictable, I feel the mouse moving, so delicate, so precise, I don't even have to work against the computer. My laptop (intel + nvidia) is pumping out around 360-400 frames on valo. The amd system (amd + nvidia) was doing 800-900. Yes, I miss the extra frames and the slightly lower avg click-to-photon latency but damn, the input is PERFECT. I mean what I say - PERFECT.

Please, someone explain this. I just can't. I can't. I just fire up an intel system and its input is working as it should. WHY????

Again, I already tested two 9800x3Ds, pci usb (which did help), win10, win11. Same install types on both amd build and intel. BUT DAMN.

Idk maybe it could also be nvidia interop I really don't know. Or just the type of driver implementation I don't even want to care. WHY DOES IT FEEL SO RIGHT PLEASE EXPLAIN I'M LOSING MY MIND.

I'm now thinking of really just maining valo on my laptop. It's like... how can I put it. When I aim in valo on my intel laptop I just don't have to put an effort into thinking how to move the mouse to 1 tap smoothly. With amd, hey, I defo got good with practice and I could narrow down consistency but sh****. How much more effortless is aiming on intel (+ nvidia) I just can't. I'm crashing out....

So I'm gonna get the same mobo model replacement but idk, I'm lowkey not even sure I care about the build anymore. Not that I was happy with the intel PC system either with the mobo I had: couldn't hit 7200 MHz stable on that one, and it was definitely its own room heater.

AMD's gotta do something to their input pipeline, it's just so trash.

Yes, the fps is good, yes you do feel the snappiness of that cache in terms of how it pops fast and effortlessly. How easily AMD handles 2.5GbE downstream. I love it. But DAMN. THE MOUSE & KB!

Again, I don't know how to describe it. I almost sometimes catch myself thinking I might have gotten used to intel too much but idk, I doubt it. There is something with the latency characteristics in terms of timing & pacing.

Like, say the total latency on the laptop is higher than on the AMD system - idk, I didn't measure - but say it is. Now take, say, a sample of 1-2s of frames on a timeline. Now for each of those frames within the sample, compute some kind of metric, like output pixel displacement vs mouse displacement per frame. Imagine we had this type of data.

Now let's plot these hypotheticals, in terms of how they feel to me atm:

Laptop (intel 13th gen + rtx 4070 mobile):

Code: Select all

|-----------|-----------|-----------|-----------|-----------|-----------
-^----------- ^ -----------^-----------^-----------^-----------^--------
System (amd 9800x3d + rtx 5070 ti):

Code: Select all

|------|------|------|------|------|------|------|------|------|------
-^----------^ -------^ ----^----^---------^---^-------^------------^--
(Yes I intentionally skipped one frame, as I do believe that there is something with event timing granularity too at times)

(Keep in mind here, I tried all options, nvidia reflex off/on/ultra, frame limiters on/off ... blah blah)

And how it feels is that the mouse inputs on intel just happen more on point. They just do, or they happen on a tighter latency offset for each sample, or whatever it is. So now just imagine which of the graphs shown is easier for a brain to predict/adapt to in realtime scenarios.

So again, on laptop 400fps avg, render latency 4ms in valo. System, 800-900 fps, 1ms render latency. And I would take the former any day with the way the input feels.

Yes I can kind of tell that I see less frames and maybe the system takes a bit longer to catch up, but d**n, I move my mouse within the game, within windows and it lands every time where I want it to on the right unit of time that makes sense for a human like me. And idk but this kind of discrepancy is absolutely f****** with my brain.

However, there is a catch here: if I never had these two side by side, it would likely have been harder for me to perceive this downgrade, as one would just take it as a given. But once you feel the difference it's hard to let go. It's like switching from 60hz to 240hz+. Except arguably it's worse, as hz is cycles per second assumed at roughly even intervals, and what I'm describing is more like average cycles per second with a perceivable high jitter coefficient (ie variance from sample window to sample window, or a high max/min spread).
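Since I keep hand-waving at a "jitter coefficient": assuming you have a log of inter-event deltas (e.g. from a Raw Input logger like the sketch earlier in the thread), the stats I mean are just per-window mean/stddev/min/max. A toy sketch with made-up stand-in numbers, to show how two streams with an identical average rate can have a wildly different spread:

Code: Select all

// jitterstats.cpp -- same mean interval, very different spread.
#include <cmath>
#include <cstdio>
#include <vector>

struct Stats { double mean, stddev, min, max; };

// Mean / stddev / min / max over one window of inter-event deltas (ms).
static Stats summarize(const std::vector<double>& w) {
    double sum = 0, lo = w[0], hi = w[0];
    for (double d : w) { sum += d; lo = d < lo ? d : lo; hi = d > hi ? d : hi; }
    double mean = sum / w.size(), var = 0;
    for (double d : w) var += (d - mean) * (d - mean);
    return { mean, std::sqrt(var / w.size()), lo, hi };
}

int main() {
    // Stand-in data: a steady 1000 Hz stream vs the same mean rate with jitter.
    std::vector<double> steady(1000, 1.0);     // exactly 1.000 ms apart
    std::vector<double> jittery;
    for (int i = 0; i < 1000; ++i)
        jittery.push_back(i % 2 ? 0.4 : 1.6);  // alternating, mean still 1.0 ms

    for (const auto* v : { &steady, &jittery }) {
        Stats s = summarize(*v);
        printf("mean %.2f ms  stddev %.2f ms  min %.2f  max %.2f  (max/min %.1fx)\n",
               s.mean, s.stddev, s.min, s.max, s.max / s.min);
    }
    return 0;
}

Both streams average 1.0 ms; only the spread differs, and the spread is the thing I'm claiming my hands notice.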

It's just indescribable. Someone please. Get a master's. Get a PhD. Find out what the hell is going on. While Claude is giving a compelling answer like:
The Windows input stack treats all x86 CPUs identically—the differences emerge from hardware
Windows processes mouse and keyboard input through a standardized kernel pipeline: the HID class driver (hidclass.sys) receives USB interrupts, queues Deferred Procedure Calls (DPCs), and routes input through Win32k.sys to application message queues. Since Windows NT moved to asynchronous per-thread input queues, there's been no CPU-specific "tick-based" synchronous mode. However, the timing precision of this pipeline varies dramatically based on underlying hardware characteristics.
The critical difference lies in how quickly and consistently the CPU can service input interrupts. Intel's 13th-gen processors integrate all I/O controllers on the same die as the CPU cores, meaning a USB interrupt travels through a single clock domain to reach application code. AMD Ryzen processors separate compute (CCDs) from I/O (IOD) across different chiplets connected via Infinity Fabric, introducing mandatory clock-domain crossings that add both latency and—crucially—latency variance.
Meanwhile, I'm saying it. Today Intel (maybe + nvidia) >> AMD (maybe + nvidia) with mouse + keyboard input feel, period. Even if you can get AMD down to a lower click to photon latency. Absolutely superior.

Update:

I also asked Claude to do more comprehensive research, and with amd you sometimes land on quite comical finds. Unfortunately I'm not allowed to post links, but there is one post that Claude managed to pull off of archive dot org or idk what it is. It goes like this:
Chipset Driver Bug: USB ports have different polling rates (affects all ryzen systems)

When using standard WinUSB drivers on Windows 10 Ryzen systems, only USB 3.0/3.1/3.2 Gen 1 ports are polled every 8 ms. USB Gen 2 ports (and USB 2.0 ports) are polled at 16 ms, which causes unnecessary issues in certain use cases where a timeout occurs waiting on the new input. This also affects internal USB headers.

This was found using a gamecube controller adapter by Nintendo with standard WinUSB drivers (installed via Zadig). However, I have reproduced the polling rate issue with other hardware.

This occurs across all Ryzen platforms (including laptops).

It can be worked around by using a 3.0/3.1 Gen1/3.2 Gen1 port directly on the motherboard, but not everyone has these available if more hardware needs to be polled at 8ms. Also, laptop users might not have any ports that work properly.

Can the chipset drivers be changed so that the polling rate is consistent (and minimized) across ALL USB ports on Ryzen platforms?
Bizarrely, this could be one of the reasons (maybe?) why the VL805 works well: it's a USB 3.0 5 Gbps controller. I would still lean towards VIA (VL805) over ASMedia, however. Although I would be careful with a claim like that, as I'm not aware whether USB HID events are still processed through the chipset when you plug the PCIe USB card into CPU PCIe lanes.

This was also an interesting part of the report that Claude generated:
## Timer resolution and interrupt handling show measurable AMD disadvantages

Three timing subsystems affect input "feel," and all three behave differently on AMD:

**QueryPerformanceCounter (QPC) latency** varies by platform. On Intel Core i7-6700K, QPC calls complete in approximately **11 nanoseconds**. Testing on AMD Ryzen 7 1700X showed certain configurations requiring **2,491 nanoseconds per call**—220× slower—when Windows falls back to platform timers instead of the CPU's Time Stamp Counter (TSC). While modern Ryzen processors have largely resolved this through invariant TSC support, motherboard BIOS configurations can still trigger fallback behavior.

**HPET frequency differs** between platforms: Intel implementations run at **24.00 MHz** (~41.67ns resolution) while AMD runs at **14.32 MHz** (~69.83ns resolution). AMD's roughly 40% lower HPET clock (and correspondingly coarser tick resolution) affects any application or driver relying on HPET for timing.

**DPC latency measurements consistently favor Intel**. Community benchmarks show Intel systems typically achieving ~40μs average DPC latency versus AMD systems at ~80μs. One overclocker reported that even disabling an entire CCX on a Ryzen 5950X couldn't bring DPC latency down to Intel levels, suggesting the IOD architecture itself creates irreducible overhead.

## AMD's Infinity Fabric architecture explains the "early/late input" phenomenon

The user's description of inputs feeling like they "skip" or happen "early/late" aligns precisely with how AMD's chiplet architecture handles I/O under load. Chips and Cheese's technical analysis revealed that Zen 4 I/O latency can spike from **82ns baseline to 700ns+ under contention**—an 8.5× variance when CPU cores generate heavy memory traffic through the same Infinity Fabric that handles USB data.

The mechanism works like this: USB controller receives mouse movement → IOD processes interrupt → data crosses IFOP (Infinity Fabric On-Package) link to reach CCD → interrupt triggers DPC on CPU core → DPC drains through kernel to application. Each clock-domain crossing adds synchronization overhead, and if the Infinity Fabric is handling memory requests from gaming threads simultaneously, **I/O requests experience queuing delays**.

This explains why higher FPS doesn't fix the problem—frame rate measures GPU output, not input timing consistency. A game running at 300 FPS still receives mouse updates through the same contention-prone I/O path. Intel's ring bus architecture keeps I/O and compute in the same clock domain, largely avoiding this class of timing variance.
Please do, because until then you have added absolutely nothing of value to the thread - it's borderline userbench drivel at this point. People are notoriously bad at isolating changes and causes/effects even in the best case - so all these anecdotes in the thread from people coming from intel to AMD without actually giving any specifics are useless. The prevalence of anecdotes pointing in this direction rather than the other one is probably explained by more people swapping from Intel -> AMD than the other way round, and then deciding that the cpu vendor change is the culprit rather than anything else.


Also, more components within a pipeline doesn't necessarily mean there is higher latency involved. So I'm not sure why the fuss about i/o die -> fabric -> core.

Slender
Posts: 1747
Joined: 25 Jan 2020, 17:55

Re: Ryzen vs Intel Input Lag/Latency

Post by Slender » 11 Dec 2025, 23:25

never going back to trash intel after feeling amd.
latest good intel is the 9700k and 10700k; after that - trash e-cores with broken mouse feel.
