kyube wrote: ↑24 Nov 2025, 16:09
bbbe wrote: ↑19 Nov 2025, 19:39
It's just the same rhetoric that you would hear just about 8-10y ago how a few seemingly engineering background internet personas would claim how there is no perceivable difference in panels higher than 60hz because your eyes can't see higher than that and a bunch of snake oil like that.
That's absolutely not the same rhetoric...
I was only interested in raw data, not speculation.
bbbe wrote: ↑19 Nov 2025, 19:39
At the same time, while you are going to spend time chirping consider the fact that I'm not a single person claiming that they seemed to have had stupid and unreal mouse feel issues on new ryzen systems oob, unlike something you seemingly get on intel. And hey, listen.
Whether it's true or not, nobody actually knows where the bulk of it is coming from and how to actually prove or disprove that fact.
Actually, there are people who know. Most users here prefer to sell snake oil instead of solving their problems, though.
There's a finite set of 'problems', each of which has a solution.
bbbe wrote: ↑19 Nov 2025, 19:39
It could be as stupid as lack of optimizations around hw-kernel level on the amd's end to do with .. windows? x86 instruction set? io eco system standards compliance? Even something as stupid as a drivers/firmware for these usb cards could mean a lot in how something can feel. Or it could be a wider packaging problem in manufacturing in the age of everything accelerated on that end. Also it's a fact that intel io eco system overall had better maturity across the whole stack (hw + sw) over these years. (Intel were one of the first to introduce high performance optane ssds that were unmatched in server space, intel developed thunderbolt standard years ago before usb4 just onset, intel's developed their own nics for decades, intel to date has better memory controllers than ryzen...).
Intel has a better memory controller than Ryzen? An interesting take...
bbbe wrote: ↑19 Nov 2025, 19:39
Can you possibly establish metrics for that and measure it? Of course.
This is what I've been attempting a few times on these forums in a few other threads... but to no avail sadly :/
bbbe wrote: ↑19 Nov 2025, 19:39
I'm not claiming to be an expert. I'm someone who has clearly experienced a clear issue, subjectively, anecdotally. I still do stand by that fact.
I'm not denying that you had an “issue”, I just think that it can be quantifiably measured.
Psychological bias towards a feel also exists. What you find “snappy” may feel “floaty” to someone else.
bbbe wrote: ↑19 Nov 2025, 19:39
Computer systems are large complex systems and you can only try to fish out a couple of needles in the haystack
I agree, but they're a countable, discrete set of quantifiable problems. Not a continuous set. Not something abstract.
I also disagree that there is an inherent architectural flaw on either side, as concrete evidence is non-existent in that regard.
bbbe wrote: ↑19 Nov 2025, 19:39
Hey, what if it's all the chiplet tsmc thing?
There are a myriad of larger fish to catch.
CPU architectural differences aren't even close to being the main culprit, if you ask me.
bbbe wrote: ↑19 Nov 2025, 19:39
At the end of the day whatever the real bulk of issues with ryzen platform are in terms of input I don't really care.
I agree, a common user shouldn't care about the majority of these things.
bbbe wrote: ↑19 Nov 2025, 19:39
I just care to try all possible options available without having to just give up and submit oneself to whatever is part of the quo and wait for them or someone adjacent to one day just post an analysis.
I also agree with this, pragmatism is a virtue.
bbbe wrote: ↑19 Nov 2025, 19:39
Which btw I don't care for people who are and just plug the machine and call it a day. I just happen to be annoyed enough to find out why. It could even be nvidia's driver stack and how it fits in the bigger picture with ryzen. It could literally be anything. But hey, until there is someone with better expertise who's actually may actual further the conversation by participating I will keep trying all the stupid options and keep those that do seem to have affected something.
That's definitely one way to tackle these (abstract-like) “problems”, albeit the most time-consuming one.
I see no issue in this approach though, so long as the person specifically discerns that it was a fix for his issue, on his system, which may or may not be applicable to every possible HW combination.
bbbe wrote: ↑19 Nov 2025, 19:39
Conveniently enough, I am setting up a measuring rig for simple click to photon. Will see if I can get results on that with via805 vs usb3 on mobo (which are coincidentally ASM) and usb2 ports which I believe also go thru ASM (I could be wrong).
Also another thing that's not even close to being discussed publicly is jitter and how systems react to acceleration in input change. So there are so many behaviors that take so much effort to quantify that it makes no sense for then someone to come around and say along the lines of I've tested all these micro-controllers and they are all identical. Care to show receipts?
I understand, I might've come across a bit rash & insensitive over text.
I understand where you're coming from.
This might be somewhat of a misunderstanding, as I'm not here to bash you or your methods :p
My goal was to try & quantify your “feel” difference in the form of data, which other users can take a look at using the generated .etl file.
It was by no means a way to devalue your experience & your perception.
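Something along these lines is the kind of summary I had in mind - a minimal sketch, assuming the mouse-event timestamps have already been exported from the trace into a CSV (the file name and column name are placeholders, not a real export format):

Code: Select all

# Summarize inter-event intervals from exported mouse-event timestamps.
# "mouse_events.csv" with a "timestamp_us" column is a placeholder format.
import csv
import statistics

with open("mouse_events.csv", newline="") as f:
    ts = [int(row["timestamp_us"]) for row in csv.DictReader(f)]

# Inter-event intervals in milliseconds
intervals = [(b - a) / 1000.0 for a, b in zip(ts, ts[1:])]

print(f"events:  {len(ts)}")
print(f"mean:    {statistics.mean(intervals):.3f} ms")
print(f"stdev:   {statistics.stdev(intervals):.3f} ms")
print(f"min/max: {min(intervals):.3f} / {max(intervals):.3f} ms")

The spread of those intervals (and how much it changes under load) is the sort of data that could back up or contradict a "feel" difference.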
gster wrote: ↑24 Nov 2025, 13:14
What do you think the best PCIe to USB is, based on price to performance?
I'm unable to give you a concrete answer to this question, as I haven't had the time to devise a testing methodology for the USB PCIe AIC that I own.
I will try to provide some data by the end of the year, hopefully.
I have ordered an MCS9990-based (USB EHCI controller) AIC as well to add into the mix.

I've also seen reports of Vanguard (Valorant's kernel anti-cheat software) disallowing 3rd party USB PCIe AICs such as ASMedia ones, even with the stock MS driver used...
Omg I agree with everything you are saying. I do have to say one thing. To everyone in this thread. I just cooked my mobo by experimenting too much and so I had to go back to my intel laptop. OH MY GOODNESS. I'm getting literal goosebumps. My eyes are rolling up on they own. Mouse feel good. Keyboard feel amazing. All the same setup. I just can't like. Whatever things we are here tryina do to fix the amd mess. God d**n does it just work like bruh. Butter smooth, predictable, I feel the mouse moving, so delicate, so precise, I don't even have to work against the computer. My laptop (intel + nvidia) is pumping out around 360-400 frames on valo. Amd system (amd + nvidia) was doing 800-900. Yes, I miss extra frames and slightly lower avg click to photon latency but damn, the input is PERFECT. I mean what I say - PERFECT.
Please, someone explain this. I just can't. I can't. I just fire up an intel system and its input is working as it should. WHY????
Again, I already tested two 9800x3Ds, pci usb (which did help), win10, win11. Same install types on both amd build and intel. BUT DAMN.
Idk maybe it could also be nvidia interop I really don't know. Or just the type of driver implementation I don't even want to care. WHY DOES IT FEEL SO RIGHT PLEASE EXPLAIN I'M LOSING MY MIND.
I'm now thinking of really just maining valo on my laptop. It's like... how can I put it. When I aim in valo on my intel laptop I just don't have to put an effort into thinking how to move the mouse to 1 tap smoothly. With amd, hey, I defo got good with practice and I could narrow down consistency but sh****. How much more effortless is aiming on intel (+ nvidia) I just can't. I'm crashing out....
So I'm gonna get the same mobo model replacement but idk, I'm lowkey not even sure I care about the build anymore. And not that I was happy with the intel PC system either with the mobo I had, couldn't hit 7200mhz stable on that one and it was definitely its own room heater.
AMD's gotta do something to their input pipeline, it's just so trash.
Yes, the fps is good, yes you do feel the snappiness of that cache in terms of how it pops fast and effortlessly. How easily AMD handles 2.5GbE downstream. I love it. But DAMN. THE MOUSE & KB!
Again, I don't know how to describe it. I almost sometimes catch myself thinking I might have gotten used to intel too much but idk, I doubt it. There is something with the latency characteristics in terms of timing & pacing.
Like say the total latency on the laptop is higher than the AMD system - idk I didn't measure - but say it is. Now take, say, a sample of 1-2s of frames on a timeline. Now for each of those frames within the sample compute some kind of metric, like output pixel displacement vs mouse displacement per frame. Imagine we had this type of data.
Now let's plot these hypotheticals, in terms of how they feel to me atm:
Laptop (intel 13th gen + rtx 4070 mobile):
Code: Select all
|-----------|-----------|-----------|-----------|-----------|-----------
-^----------- ^ -----------^-----------^-----------^-----------^--------
System (amd 9800x3d + rtx 5070 ti):
Code: Select all
|------|------|------|------|------|------|------|------|------|------
-^----------^ -------^ ----^----^---------^---^-------^------------^--
(Yes I intentionally skipped one frame as I do believe that there is something with event fineness resolution too at times)
(Keep in mind here, I tried all options, nvidia reflex off/on/ultra, frame limiters on/off ... blah blah)
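For the nerds, here's roughly the metric I mean as a dumb python sketch - the numbers are completely invented, just to show the shape of it, not measurements:

Code: Select all

# Made-up per-frame samples: (frame_duration_ms, mouse counts reported
# during that frame, pixels the view actually moved that frame).
frames = [
    (2.5, 40, 20), (2.4, 38, 19), (2.6, 41, 21),   # steady-ish
    (2.5, 55, 12), (2.4, 10, 33), (2.5, 42, 20),   # input bunching up, then catching up
]

# If this ratio bounces around frame to frame, that's the early/late feel.
ratios = [px / counts for _, counts, px in frames]
mean = sum(ratios) / len(ratios)
var = sum((r - mean) ** 2 for r in ratios) / len(ratios)

print("per-frame px/count ratio:", [round(r, 2) for r in ratios])
print(f"mean {mean:.2f}, variance {var:.3f} (higher variance = more off-beat frames)")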
And how it feels is that the mouse inputs on intel just happen more on point. They just do, or they happen on a tighter latency offset for each sample, or whatever it is. So now just imagine, which of the graphs shown is easier for a brain to predict/adapt to in realtime scenarios.
So again, on laptop 400fps avg, render latency 4ms in valo. System, 800-900 fps, 1ms render latency. And I would take the former any day with the way the input feels.
Yes I can kind of tell that I see less frames and maybe the system takes a bit longer to catch up, but d**n, I move my mouse within the game, within windows and it lands every time where I want it to on the right unit of time that makes sense for a human like me. And idk but this kind of discrepancy is absolutely f****** with my brain.
However there is a catch here: if I never had these two side by side, it would likely have been harder for me to perceive this downgrade, as one would just take it as a given. But once you feel the difference it's hard to let go. It's like switching from 60hz to 240hz+, except arguably it's worse, as hz is cycles per second assumed at roughly even intervals. And what I'm describing is something that is more like average cycles per second with a perceivably high jitter coefficient (ie variance from sample window to sample window, or high max/min).
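Rough sketch of what I mean by jitter coefficient, if that helps - the frame times below are invented, not measured:

Code: Select all

# Chop frame (or input event) intervals into short windows and look at
# stdev/mean and max/min per window. High values = the "uneven cycles" feel.
frame_times_ms = [2.5, 2.4, 2.6, 2.5, 4.1, 1.2, 2.5, 2.4, 3.8, 1.3, 2.6, 2.5]

def window_stats(samples, size=4):
    for i in range(0, len(samples) - size + 1, size):
        w = samples[i:i + size]
        mean = sum(w) / len(w)
        stdev = (sum((x - mean) ** 2 for x in w) / len(w)) ** 0.5
        yield mean, stdev / mean, max(w) / min(w)

for mean, coeff, spread in window_stats(frame_times_ms):
    print(f"window mean {mean:.2f} ms, jitter coeff {coeff:.2f}, max/min {spread:.2f}")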
It's just indescribable. Someone please. Get a master's. Get a PhD. Find out what the hell is going on. While claude is giving a compelling answer like:
The Windows input stack treats all x86 CPUs identically—the differences emerge from hardware
Windows processes mouse and keyboard input through a standardized kernel pipeline: the HID class driver (hidclass.sys) receives USB interrupts, queues Deferred Procedure Calls (DPCs), and routes input through Win32k.sys to application message queues. Since Windows NT moved to asynchronous per-thread input queues, there's been no CPU-specific "tick-based" synchronous mode. However, the timing precision of this pipeline varies dramatically based on underlying hardware characteristics.
The critical difference lies in how quickly and consistently the CPU can service input interrupts. Intel's 13th-gen processors integrate all I/O controllers on the same die as the CPU cores, meaning a USB interrupt travels through a single clock domain to reach application code. AMD Ryzen processors separate compute (CCDs) from I/O (IOD) across different chiplets connected via Infinity Fabric, introducing mandatory clock-domain crossings that add both latency and—crucially—latency variance.
Meanwhile, I'm saying it. Today Intel (maybe + nvidia) >> AMD (maybe + nvidia) with mouse + keyboard input feel, period. Even if you can get AMD down to a lower click to photon latency. Absolutely superior.
Update:
I also asked claude to do more comprehensive research and with amd you sometimes land on quite comical finds. Unfortunately I'm not allowed to post links, but there is one post that claude managed to pull off of archive dot org or idk what it is. It goes like this:
Chipset Driver Bug: USB ports have different polling rates (affects all ryzen systems)
When using standard WinUSB drivers on Windows 10 Ryzen systems, only USB 3.0/3.1/3.2 Gen 1 ports are polled every 8 ms. USB Gen 2 ports (and USB 2.0 ports) are polled at 16 ms, which causes unnecessary issues in certain use cases where a timeout occurs waiting on the new input. This also affects internal USB headers.
This was found using a gamecube controller adapter by Nintendo with standard WinUSB drivers (installed via Zadig). However, I have reproduced the polling rate issue with other hardware.
This occurs across all Ryzen platforms (including laptops).
It can be worked around by using a 3.0/3.1 Gen1/3.2 Gen1 port directly on the motherboard, but not everyone has these available if more hardware needs to be polled at 8ms. Also, laptop users might not have any ports that work properly.
Can the chipset drivers be changed so that the polling rate is consistent (and minimized) across ALL USB ports on Ryzen platforms?
This bizarrely could be one of the reasons (maybe?) why the vl805 works well - it's usb 3.0 5gbps. I would still lean towards VIA (vl805) over ASMedia however. Although I would be careful with making a claim like that, as I'm not aware whether usbhid events are processed thru the chipset if you plug the pci usb card into cpu pcie lanes.
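If anyone wants a rough sanity check of what update interval their port is actually delivering, here's a dumb little python script I'd try (windows only, and a crude proxy - it just watches how often the cursor position changes while you move the mouse, it's not a usb analyzer):

Code: Select all

# Spin on GetCursorPos and time how often the cursor position changes.
# With a 1000 Hz mouse you'd expect roughly 1 ms between updates; a port
# being polled every 8/16 ms should show up as ~8/16 ms instead.
# Move the mouse continuously while it runs.
import ctypes
import time

class POINT(ctypes.Structure):
    _fields_ = [("x", ctypes.c_long), ("y", ctypes.c_long)]

user32 = ctypes.windll.user32
pt = POINT()
last = None
change_times = []
start = time.perf_counter()

while time.perf_counter() - start < 5.0:      # sample for ~5 seconds
    user32.GetCursorPos(ctypes.byref(pt))
    cur = (pt.x, pt.y)
    if cur != last:
        change_times.append(time.perf_counter())
        last = cur

intervals = sorted((b - a) * 1000.0 for a, b in zip(change_times, change_times[1:]))
if intervals:
    print(f"median update interval: {intervals[len(intervals)//2]:.2f} ms "
          f"({len(intervals)} samples)")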
This was also an interesting part of the report that claude generated:
## Timer resolution and interrupt handling show measurable AMD disadvantages
Three timing subsystems affect input "feel," and all three behave differently on AMD:
**QueryPerformanceCounter (QPC) latency** varies by platform. On Intel Core i7-6700K, QPC calls complete in approximately **11 nanoseconds**. Testing on AMD Ryzen 7 1700X showed certain configurations requiring **2,491 nanoseconds per call**—220× slower—when Windows falls back to platform timers instead of the CPU's Time Stamp Counter (TSC). While modern Ryzen processors have largely resolved this through invariant TSC support, motherboard BIOS configurations can still trigger fallback behavior.
**HPET frequency differs** between platforms: Intel implementations run at **24.00 MHz** (~41.67ns resolution) while AMD runs at **14.32 MHz** (~69.83ns resolution). This 40% resolution difference affects any application or driver relying on HPET for timing.
**DPC latency measurements consistently favor Intel**. Community benchmarks show Intel systems typically achieving ~40μs average DPC latency versus AMD systems at ~80μs. One overclocker reported that even disabling an entire CCX on a Ryzen 5950X couldn't bring DPC latency down to Intel levels, suggesting the IOD architecture itself creates irreducible overhead.
## AMD's Infinity Fabric architecture explains the "early/late input" phenomenon
The user's description of inputs feeling like they "skip" or happen "early/late" aligns precisely with how AMD's chiplet architecture handles I/O under load. Chips and Cheese's technical analysis revealed that Zen 4 I/O latency can spike from **82ns baseline to 700ns+ under contention**—an 8.5× variance when CPU cores generate heavy memory traffic through the same Infinity Fabric that handles USB data.
The mechanism works like this: USB controller receives mouse movement → IOD processes interrupt → data crosses IFOP (Infinity Fabric On-Package) link to reach CCD → interrupt triggers DPC on CPU core → DPC drains through kernel to application. Each clock-domain crossing adds synchronization overhead, and if the Infinity Fabric is handling memory requests from gaming threads simultaneously, **I/O requests experience queuing delays**.
This explains why higher FPS doesn't fix the problem—frame rate measures GPU output, not input timing consistency. A game running at 300 FPS still receives mouse updates through the same contention-prone I/O path. Intel's ring bus architecture keeps I/O and compute in the same clock domain, eliminating this class of timing variance entirely.
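And if anyone wants to sanity-check the QPC overhead claim from that report on their own machine, here's a rough sketch - python's perf_counter() wraps QueryPerformanceCounter on windows, so the per-call cost is only a loose proxy (it includes interpreter overhead), but a platform-timer fallback in the thousands of nanoseconds would still stick out:

Code: Select all

# Time a large batch of perf_counter() calls and report the average cost.
import time

N = 1_000_000
t0 = time.perf_counter_ns()
for _ in range(N):
    time.perf_counter()
t1 = time.perf_counter_ns()

print(f"~{(t1 - t0) / N:.0f} ns per perf_counter() call (includes python overhead)")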