Ryzen vs Intel Input Lag/Latency

Everything about latency. This section is mainly user/consumer discussion. (Peer-reviewed scientific discussion should go in Laboratory section). Tips, mouse lag, display lag, game engine lag, network lag, whole input lag chain, VSYNC OFF vs VSYNC ON, and more! Input Lag Articles on Blur Busters.
witega
Posts: 72
Joined: 08 Jun 2019, 11:40

Re: Ryzen vs Intel Input Lag/Latency

Post by witega » 15 Dec 2025, 01:24

kyube wrote:
15 Feb 2025, 19:59
snip
Sorry to necro an old post here, but it's been a while since I've been gaming on the 9800X3D and I wanted to share some things I've learned, and see if there's any agreement/disagreement here.

I think the biggest challenge for me on this AM5 platform is getting away from the old wisdom of Intel tweaking. A lot of that advice just isn't applicable on AM5, especially with the X3D chips. My brain had a hard time accepting that leaving things on "Auto" or default is just the way things are now.

Your advice here (and elsewhere) seemed to echo similar restraint when it comes to tuning, that is, minimal user intervention at the BIOS level, for example. But I really liked how you approached things not just from a performance and low-latency perspective, but also placed a much greater importance on stability than you really see elsewhere on the internet (many people on Reddit and other forums just love risky benchmark chasing, or applying values without thinking through what those actually do). I appreciate your posts here.

Anyway just wanted to share some of my own BIOS/Windows settings that I've found to be a pretty good baseline for performance. My system is:

CPU: 9800X3D
Mobo: Asrock Steel Legend X670E
RAM: Kingston Fury Beast DDR5 32GB CL30 @ 6000 MT/s
GPU: ASUS RTX 4090 Strix OC
SSD: 2TB Samsung 990 Pro NVMe M.2
PSU: Corsair AX1600i
Monitor: BenQ Zowie 86x+ 600 Hz

My goals:
Lowest possible input-to-photon latency
Highest sustainable FPS
Tight frametimes (no microstutter)
Absolute reliability
Across esports titles, not just CS2

I think the first thing that should be acknowledged is that the 9800X3D is NOT a "tuning" CPU...it is an "algorithmically optimized" CPU. Manual tuning fights the silicon instead of helping it. That's a huge paradigm shift away from the old Intel-based systems I'm used to. So I want to maximize boost responsiveness and cache locality and NOT force static behavior. Anything that pins clocks, disables SMT, disables idle states, forces “always on” behavior, etc. increases latency variance, even if average FPS looks similar. (And I know all this looks like heresy compared to Intel.)

So the first thing I turned on was EXPO; my RAM kit's 6000 MT/s is the AM5+X3D sweet spot. The X3D cache already hides most memory latency in esports titles anyway.

For the DRAM Performance Mode I selected the "Competitive" option. Slightly tighter secondary/tertiary memory timings, improved memory latency, excellent frametime consistency and no meaningful stability risk. The AMD AGESA Default profile mode is a little too conservative: it leaves some latency on the table and is meant for absolute baseline stability or workstation use. ASRock also gives you an "Aggressive" option, but I found it overtightens sub-timings and reduces training margin. I'd be concerned it would introduce random crashes, WHEA errors, frametime spikes and so forth. The tiny theoretical gain just isn't worth it, especially without ECC.

Here are my other BIOS options. If a BIOS option isn't listed here, I just left that setting on "Auto" or whatever the default is. (NOTE: whenever I wanted certainty in an option I would select something other than "Auto". For example, with Power Supply Idle Control, "Auto" might often pick "Typical Current Idle", but not always; I need certainty, so I explicitly selected that setting.) I went through each one and painstakingly did as much research as I could, and tested/validated where needed.

ZEN5 Gaming Optimizations: Enabled
DFE Read Training: Enabled
Gear Down Mode: Disabled
UCLK DIV1 Mode: UCLK=MEMCLK
SoC / Uncore OC Mode: Enabled
SoC Voltage VDDCR_SOC: Auto
VDDCR_SOC Load-Line Calibration = Level 3
Infinity Fabric Frequency: 2000 MHz
DRAM voltages left on Auto
Above 4G Decoding: Enabled
Suspend to RAM: Disabled
XHCI Hand-off: Disabled
Global C-state Control: Enabled
Power Supply Idle Control: Typical Current Idle
Local APIC Mode: xAPIC
ACPI _CST C1 Declaration: Enabled
MCA FruText: False
SMU and PSP Debug Mode: Disabled
MONITOR and MWAIT disable: Disabled
SVM Lock: Disabled
SVM Enable: Disabled
ACPI SRAT L3 Cache as NUMA Domain: Disabled
PSP Error Injection Support: False
Power Down Enable: Disabled
Disable Memory Error Injection: True
TSME: Disabled
Memory Context Restore: Enabled
IOMMU: Disabled
PCIe ARI Support: Disabled
PCIe All Port ECRC: Disabled
Advanced Error Reporting (AER): Not Supported
PCIe ARI Enumeration: Disabled
dGPU Only Mode: Enabled
UMA Version: Non-legacy
GPU Host Translation Cache: Disabled
NB Azalia: Enabled
PCIe Loopback Mode: Disabled
Persistence mode for legacy endpoints: Disabled
Retimer margining support: Disabled
FCH Spread Spectrum: Disabled
CPPC Dynamic Preferred Cores: Cache
GFXOFF: Disabled
Pluton Security Processor: Disabled
DRTM Support: Disabled
SMM Isolation Support: Disabled
ABL Console Out Control: Disabled
App Compatibility Database: Disabled
Unused GPP Clocks Off: Enabled
Clock Power Management (CLKREQ#): Enabled
PM L1 SS: L1.1_L1.2
DDR5 Nitro Mode: Enabled
DDR5 Robust Training Mode: Auto
Nitro RX Data: Auto
Nitro TX Data: Auto
Nitro Control Line: Auto
Nitro RX Burst Length: Auto
Nitro TX Burst Length: Auto
PBO: Auto
PPT / TDC / EDC: Auto
Performance Preset (in OC Tweaker menu): Auto
Boost Override: Disabled
Curve Optimizer: Not forced
Manual CPU OC: Not used

SMT does stay on. Modern engines are multi-threaded, and if I disabled SMT, Windows tasks would compete with game threads, which means more context switching and worse 1%/0.1% lows. X3D+SMT is explicitly optimized by AMD too. The old "disable SMT/HT" advice comes from an Intel era with fewer cores, bad Windows scheduling, older engines and CPUs with limited cache. None of that applies anymore.
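If you want to double-check SMT is actually active from inside Windows, a quick logical-vs-physical core count comparison is enough (a rough sketch using the third-party psutil package; 16 vs 8 is what I'd expect on a 9800X3D, not a guarantee):

```python
# Minimal sketch: confirm SMT is active by comparing logical vs physical cores.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

logical = psutil.cpu_count(logical=True)     # threads visible to Windows
physical = psutil.cpu_count(logical=False)   # physical cores

print(f"Logical CPUs:   {logical}")
print(f"Physical cores: {physical}")
print("SMT appears", "ENABLED" if logical and physical and logical > physical else "DISABLED")
```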

Some other BIOS settings with my comments:

AMD fTPM: Disabled (but you can leave this enabled if you need it. AMD fully fixed the stutter issue people had with fTPM enabled in later AGESA updates. Disabling this might introduce more background activity in Windows, but I haven't confirmed this yet. fTPM should have zero impact on performance or latency as it doesn't run on game threads and does not poll during gameplay. None of the games I play require TPM though.)

PSS Support: Enabled (I've read elsewhere to keep this disabled but I disagree. This enables fast boost transitions, it works together with CPPC & Preferred Cores and there's really no downside on modern systems. I could not find a performance penalty with this on.)

NX Mode: Enabled (Has no impact on gaming performance or latency. It is enforced at the hardware page-table level and does not run during gameplay).

Re-Size BAR Support: Disabled (The GPU bandwidth of the 4090 is already far beyond what esports engines need. Frametime consistency, 1%/0.1% lows and input-to-photon latency were slightly hurt when I had this enabled. YMMV)

DF Cstates: Enabled (I've seen other guides that have this disabled but I disagree on a few points. DF Cstates control low power idle states for the Data Fabric and on AM5 DF Cstates are very light and the exit latency is extremely fast. They are designed to work with CPPC, boost and Windows scheduling. This doesn't work like old "deep sleep" behavior. Disabling this will hurt your latency consistency instead of helping it)

ECC: Disabled (My RAM is non-ECC DDR5 UDIMM; it has on-die ECC only, which is not system ECC. Explicitly selecting "Disabled" tells AGESA there is no system ECC and to use the normal, lowest-latency memory path.)

And that's pretty much it. I've seen other guides telling people to max out tREFI but I think that's bad advice (it only makes sense for short benchmark runs/LN2/XOC/etc). Auto already sets it high but safe, dynamically adjusts with temperature, plays well with Nitro Mode, and gives you 99% of the latency benefit with 100% of the stability.
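To put rough numbers on why maxing tREFI buys so little: the share of time the DRAM is tied up refreshing is roughly tRFC divided by tREFI, so the gain shrinks quickly once tREFI is already high. A back-of-the-envelope sketch (all nanosecond values below are illustrative assumptions, not my kit's trained values):

```python
# Rough relationship: fraction of time DRAM is unavailable due to refresh
# is roughly tRFC / tREFI. All values below are illustrative assumptions,
# not numbers read from my kit.
tRFC_ns = 295.0
scenarios = {
    "JEDEC-ish default": 3_900.0,   # ~3.9 us
    "raised Auto":       30_000.0,  # assumed already-raised Auto value
    "maxed out":         65_000.0,  # typical "max tREFI" tweak target
}

for label, trefi_ns in scenarios.items():
    print(f"{label:18s} tREFI={trefi_ns / 1000:>5.1f} us -> "
          f"~{tRFC_ns / trefi_ns:.2%} refresh overhead")
# The gain shrinks rapidly as tREFI grows, while the stability margin you
# give up (skipped refreshes, temperature sensitivity) is real.
```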

Windows Settings (still using 23H2)

Windows Game Mode: On (This prioritizes game threads and reduces background interference but more importantly works correctly with Ryzen CPPC + X3D)

Hardware-Accelerated GPU Scheduling (HAGS): On (This is controversial, I know, as there have been reports from different people that their esports games run worse, usually when there isn't a GPU bottleneck, but supposedly Microsoft has made some fixes since. I haven't encountered any of the stuttering issues that were reported. HAGS should in theory reduce CPU scheduling overhead and lower render queue latency, which is especially beneficial with an RTX 4090.)

Xbox Game Bar: Off (prevents overlay hooks, avoids background capture logic, reduces DPC latency risk, etc.). In addition to this, all Background Capture options in Windows Settings have been turned off.
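If you want to verify these toggles actually stuck after a Windows update, the state is readable from the registry. A read-only sketch below; the key and value names are the commonly cited locations and are assumptions on my part, they can differ between Windows builds:

```python
# Read-only sketch: check Game Mode / Game DVR capture / HAGS state from the
# registry. Key and value names are the commonly cited locations and are
# assumptions -- they may differ between Windows builds.
import winreg

def read_dword(root, path, name):
    try:
        with winreg.OpenKey(root, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None  # key/value not present on this build

checks = [
    ("Game Mode (AutoGameModeEnabled)", winreg.HKEY_CURRENT_USER,
     r"Software\Microsoft\GameBar", "AutoGameModeEnabled"),
    ("Game DVR capture (GameDVR_Enabled)", winreg.HKEY_CURRENT_USER,
     r"System\GameConfigStore", "GameDVR_Enabled"),
    ("HAGS (HwSchMode, 2 = on)", winreg.HKEY_LOCAL_MACHINE,
     r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers", "HwSchMode"),
]

for label, root, path, name in checks:
    print(f"{label}: {read_dword(root, path, name)}")
```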

Notifications: Off

USB Power: in Device Manager, under every USB Root Hub's Power Management tab, I unchecked “Allow the computer to turn off..."

Timer Resolution: W11+Game Mode+HAGS already manages this properly. No need to do registry hacks anymore or use ISLC.

Power plan: Balanced (and Power Mode is also set to "Balanced" in Windows Settings on W11). This preserves AMD CPPC logic, keeps cores ready and has the best 1%/0.1% lows in my testing. AMD supposedly tunes Ryzen around Balanced. I believe AMD also has a "Ryzen Balanced" power profile you can download with the chipset software, but I never tried it. Setting a High Performance or Ultimate Performance plan works against the X3Ds. Boost responsiveness is already maxed under Balanced, and a higher performance plan can increase boost oscillation and DPC latency and make frametimes less smooth.
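A quick way to confirm which plan is actually active is the built-in powercfg tool (sketch below; the GUID is the one Windows normally assigns to Balanced, and the plan-name match is the fallback in case a vendor plan uses a different GUID):

```python
# Sketch: confirm the active Windows power plan via the built-in powercfg tool.
# 381b4222-... is the GUID Windows normally assigns to Balanced; the plan-name
# string match is the fallback in case a vendor plan reuses a different GUID.
import subprocess

BALANCED_GUID = "381b4222-f694-41f0-9685-ff5bb260df2e"

out = subprocess.run(["powercfg", "/getactivescheme"],
                     capture_output=True, text=True, check=True).stdout
print(out.strip())

if BALANCED_GUID in out.lower() or "balanced" in out.lower():
    print("Active plan looks like Balanced.")
else:
    print("Active plan is NOT Balanced -- check Power Options.")
```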

The only thing I did not touch was assigning affinities to cores, as my benchmarks didn't demonstrate an actual meaningful difference (unlike prior Intel systems I had, where, say, pinning the last core to my GPU resulted in significant FPS gains).
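For anyone who wants to A/B test affinity the way I did, you can pin a running game temporarily without touching the registry or launchers. A minimal sketch with psutil; the process name and core list are placeholders, not a recommendation:

```python
# Minimal sketch: temporarily pin a running process to specific cores for an
# A/B affinity test. "cs2.exe" and the core list are placeholders, not a
# recommendation -- my own testing showed no meaningful gain on the 9800X3D.
import psutil

TARGET = "cs2.exe"
CORES = [2, 3, 4, 5, 6, 7]  # example core set to test against the default

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        print("Before:", proc.cpu_affinity())
        proc.cpu_affinity(CORES)    # apply the test affinity
        print("After: ", proc.cpu_affinity())
        # proc.cpu_affinity([])     # psutil: empty list restores all cores
        break
else:
    print(f"{TARGET} not running")
```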

If a game has NVIDIA Reflex I keep NVCP Low Latency Mode OFF. If a game does NOT have Reflex I use NVCP Low Latency Mode ON (not Ultra). ULLM forces “just-in-time” frame submission, which can cause frametime jitter, inconsistent pacing and input microstutter. At 600 Hz, consistency beats theoretical queue depth.
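The rule of thumb is simple enough to write down as a lookup (just a restatement of the logic above, not any NVIDIA API):

```python
# Restatement of the rule above as plain decision logic -- not an NVIDIA API:
# Reflex in-game wins; otherwise NVCP Low Latency Mode "On" (not Ultra).
def nvcp_low_latency_mode(game_has_reflex: bool) -> str:
    if game_has_reflex:
        return "Off (enable Reflex in-game instead)"
    return "On (not Ultra -- ULLM can cause frametime jitter)"

for title, has_reflex in [("CS2", True), ("older engine without Reflex", False)]:
    print(f"{title}: NVCP Low Latency Mode = {nvcp_low_latency_mode(has_reflex)}")
```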

I could go into more depth on NVCP settings if anyone cares, but most of that stuff I turned off.

Any questions, or if you want further elaboration on why I chose these particular settings, please ask; I'm willing to go through my reasoning.
Last edited by witega on 17 Dec 2025, 11:08, edited 2 times in total.

Hyote
Posts: 518
Joined: 09 Jan 2024, 18:08

Re: Ryzen vs Intel Input Lag/Latency

Post by Hyote » 15 Dec 2025, 14:31

witega wrote:
15 Dec 2025, 01:24
snip
This is actually a very interesting write-up which I will save for the time I decide to buy an AMD system. I would be interested in two things: on my GitHub page there is an AMD power plan which has some necessary settings set to auto while keeping everything else optimized and some kernel events disabled.
The other thing is way riskier but I'm still going to ask: what about disabling AMD PSP (Intel Management Engine's equivalent)?

Sz3ypy
Posts: 11
Joined: 19 Jun 2025, 13:59

Re: Ryzen vs Intel Input Lag/Latency

Post by Sz3ypy » 15 Dec 2025, 15:46

When I get back home I'll do the same, copy it and save it in Notepad 😆

witega
Posts: 72
Joined: 08 Jun 2019, 11:40

Re: Ryzen vs Intel Input Lag/Latency

Post by witega » 15 Dec 2025, 16:27

Hyote wrote:
15 Dec 2025, 14:31
This is actually a very interesting write-up which I will save for the time I decide to buy an AMD system. I would be interested in two things: on my GitHub page there is an AMD power plan which has some necessary settings set to auto while keeping everything else optimized and some kernel events disabled.
Are these custom power profiles you created? I have no idea what yours does or tweaks, as I didn't see any info on your GH page.

"Balanced" is the recommended setting because it allows the CPU to boost faster and more accurately than on a higher performance plan.

I know that sounds counterintuitive, but you have to understand how the X3D chips achieve their performance, what the Balanced power profile is doing and why it matters.

My 9800X3D has a massive L3 cache and extremely fast, opportunistic boosting. And it has very fine-grained power and thermal control. These chips are power and temperature sensitive by design. AMD limits sustained voltage, the boost behavior is bursty and precise, and cache hits matter more than raw clocks. The goal for the X3D chips is fast response, not staying at max clocks forever like you would on older Intel chips.

Now with that chip and W11, what the Balanced profile does is use CPPC and let the CPU request boost states directly. It's responding in microseconds (not milliseconds) and allows aggressive boost when load appears and fast downclocking when load disappears. In short, the Balanced profile gives the CPU control, which is exactly what you want your X3D to have.

The problem with High/Ultimate Performance and similar plans is that while they sound good, they end up hurting rather than helping.

The first thing is they flatten power behavior, which means cores are kept at higher idle clocks, which reduces idle residency and increases baseline power draw. The problem with that is it raises idle temperature, which eats into thermal headroom, thus reducing peak boost opportunities. X3D chips boost higher when they are cooler, not when they idle hot.

The other thing is they interfere with CPPC hints. They reduce Windows' willingness to follow CPPC hints, bias it towards "always on" clocks instead of burst boosting and can delay optimal core selection. That can worsen frametime consistency and boost precision on short game-thread bursts.

When you keep everything "awake" you increase background scheduling contention, which makes short engine stalls more visible and does nothing to improve cache-bound workloads. You end up with more activity, not more performance.

Just remember this matters more for X3D chips than non-X3D chips. On non-X3D chips raw clock speed matters more, cache misses hurt more, high clocks can sometimes brute force issues, etc.

But on X3D, cache hits hide memory latency, short boost bursts matter more than sustained clocks, and thermal headroom is precious. X3D wins by being smart, not by being brute-forced. Again, this is really different from how we understood tuning old Intel CPUs.

And AMD themselves have been very clear, both internally and externally, that X3D boost behavior is tuned around Balanced, or more precisely that W11 Balanced+CPPC is the target environment. Esports latency is best when boost decisions are agile.

It's an old myth that Balanced causes stutter or slow wake-up: modern CPUs exit C-states in microseconds, don't sleep too deeply for games and boost instantly on runnable threads. Balanced optimizes latency-to-boost, boost accuracy, thermal headroom and frametime consistency. Balanced lets the CPU sprint when it needs to, whereas a higher performance plan forces it to jog all the time, which is not what we want.

If all you cared about were taking benchmark screenshots to post online and wasting idle power, then sure, choose a High/Ultimate Performance plan.
The other thing is way riskier but I'm still going to ask: what about disabling AMD PSP (Intel Management Engine's equivalent)?
There is no gaming or latency benefit to turning it off, and on modern AM5 systems it can cause real problems. It can also increase the risk of anticheat conflicts.

If someone says "it feels smoother" with it disabled, that "smoothness" usually comes from a reset (a reboot, reset BIOS state, cleared cached firmware state, temporarily reduced background software, etc.). If PSP were causing stutter you'd see periodic spikes across games and scheduler anomalies, but that doesn't happen because PSP isn't involved.

Hyote
Posts: 518
Joined: 09 Jan 2024, 18:08

Re: Ryzen vs Intel Input Lag/Latency

Post by Hyote » 15 Dec 2025, 18:02

witega wrote:
15 Dec 2025, 16:27
snip
What you write is factually correct, but I'm always looking for the what-ifs. beyond.pow was made by BEYONDPERF_LLG. The other one was modified by someone who is also using a 9800X3D; one option I remember he set to Auto is Heterogeneous Thread Scheduling Policy.
I checked them for settings a while back and IIRC they are using beneficial settings for both Intel and AMD. The real magic is in the kernel event disabling, which I have no idea about, but it certainly works, because if you switch back and forth between power plans the PC bluescreens.
But again, I only popped these questions out of curiosity; you seem very knowledgeable in general, while I'm mostly just testing tweaks.

Hyote
Posts: 518
Joined: 09 Jan 2024, 18:08

Re: Ryzen vs Intel Input Lag/Latency

Post by Hyote » 15 Dec 2025, 18:18

One more thing that nobody actually knows is whether Intel CPUs benefit from disabling power saving features and applying settings that look the best on paper. I'd say you dropped a banger of a small optimization guide packed into a comment.

witega
Posts: 72
Joined: 08 Jun 2019, 11:40

Re: Ryzen vs Intel Input Lag/Latency

Post by witega » 15 Dec 2025, 18:30

Hyote wrote:
15 Dec 2025, 18:02
Heterogeneous Thread Scheduling Policy. I checked them for settings a while back and IIRC they are using beneficial settings for both Intel and AMD. The real magic is in the kernel event disabling, which I have no idea about, but it certainly works, because if you switch back and forth between power plans the PC bluescreens.
But again, I only popped these questions out of curiosity; you seem very knowledgeable in general, while I'm mostly just testing tweaks.
These are bad tweaks for my system, let me explain why.

That "Heterogeneous Thread Scheduling Policy" exists for heterogeneous CPUs (so Intel Alder/Raptor Lake) and it tells Windows how to bias scheduling across different core types.

My 9800X3D is a single CCD with homogeneous cores and a uniform cache topology. There's no P/E core split. There's nothing heterogeneous to schedule, so Windows would effectively ignore that policy for my CPU. AMD documentation and Windows guidance never recommend touching it on Ryzen, which makes sense because there's nothing for it to control.

Regarding the "kernel event disabling" magic...this is dangerous. Windows power plans are validated as a set. When you disable kernel events, force mismatched policies and break the notification chain, switching power plans can trigger a BSOD (as you pointed out), resume/sleep breaks and scheduler state desynchronizes. "It bluescreens if you switch back, but that means it works" is a huge warning sign; crashes are not proof of optimization.

I know where your advice comes from. It was back in the early Alder Lake tuning days, when people fought Windows' hybrid scheduling before it matured and misapplied Intel-specific hacks. The thing is those hacks were never meant for Ryzen and are actively harmful on AMD platforms.
Last edited by witega on 15 Dec 2025, 18:58, edited 1 time in total.

witega
Posts: 72
Joined: 08 Jun 2019, 11:40

Re: Ryzen vs Intel Input Lag/Latency

Post by witega » 15 Dec 2025, 18:48

Hyote wrote:
15 Dec 2025, 18:18
One more thing that nobody actually knows is whether Intel CPUs benefit from disabling power saving features and applying settings that look the best on paper.
Well, on much older Intel CPUs (Nehalem/Haswell) power management was coarse. C-state exits were slower and the turbo logic was simpler. OS scheduling was less aware too. Back then I remember locking clocks, disabling deep C-states and forcing high P-states to reduce worst-case latency spikes when I had a very specific low-latency audio workload.

Alder Lake's launch caused some chaos because Windows scheduling was immature, E-cores confused thread placement and power policies behaved unpredictably. Disabling E-cores and C-states and forcing high clocks helped temporarily. But once Windows and drivers matured, those hacks stopped helping.

Modern CPUs are designed around power management now. On both current Intel and AMD CPUs, boost decisions happen in microseconds and the C-state exit latencies are negligible. And thermal headroom directly affects boost, which is also why idle behavior is part of the performance model. When you disable power saving features on these modern CPUs you raise idle temperature, which reduces turbo headroom, increases contention and can worsen sustained boost. I know the old intuitive logic of "if it never sleeps it's always ready", but on modern silicon it's the wrong approach.

And I know it looks good on paper, having clocks appear stable, monitoring graphs that look flat and numbers that feel reassuring, but the thing is games don't care about flat clocks, they care about fast reaction.

And yeah, the reason people still claim "nobody knows" is that performance variance is noisy, games are inconsistent, human perception is unreliable, reboots reset transient state and placebo is extremely strong.

Ask yourself: if disabling power saving features truly improved gaming performance, why wouldn't CPU vendors ship systems that way by default? They don't, because it doesn't. Power management is now a performance feature, not a limitation. Disabling power saving features is largely a relic of older architectures and early hybrid-core growing pains.

urikawa
Posts: 15
Joined: 13 Mar 2024, 14:21

Re: Ryzen vs Intel Input Lag/Latency

Post by urikawa » 16 Dec 2025, 09:10

witega wrote:
15 Dec 2025, 01:24
snip
Very interesting post, thanks for your work, I agree with most of your settings.
I've got some questions :p, just asking whether you considered these topics, not a judgement.

Engine side, the CORE of performance:
- Why Local APIC Mode: xAPIC and not x2APIC? High-frequency devices generate a lot of interrupts, and x2APIC is not only for 250-core CPUs; it is for managing high-frequency interrupts, e.g. a gaming system (8 kHz mouse, 8 kHz keyboard, a high refresh rate screen like your 600 Hz) or virtualization with many VMs.
- You don't touch PBO negative (Curve Optimizer) to lower voltage, so the temperature of the 9800X3D remains higher and the cores can't boost to 5.2 GHz or more during a gaming session so that the 0.1% and 1% FPS remain stable.
- You disable memory power saving options (GDM, Power Down, etc.) but you don't touch spread spectrum for the VRM, CPU and other parts to stabilize BCLK at 100 MHz and achieve better stability?

Another question about USB devices: for keyboard and mouse, do you use CPU-attached or chipset-attached USB ports? Do you have any USB devices polling above 1 kHz?

Also, I thought AGESA with ZEN5 Gaming Optimizations: Enabled automatically disables SMT, but maybe it means something different on ASRock. What AGESA version do you have?
Maybe you can post your SCEWIN dump :p

witega
Posts: 72
Joined: 08 Jun 2019, 11:40

Re: Ryzen vs Intel Input Lag/Latency

Post by witega » 16 Dec 2025, 10:46

urikawa wrote:
16 Dec 2025, 09:10
Very interesting post, thanks for your work, I agree with most of your settings.
I've got some questions :p, just asking whether you considered these topics, not a judgement.
No problem, I appreciate your questions!
Engine side, the CORE of performance:
- Why Local APIC Mode: xAPIC and not x2APIC? High-frequency devices generate a lot of interrupts, and x2APIC is not only for 250-core CPUs; it is for managing high-frequency interrupts, e.g. a gaming system (8 kHz mouse, 8 kHz keyboard, a high refresh rate screen like your 600 Hz) or virtualization with many VMs.
So a couple of things here. Initially I set it to x2APIC, but the BIOS forced it back to xAPIC. This is normal and intentional. My understanding is that on consumer versions of Windows it may downgrade to xAPIC for compatibility, power management, idle-state coordination and firmware signaling reasons. The important thing is there is no performance loss if it's set to xAPIC.

x2APIC can scale better, yes, but the practical reality on Windows gaming systems is different. An 8 kHz mouse and keyboard, display refresh, GPU interrupts and so on do NOT stress APIC scalability limits.

If we put numbers on it:
8 kHz mouse = 8,000 interrupts/sec
8 kHz keyboard = 8,000 interrupts/sec
GPU interrupts = MSI/MSI-X, batched, low frequency
A 600 Hz display refresh doesn't generate an interrupt per refresh

Modern CPUs handle millions of interrupts/sec without stress. I'm nowhere near a regime where APIC exhaustion, MMIO contention, broadcast interrupt overhead, etc. become a bottleneck.
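To make the headroom concrete, here's the same back-of-the-envelope math in one place (the per-interrupt cost is an assumed ballpark, not something I measured):

```python
# Back-of-the-envelope interrupt budget. The per-interrupt CPU cost is an
# assumed ballpark figure, not a measurement on my system.
mouse_hz = 8_000
keyboard_hz = 8_000
interrupts_per_sec = mouse_hz + keyboard_hz   # ~16,000/s from input devices
cost_per_interrupt_us = 1.0                   # assumed handler cost in microseconds

busy_us_per_sec = interrupts_per_sec * cost_per_interrupt_us
print(f"Input interrupts/sec: {interrupts_per_sec:,}")
print(f"CPU time spent servicing them: ~{busy_us_per_sec / 1e6:.2%} of one core")
# Even with a pessimistic per-interrupt cost this is a small fraction of one
# core, nowhere near APIC or interrupt-delivery limits.
```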

x2APIC helps when you have things like 128+ cores, NUMA-heavy servers, virtualization hosts, SR-IOV-heavy workloads, nested interrupts across many vCPUs, etc. It's not lower latency to select x2APIC.

xAPIC is already optimal for my interrupt rates. There is no input latency or frametime penalty.
- You don't touch PBO negative (Curve Optimizer) to lower voltage, so the temperature of the 9800X3D remains higher and the cores can't boost to 5.2 GHz or more during a gaming session so that the 0.1% and 1% FPS remain stable.
This is probably the most misunderstood part of X3D tuning, judging from what I've read elsewhere on the internet from other users.

Undervolting certainly sounds correct, right? Lower voltage -> lower temps, lower temps -> higher boost, higher boost -> better FPS consistency. This works on non-X3D CPUs.

But it doesn't really work on X3D because the cache stack is thermally sensitive, your voltage margins are much tighter and cache errors don't always crash; they cause retries instead. Those retries show up as random frametime spikes, rare 0.1% low dips and "engine stutter" that is actually micro-instability. The dangerous part is that CO instability often passes stress tests but fails in real games.

Remember AMD ships the X3D already near optimal. AMD uses aggressive V/F curve tuning, limits frequency to protect cache timing and prioritizes consistency over peak clocks. So for me, I'm not leaving safe headroom unused, and I'm avoiding instability masquerading as engine problems.

And 5.2 GHz isn't the goal. Games don't benefit from sustained all-core clocks or locked frequencies. They benefit from fast burst boost, cache residency and low variance, which is why the stock behavior delivers better results.

So the bottom line here is that CO on X3D risks silent instability, which in turn hurts 0.1% lows. Stock behavior is already tuned for cache + gaming, and my consistency is better without CO.
- You disable memory power saving options (GDM, Power Down, etc.) but you don't touch spread spectrum for the VRM, CPU and other parts to stabilize BCLK at 100 MHz and achieve better stability?
Great question, and yes, BCLK should be ~100 MHz (CPU-Z showing 99.98-100.00 MHz is normal; that variation is intentional and harmless), and the CPU-Z clock fluctuation is irrelevant. The CPU does PLL multiplication, internal clock-domain synchronization and cycle-accurate timing internally. A ±0.03 MHz variation at BCLK does not propagate to instruction timing and does not affect cache timing or frame pacing. It's far below the noise floor.
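As a sanity check on the scale, you can just multiply it out (the 52x multiplier is an assumed boost multiplier for illustration):

```python
# Scale check: how much a +/-0.03 MHz BCLK wobble moves the core clock.
# The 52x multiplier is an assumed boost multiplier for illustration.
bclk_mhz = 100.0
bclk_wobble_mhz = 0.03
multiplier = 52.0

core_mhz = bclk_mhz * multiplier
core_wobble_mhz = bclk_wobble_mhz * multiplier
print(f"Nominal core clock: {core_mhz:.0f} MHz")
print(f"Worst-case wobble:  +/-{core_wobble_mhz:.2f} MHz "
      f"({core_wobble_mhz / core_mhz:.4%} of the core clock)")
# ~1.5 MHz on a ~5200 MHz clock, i.e. ~0.03% -- orders of magnitude below
# anything that shows up in frametimes.
```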

Spread spectrum modulates the clock frequency slightly, reduces EMI, improves signal integrity under noise, etc., but it doesn't introduce timing jitter that affects gaming, nor does it hurt stability on modern boards or affect frametimes. The modulation is extremely low frequency and far below CPU or memory timing resolution.

Disabling spread spectrum is usually a bad idea: it can increase EMI, worsen signal integrity and make the system less tolerant of noise, and it only helps when doing manual BCLK overclocking (which I'm not doing).

Memory power saving is different. GDM/Power Down affect memory command scheduling, wake latency and timing determinism. When you disable them you improve latency consistency and frametime stability. Spread spectrum doesn't participate in timing decisions; it only affects clock emission characteristics.

Overall my BIOS choices prioritize deterministic latency and stability over theoretical “on paper” optimizations, which is exactly why my system behaves correctly under real gaming workloads.
Another question about USB devices: for keyboard and mouse, do you use CPU-attached or chipset-attached USB ports? Do you have any USB devices polling above 1 kHz?
You definitely want to use a CPU-attached USB port if it's an 8 kHz device. It reduces latency variance and avoids rare edge-case hiccups. Like I said earlier, you are generating 8,000 interrupts per second. Any extra hop increases interrupt coalescing, scheduling jitter and occasional batching, which increases variance, which is what you feel as inconsistency. W11 uses MSI/MSI-X and distributes interrupts across cores, but the chipset path still introduces more arbitration, whereas the CPU ports have cleaner delivery.
Also, I thought AGESA with ZEN5 Gaming Optimizations: Enabled automatically disables SMT, but maybe it means something different on ASRock. What AGESA version do you have?
No, you're thinking of "Gaming Mode" on ASRock boards, which, if you enable it, disables SMT and a CCD (if you're on a multi-CCD CPU).

ZEN5 Gaming Optimizations is different. It applies AGESA gaming-biased defaults instead of conservative server/general defaults. This improves inter-core wake latency, reduces unnecessary thread bouncing and prioritizes fast burst responsiveness over throughput fairness. That's why the BIOS description says "Improve Zen 5 inter-core latency or core-to-core latencies".

So all it does is enable AMD’s latency-biased Zen 5 scheduling and topology hints for games. It's a safe AGESA-level optimization that improves consistency without overclocking or instability, and it should be enabled for gaming.
