sorry to bump an old thread.
i have a different onboard network card (an i225-V rev 3) on my Z690 TUF Gaming Plus WiFi. when i unplug my ethernet, my mouse input is smoother, and as you said, it's exactly like the DPI doubled. i tested several times in cs2 using aim_botz, and i can confirm without placebo that aiming feels way different: faster/snappier when ethernet is unplugged.
funnily enough, i found this issue myself 6 years ago while looking for latency fixes, but at the time i thought it was intended behavior and that all ethernet adapters were supposed to do that because of some network processing on the CPU or something, but i guess not (i don't know why i thought that, tbh).
i'm going to purchase a pci-e ethernet card and let you guys know in this thread how it feels.
[Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
i know this is an old thread, but how can i check which core my device is running on like this?

JimCarry wrote: ↑17 Sep 2024, 20:21
thanks, i changed gpu and usb to core 3.

Slender wrote: ↑17 Sep 2024, 12:39
you can try it.

it worked on my 13700KF, but i think the 6600 has worse single-core perf, so it would be worse.
i didn't check it on an RX card, though; maybe for an amd card the default is better, but for my setup, usb and gpu on the same core is better (+ the nic controller is free on core 0)
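On Windows, a device's assigned processors are stored per-device in the registry as a bitmask (the AssignmentSetOverride value under the device's Interrupt Management\Affinity Policy key; tools like GoInterruptPolicy or the MSI utility display the same data). As a minimal sketch of how to interpret such a mask once you have read the value out, here is some Python (the helper name is mine, not from any tool):

```python
# Sketch: decode an interrupt-affinity bitmask into logical-processor indices.
# The mask itself would come from the device's AssignmentSetOverride registry
# value or from a tool like GoInterruptPolicy; this only does the bit math.

def cores_from_mask(mask: int) -> list[int]:
    """Return the logical-processor indices whose bits are set in the mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(cores_from_mask(0x1))  # -> [0]      (pinned to core 0)
print(cores_from_mask(0xC))  # -> [2, 3]   (binary 1100: cores 2 and 3)
```

Note these are logical processors; with hyper-threading enabled, two adjacent bits may be siblings of the same physical core.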
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
installed the card and updated drivers. immediately noticed my mouse input is way faster, just like OP said. i still believe grounding issues are a real thing for other users on here, but this definitely made aiming feel snappier. it did something for sure; this was not placebo.
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
I had a similar problem with my mouse, so I bought a new one, and at some point the problem was gone. What surprised me is that the new mouse came with a ferrite core ring (the old one didn't have one). Is it possible some EMI or something like that was the problem?
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
Still just as good?

jeffeh12133 wrote: ↑24 Jul 2025, 01:24
installed the card, updated drivers. immediately noticed my mouse input is way faster, just like OP said. i still believe grounding issues are a real thing with other users on here, but this definitely made aiming feel snappier, it did something for sure, this was not placebo.
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
sluggishness came back after a day or two, but network performance is better overall. i also ended up trying fiber media converters to convert ethernet to fiber and then back to ethernet (since fiber removes EMI); more marginal things that probably reduce EMI a little, but overall, nah. i'd say my PC still feels 10x smoother at night compared to daytime, which is what i'm actively trying to diagnose/fix.

InputLagger wrote: ↑21 Aug 2025, 01:51
Still just as good?

jeffeh12133 wrote: ↑24 Jul 2025, 01:24
installed the card, updated drivers. immediately noticed my mouse input is way faster, just like OP said. i still believe grounding issues are a real thing with other users on here, but this definitely made aiming feel snappier, it did something for sure, this was not placebo.
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
kyube wrote: ↑30 Aug 2023, 09:37
To adjust core affinities & RSS, you do the following (thanks to Timecard, Amit, JerkoTAi, Duckling, calc and others):

For the 8125B & 8111H [steps for this are the same except the driver] (fixing RSS and core affinities in particular):
Download this driver (for the 8125B)
...
- Change the limit in MSIUtil from 1 to 4
- In GoInterruptPolicy, set "Realtek PCIe GbE Family Controller" from "IrqPolicyAllCloseProcessors" to "IrqPolicySpreadMessagesAcrossAllProcessors" or "IrqPolicySpecifiedProcessors"
- Turn RSS on: go to Control Panel\Network and Internet\Network Connections\, right-click the adapter, then Properties => Configure => Advanced, and enable "Receive Side Scaling"
- Set Queues to 2
- Set RSSBaseProcessor to 2 in the registry (...)
For the i225/i226, look at ...:
To change affinities and make them stick for other 1GbE Intel NICs (i217, i218, i219 in particular):
- Disable RSS if you're using an older NIC driver
- Set RSS queues to 1
- Set RSSBaseProcessor to whatever core you want

What is the reason for using "IrqPolicySpecifiedProcessors" and limiting it to one physical processor vs using it in MSI mode (non-X) and assigning it to that?

Isn't it always better for gaming to assign the NIC to only one physical processor? Thus, wouldn't MSI-only mode be superior in that use case?
From the "Use the Interrupt-Affinity Policy Tool to bind network adapter interrupts to specific processors on multiprocessor computers" section of the official Microsoft guidelines, General Guidelines for Improving Operating System Performance:
We recommend that you disable hyper-threading before configuring IntPolicy on a computer with CPUs that supports hyper-threading. This will ensure that interrupts are assigned to physical processors rather than logical processors. Assigning interrupt affinity to logical processors that refer to the same physical processor will not increase performance and could even degrade system performance.
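For what it's worth, the "specified processors" policy discussed above boils down to two per-device registry values: DevicePolicy, which selects the policy (4 corresponds to IrqPolicySpecifiedProcessors in Windows' IRQ_DEVICE_POLICY enum), and AssignmentSetOverride, the CPU bitmask. A hedged Python sketch of what a one-core assignment looks like (the helper name is mine, and this only models the values; it does not write anything to the registry):

```python
# Sketch: the Affinity Policy values that pin a device's interrupts to one core.
# DevicePolicy = 4 selects IrqPolicySpecifiedProcessors; AssignmentSetOverride
# is a bitmask with one bit per logical processor.

IRQ_POLICY_SPECIFIED_PROCESSORS = 4  # from the IRQ_DEVICE_POLICY enumeration

def affinity_policy(core: int) -> dict:
    """Model the registry values for pinning interrupts to a single core."""
    return {
        "DevicePolicy": IRQ_POLICY_SPECIFIED_PROCESSORS,
        "AssignmentSetOverride": 1 << core,  # only the chosen core's bit set
    }

print(affinity_policy(2))
# -> {'DevicePolicy': 4, 'AssignmentSetOverride': 4}  (bit 2 set = mask 0b100)
```

Tools like GoInterruptPolicy are just editing these same values with a UI on top.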
Re: [Ethernet Onboard-vs-USB-vs-PCIe] I finally found the reason behind my input lag after 6 years
The reason is that RSS & affinities don't behave as intended when you try to manually choose different cores, due to some bug in the driver.

billblurr wrote: ↑10 Sep 2025, 13:15
What is the reason for using "IrqPolicySpecifiedProcessors" and limiting it to one physical processor vs using it in MSI mode (non-X) and assigning it to that?
Isn't it always better for gaming to assign the NIC to only one physical processor? Thus, wouldn't MSI-only mode be superior in that use case?
From the "Use the Interrupt-Affinity Policy Tool to bind network adapter interrupts to specific processors on multiprocessor computers" section of the official Microsoft guidelines, General Guidelines for Improving Operating System Performance:
We recommend that you disable hyper-threading before configuring IntPolicy on a computer with CPUs that supports hyper-threading. This will ensure that interrupts are assigned to physical processors rather than logical processors. Assigning interrupt affinity to logical processors that refer to the same physical processor will not increase performance and could even degrade system performance.
(It was related to hyper-threading; I forget the specifics, however.)
What I was talking about is moving the interrupt handling from core 0 (the default) to any user-specified core (with MSI assumed enabled).
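On the hyper-threading point in the Microsoft quote above: with 2-way SMT, each physical core shows up as two logical processors, so an affinity mask that sets both siblings of one physical core gains nothing over setting just one of them. A small sketch, assuming the common enumeration where physical core n maps to logical processors 2n and 2n+1 (this mapping varies between CPUs, so treat it as an assumption):

```python
# Sketch: why pinning interrupts to both SMT siblings of one physical core
# is pointless. Assumes physical core n -> logical processors 2n and 2n+1.

def logical_siblings(physical_core: int) -> tuple[int, int]:
    """Return the two logical-processor indices of a physical core (2-way SMT)."""
    return (2 * physical_core, 2 * physical_core + 1)

def mask_for_physical(physical_core: int, include_sibling: bool = False) -> int:
    """Build an affinity bitmask for one physical core's logical processor(s)."""
    lo, hi = logical_siblings(physical_core)
    mask = 1 << lo
    if include_sibling:
        mask |= 1 << hi  # adds the SMT sibling; same physical execution resources
    return mask

print(mask_for_physical(3))                        # -> 64  (0b01000000, logical 6)
print(mask_for_physical(3, include_sibling=True))  # -> 192 (0b11000000, logical 6+7)
```

Both masks land on the same physical core, which is why the guideline says adding the sibling cannot increase performance.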