... I don't know how to be any clearer. Maybe I have to break it down into even simpler words but I feel like I'm repeating myself...
Idle latency is an indicator of how quickly your system can react to an input and then handle it, so it can certainly correlate with the total input lag chain duration (input-to-photon) under certain assumptions. If your absolute lowest Interrupt-to-Process latency is, say, 500 us, then that is literally the lowest possible value for your input lag (and the real figure will certainly be higher once you add game + GPU + display processing).
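To make that "floor" idea concrete, here's a minimal sketch with entirely made-up numbers (none of these figures come from any real system, they're just placeholders): the interrupt-to-process latency is simply the first term in the input-to-photon sum, so total input lag can never drop below it.

Code: Select all

/* Hypothetical breakdown of the input-to-photon chain, all values in ms.
 * The interrupt-to-process term is the hard floor on total input lag. */
#include <stdio.h>

int main(void)
{
    double interrupt_to_process = 0.5;  /* e.g. LatencyMon's idle figure, 500 us        */
    double game_processing      = 4.0;  /* game/engine logic (made up)                  */
    double render_and_gpu       = 6.0;  /* CPU render submit + GPU frame time (made up) */
    double display_processing   = 5.0;  /* scanout + panel response (made up)           */

    double input_to_photon = interrupt_to_process + game_processing
                           + render_and_gpu + display_processing;

    printf("total input-to-photon lag: %.1f ms\n", input_to_photon);   /* 15.5 ms */
    printf("floor from interrupt-to-process alone: %.1f ms\n", interrupt_to_process);
    return 0;
}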
Measuring latency during gaming simply adds to the above-mentioned "quickest, baseline latency" by generating competing interrupts, DPCs, and other processing, so of course you'll see higher latency.
How you interpret this value is completely up to you, and I don't see a clear, standardized way to normalize it across different systems/platforms with different loads and settings.
This has been quoted many times by others:
"Note: it is recommended to close all other running programs before running the interrupt to user process latency test, or before interpreting the value that it reports. This test simulates the workings of a real-time audio process. Unlike other tests that LatencyMon performs, it does not make sense to run this test while an audio program is active. "
https://www.resplendence.com/latencymon ... pt2process
However, I disagree that it makes zero sense: a latency value measured under load could still be used in a DOE-style (design of experiments) approach for general comparisons, with plenty of caution.
Both, unless, as I mentioned, you eliminate contention and have plenty of resources (e.g. dedicated cores).
Then you simply can't measure it. From the above explanations, it should be clear that comparing load latency across 2 different platforms with different loads doesn't make sense. Also, what you should be comparing is input lag, since that is the end goal (aside from a preferred mouse sensitivity/feel).

ffs_ wrote: ↑01 Nov 2020, 07:20
I don't have access to anything of those. You can post your LatencyMon results (idle and in-game) so we could compare numbers and see how "bad" HPET is. I posted my LatencyMon results above and I think they're far from bad despite using HPET.

howiec wrote: ↑01 Nov 2020, 05:40
HPET overhead applies to both idle and load, and hence can increase latency in both cases as well, thus affecting input lag. Whether or not you notice it and/or prefer it on/off in your games or programs is a different thing.
If you are so adamant that HPET:On + useplatformclock:true does not increase input lag at all, simply put it to the test for your case.
Either use an LDAT if you have access to one, a Reflex Analyzer, or very-high-FPS capture with all the correct camera settings, like how Chief does it.
Analogy: If you ask someone to compare AC cooled temperatures inside 2 different homes (different location, volume, insulation, weather, people, appliances, HVAC, etc.) during different "loads" (1 house with an active fireplace and 3 refrigerators running max vs the other with none, etc.), what exactly can you surmise from the different temperatures?
And that's the point. You haven't even measured the input lag. The mouse sensitivity/feel that you experience does not necessarily mean you have lower input lag. It is just different.

ffs_ wrote: ↑01 Nov 2020, 07:20
That's all cool and you can bash HPET as much as you want or "prove" how "bad" it is compared to TSC or other timer(s), but the thing is I didn't say HPET is generally better or faster or gives less ISR/DPC latency than TSC or other timer(s), I only said it gives better mouse input (better input-lag) on my system even if it costs a bit of performance and ISR/DPC latency. I'm aware that in theory TSC should be better, and I can even tell you a secret: I have HPET disabled in BIOS on my old Intel PC because it gives me better results in both latency and input-lag, but for some reason on my Ryzen PC it's vice versa. ¯\_(ツ)_/¯

howiec wrote: ↑01 Nov 2020, 05:40
Why are we still talking about this?
I gave general advice because HPET has increased overhead by design.
Then you mentioned your Ryzen system.
Then I mentioned Intel to be clear that I haven't tested Ryzen firsthand.
Then you said that's why you mentioned Ryzen.
We're saying the same thing: We're on different platforms...
Yes, I can say a word, because just by looking at the actual HPET architecture vs TSC, it's obvious that HPET is slower/more costly to access.
The performance loss may not be as bad on newer Ryzen vs current Intel systems, but HPET still cannot be as "fast" as TSC.
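If you want to see that access-cost difference on your own machine rather than take my word for it, here's a minimal sketch (assumptions: Windows, MSVC, and that your QPC source follows the usual useplatformclock behavior; the exact numbers will vary per system). It times raw __rdtsc() reads against QueryPerformanceCounter() calls, which Windows backs with the TSC by default but routes through HPET when useplatformclock is set to true.

Code: Select all

/* Rough per-call cost of TSC reads vs. QueryPerformanceCounter.
 * Compare the QPC figure with useplatformclock removed vs. set to true. */
#include <windows.h>
#include <intrin.h>
#include <stdio.h>

#define ITERATIONS 1000000

int main(void)
{
    LARGE_INTEGER freq, start, end, now;
    volatile unsigned long long sink = 0; /* keeps the loops from being optimized away */
    double tsc_ns, qpc_ns;
    int i;

    QueryPerformanceFrequency(&freq);

    /* Average cost of a raw TSC read */
    QueryPerformanceCounter(&start);
    for (i = 0; i < ITERATIONS; i++)
        sink += __rdtsc();
    QueryPerformanceCounter(&end);
    tsc_ns = (double)(end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart / ITERATIONS;

    /* Average cost of a QueryPerformanceCounter call
       (TSC-backed by default, HPET-backed with useplatformclock true) */
    QueryPerformanceCounter(&start);
    for (i = 0; i < ITERATIONS; i++) {
        QueryPerformanceCounter(&now);
        sink += (unsigned long long)now.QuadPart;
    }
    QueryPerformanceCounter(&end);
    qpc_ns = (double)(end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart / ITERATIONS;

    printf("avg __rdtsc():                 %.1f ns\n", tsc_ns);
    printf("avg QueryPerformanceCounter(): %.1f ns\n", qpc_ns);
    return 0;
}

Build it with something like cl /O2 timer_cost.c (filename is just an example), run it once with useplatformclock removed (bcdedit /deletevalue useplatformclock) and once with it set to true (bcdedit /set useplatformclock true), rebooting between runs, and compare the QPC line.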
There are various tests and supporting data out there. Heck, even the Linux kernel code bashes HPET to some degree for various related and unrelated reasons.
https://github.com/torvalds/linux/blob/ ... nel/hpet.c