5800x vs 12700K competitive fps player

Eonds
Posts: 262
Joined: 29 Oct 2020, 10:34

Re: 5800x vs 12700K competitive fps player

Post by Eonds » 28 Jun 2022, 11:19

Boop wrote:
27 Jun 2022, 11:31
Eonds wrote:
27 Jun 2022, 00:50
Boop wrote:
26 Jun 2022, 04:40
https://cpu.userbenchmark.com/Compare/A ... 4085vs4119

Maybe you'd get 10-20% FPS improvement but any input lag would be negligible. I think it's more likely he had something wrong with his Windows OS or something in his config causing the input lag.
That's incorrect. It's not about FPS. FPS isn't latency.
1000fps = 1ms
500fps = 2ms

Higher FPS changes your overall system latency. If you're talking about comparing CPU A to CPU B with a framerate cap then I can see your point.
Obviously... but FPS isn't latency. It's called Frames Per Second. If you understood exactly what that means, you wouldn't have written that reply. An average 10-20% FPS boost doesn't mean anything; I'd like to see the 0.1% lows if anything. We're talking about architectural latency & improvements.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: 5800x vs 12700K competitive fps player

Post by Chief Blur Buster » 29 Jun 2022, 21:52

Eonds wrote:
28 Jun 2022, 11:19
Boop wrote:
27 Jun 2022, 11:31
Eonds wrote:
27 Jun 2022, 00:50
Boop wrote:
26 Jun 2022, 04:40
https://cpu.userbenchmark.com/Compare/A ... 4085vs4119

Maybe you'd get 10-20% FPS improvement but any input lag would be negligible. I think it's more likely he had something wrong with his Windows OS or something in his config causing the input lag.
That's incorrect. It's not about FPS. FPS isn't latency.
1000fps = 1ms
500fps = 2ms

Higher FPS changes your overall system latency. If you're talking about comparing CPU A to CPU B with a framerate cap then I can see your point.
Obviously... but FPS isn't latency. It's called Frames Per Second. If you understood exactly what that means, you wouldn't have written that reply. An average 10-20% FPS boost doesn't mean anything; I'd like to see the 0.1% lows if anything. We're talking about architectural latency & improvements.
FPS does affect lag.

Higher frame rate means lower lag: the GPU takes 1/240 sec to render a frame when a game is running at 240 fps.

Latency is a chain:

[Image: the input lag chain diagram]

Frames per second is a weak link in the latency chain, since lower frame rate = GPU took more time to paint a new frame.
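A minimal sketch of that fps-to-frametime relationship, in Python (purely illustrative; these are only the averages implied by the frame rate, not measured per-frame values):

```python
# Minimal sketch (illustrative): the fps <-> frame time relationship described
# above. Real frame times vary from frame to frame; this is only the average.

def frametime_ms(fps: float) -> float:
    """Average time spent producing one frame, in milliseconds."""
    return 1000.0 / fps

for fps in (1000, 500, 240, 67, 50):
    print(f"{fps:>4} fps  ->  {frametime_ms(fps):5.2f} ms per frame")
```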

<ELI5>

Wonder why they call it "frame"?
Wonder why they call it "paint a polygon"?
Wonder why they call it "draw a frame?"
Etc. They use drawing/painting/artist metaphors, for good reason.

So let's expand the metaphor.

Metaphorically, the GPU is like a high-speed artist that draws (paints) a new frame. 67 fps means the GPU took 1/67 sec to paint the frame (the painting happens hidden inside the GPU chip and GPU memory) before it could be delivered to the screen, drawing all those textures, triangles, squares, polygons, etc.

The math inside a GPU is shockingly similar to what was used at a drafting table of the 1950s; billions of trigonometry calculations per second are used to perspective-correct all the pixels, textures, and polygons that constitute a 3D-rendered scene. And the way things are parallelized now means that an RTX 3080 Ti is capable of over 1 trillion floating-point (numbers with a decimal) math operations per second. Math driving an automated artist called a GPU.

1 teraflops = 1 trillion flops
F.L.O.P.S. = Floating Point Operations Per Second
Floating Point = a math decimal point that moves left and right along a number,
such as 67.31243752 versus 67312437.52

All RTX GPUs on the market do many hundreds of billions of math calculations per second in order to do their art: figuring out where to begin drawing, from (X,Y,Z) to (X,Y,Z), and using trigonometry to rotate things. The GPU is blasting through millions and billions of tangents, sines, cosines, and their arc (inverse) equivalents, just to help you enjoy your game.

That adds up to many thousands of math calculations for just ONE PIXEL
Read again:
Thousands of math calculations for just ONE PIXEL
Of just one frame.

GPUs are fast, but math takes a finite amount of time.
With GPU being a math artist, it takes time for GPU to paint-by-math.
So it takes a finite time to finish all the math and draw everything, long before it even thinks about sending the pixels to the DisplayPort output (also a finite-time manoeuvre). One frame in Cyberpunk 2077 is impossible at less than approximately 8 billion math calculations, at minimum detail level; Ultra detail requires a lot more. One CS:GO frame still takes many dozens of millions of calculations to finish, before doing the next frame. Try doing that with a pocket calculator! Even though the GPU is fast, it's not infinitely fast.

Yes, you can cap the frames, which means you're idling the GPU before letting it draw a new frame.

But in general, if you're not capping, the GPU will paint the next painting (Frame #2) right after finishing the previous painting (Frame #1). The GPU is a serial artist that finishes a lot of paintings per second.

The more time the GPU spends painting a frame = the fewer frames per second. It can't spray as many picture frames per second if it takes too long to draw a single frame.

If a GPU can't finish a frame in less than 1/50sec, it is impossible for the GPU to do more than 50 frames per second (50 times 1/50sec = 1 second).

The boss is the game engine. It's the manager that bosses your GPU around, telling it to paint "this and that": polygons, textures, pixels, sprites, and whatever the game needs. The framebuffer is the artist's canvas the GPU is drawing to (hidden inside your GPU).

The software developer creates the game engine. The game engine bosses the artist (GPU) around like a Frankenstein merger of your math teacher and your art teacher (into one entity) talking in binary. The game engine is the manager for your GPU, telling it what math it needs to do -- the billions of math calculations per second required to be the artist of a single frame. But it can only do one frame at a time. And it takes a finite amount of time for the GPU to draw the frame.

The more time the GPU spends painting a frame = the more lag before that frame gets delivered to the display pipeline (via the currently configured sync technology, VSYNC OFF, VSYNC ON, GSYNC, double buffer, triple buffer, fast sync, enhanced sync, RTSS Scanline sync, whateversync, etc). The sync technology happens AFTER the GPU finishes painting a frame.

But wait, it gets complicated: at 100% GPU utilization, lag can sometimes increase because the GPU is too busy to do other things (e.g. garbage collection, or accepting new mouse deltas), and then the workflow lags badly from small software interruptions.

Metaphor version: remember, the boss is still the game engine. And sometimes, if the GPU is 100% busy, it's hard for the boss to interrupt the GPU with new instructions, so lag spikes a little. When the GPU is 100% busy and rushing frames out as fast as it can, the sync technology can also fall behind (e.g. buffering lag), and the CPU spends more time impatiently waiting for an overloaded GPU, because a boss can't easily interrupt a worker that's hurrying at 100% workload without increasing lag.

You can have bad display lag with good GPU lag (240fps on a super-laggy 60Hz display). You can have good display lag with bad GPU lag (37fps on an ultra-fast 240Hz display). But it's true that uncapped 240fps 240Hz has less lag than uncapped 37fps 240Hz, because the GPU took less time to paint each frame. Yes, you want to lower the GPU portion of your input lag by increasing your frame rate, without overloading your GPU.

You have to realize that when people talk about lag, the lag is not one item; it's the total of multiple things in a latency chain. Yes, GPU lag (framerate lag) is not the same thing as display lag, but it's part of the button-to-photons lag.
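As a rough worked example of the chain idea, here is a Python sketch with made-up placeholder numbers for every link except the GPU render time, which is derived from the frame rate; real values depend on your mouse, game, sync setting, and panel:

```python
# Back-of-the-envelope sketch of a button-to-photons latency chain.
# Every per-link number below is an assumed placeholder, not a measurement.

def chain_total_ms(fps: float, other_links_ms: dict) -> float:
    """Sum the chain: fixed links plus the GPU render time implied by the frame rate."""
    gpu_render_ms = 1000.0 / fps
    return gpu_render_ms + sum(other_links_ms.values())

links = {
    "mouse (poll + debounce)":    1.0,   # assumed
    "CPU / game engine":          2.0,   # assumed
    "sync / queueing":            1.0,   # assumed (roughly VSYNC OFF-like)
    "display scanout + response": 5.0,   # assumed
}

for fps in (240, 37):
    print(f"{fps:>3} fps uncapped: ~{chain_total_ms(fps, links):.1f} ms button-to-photons")
```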

</ELI5>

TL;DR: More frames per second is generally less latency*
*Only if the GPU is not overworked
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


Eonds
Posts: 262
Joined: 29 Oct 2020, 10:34

Re: 5800x vs 12700K competitive fps player

Post by Eonds » 30 Jun 2022, 02:25

Chief Blur Buster wrote:
29 Jun 2022, 21:52
[...]

TL;DR: More frames per second is generally less latency*
*Only if the GPU is not overworked
I figured I'd made it clear that FPS is its own thing and can affect latency. FPS can be artificially increased with many different techniques at the expense of latency. FPS itself is an easily manipulated marketing number that consumers fall for. The only relevant metric would be the 0.1% lows, and even that can be manipulated. How about how fast a given frame is delivered? We keep talking about surface-level, seemingly more obvious topics instead of diving deeper into other things.

I think there's more than enough information about FPS and all of the other topics. I think it's time we see more high-quality research conducted about latency. I'll offer to leak lots of information if someone is willing to conduct proper tests with high accuracy. A topic I suggest we all revisit or go deeper into would be GPU & DRAM latency. Sometimes I believe it to be a waste of time, because no one will listen since it's often far outside people's expertise. No one talks about GPU memory latency either.

Then there's the influx of posts that end up clouding the pages which had good information on them. It's also hell to read with a bright white background, and I would love to be able to react to a reply/post with a thumbs up or something like that.
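For reference, here is a small Python sketch of the 0.1% lows metric mentioned above, using the common "average of the slowest X% of frames" convention; benchmark tools differ in their exact definitions, so treat this as one reasonable assumption rather than the canonical formula:

```python
# Sketch: 1% / 0.1% low FPS from a list of frame times (milliseconds).
# Assumption: "lows" = 1000 / (average of the slowest X% of frame times).

def low_fps(frametimes_ms: list[float], percent: float) -> float:
    worst = sorted(frametimes_ms, reverse=True)       # slowest frames first
    n = max(1, int(len(worst) * percent / 100.0))     # number of frames in the tail
    return 1000.0 / (sum(worst[:n]) / n)

# Toy data: mostly 2.5 ms frames (~400 fps) with a few 8 ms spikes.
frametimes = [2.5] * 9900 + [8.0] * 100
avg_fps = 1000.0 / (sum(frametimes) / len(frametimes))
print(f"average  : {avg_fps:.0f} fps")
print(f"1% lows  : {low_fps(frametimes, 1.0):.0f} fps")
print(f"0.1% lows: {low_fps(frametimes, 0.1):.0f} fps")
```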

woodyfly
Posts: 91
Joined: 03 Jul 2020, 07:53

Re: 5800x vs 12700K competitive fps player

Post by woodyfly » 01 Jul 2022, 03:15

I did just that. I had a 5800X but felt like I had input lag. Tried a 12700KF to see if it was better, and I FELT like it was better, but there's a good chance it was placebo. It's been a few months, I've switched back and forth, and honestly, I regret buying the 12700KF. It was a waste of money. If there is a difference, it's small and probably placebo. Also, the 5800X pushes higher frame rates for me, at least in CS:GO.

joseph_from_pilsen
Posts: 166
Joined: 01 Apr 2022, 23:51

Re: 5800x vs 12700K competitive fps player

Post by joseph_from_pilsen » 01 Jul 2022, 13:52

The comparison is a bit flawed. I noticed when I compared my 11800H and 5600X that the main issue is the servers. Both of my computers run fine when server variance is low (below 2 ms, server var 0.000) and, logically, both run like shit when the server sucks.
All deathmatch servers (even 3rd party) are laggy.
MM servers are the same; non-laggy servers that don't add input lag are maybe 1 of 20 in MM, 6 of 10 in fct/ESEA.
At LAN you need a very decent machine even for 5v5. If someone just hosts it themselves, it's laggy; you need a dedicated server and not total junk (at least a 3rd-generation Ryzen or 14 nm Intel with dual-channel RAM and an SSD). And I'm talking about 5v5; deathmatch is laggy on almost everything:

Here in central Europe:

Valve MM/DM servers:
I have banned the Warsaw lagserver (this server is a joke and ALWAYS lags, performs like shit with sv 10 ms plus broken connectivity as a bonus) and Stockholm (full of Russians, same as Lagszawa). Unfortunately the addon only works when I play solo; in a lobby it doesn't work, probably all the members need to install the Steam route tool.

Now 3rd party deathmatch servers:

WaSe - the least laggy deathmatch of them all, but it's the one-eyed king among the blind: still noticeable server-side input lag, server variance around 4 ms on a full server (it needs to stay below 2 ms to not affect input). Var (server-side) is also sometimes not zero point zero zero. But it's stable and doesn't suffer from unstable movement, although when the servers are full there is a noticeable skating effect to movement. When the servers are not crowded, you can enjoy non-laggy CS:GO. The servers don't provide the quality you would expect from servers that force you to pay. A few months ago it was better; it's obvious that Valve is incompetent and their game engine gets more CPU/RAM hungry with every patch, not only client-side but also server-side. The servers are permanently crowded, which makes them overloaded.
Epiczone.sk - good shooting input lag, bad movement input lag. The servers are specifically broken because they suffer from a flapping var, so they are constantly trashing all mouse movement. The game lags for a few milliseconds every second, so it's not fluid. SV around 5 ms when full, but with a weird +-2 ms swing, and var is unstable. I did some investigating and they run it on normal PC hardware that is somewhat obsolete (Intel Coffee Lake Refresh and AMD 3x00 CPUs with many instances). That's why it lags.
Cars - average deathmatch servers, sv usually around 5 ms +-1.5 ms in prime time. Not terrible, but far from a real match environment; when the server is full, it stutters hard.
Brutalcs - similar to Cars, average DM; both have slight input lag, both are bad when full.
Polish DM servers - total trash, sv 12 ms and higher, red spikes, unplayable, aim stutters. I absolutely don't understand what the 20 players are doing there. They can't get any better on a server with so much stuttering in aim and movement.

I'm very curious whether non-laggy deathmatch training will be possible when the new generation of CPUs is released; the current one obviously doesn't have enough power to run 20+ slot servers on one core.
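(A small Python sketch of the kind of sv/var bucketing described above; the cutoffs are only the rough impressions from this post, not an official spec, and the sample readings are hypothetical.)

```python
# Sketch: bucketing servers by net_graph-style sv / var readings, using the
# rough thresholds described above (below ~2 ms sv is clean, ~5 ms is average,
# 10+ ms is unplayable). Cutoffs and sample values are assumptions.

def rate_server(sv_ms: float, var_ms: float) -> str:
    if sv_ms < 2.0 and var_ms < 0.5:
        return "clean: no noticeable server-side input lag"
    if sv_ms < 5.0:
        return "average: slight input lag, worse when full"
    return "laggy: stutters, not useful for training"

samples = {                       # hypothetical readings
    "good 5v5 server": (1.5, 0.0),
    "busy DM server":  (4.5, 1.0),
    "overloaded DM":   (12.0, 3.0),
}
for name, (sv, var) in samples.items():
    print(f"{name:16s} sv={sv:5.1f} ms  var={var:4.1f} ms  ->  {rate_server(sv, var)}")
```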

Slender
Posts: 573
Joined: 25 Jan 2020, 17:55

Re: 5800x vs 12700K competitive fps player

Post by Slender » 01 Jul 2022, 14:22

woodyfly wrote:
01 Jul 2022, 03:15
I did just that. I had a 5800X but felt like I had input lag. Tried a 12700KF to see if it was better, and I FELT like it was better, but there's a good chance it was placebo. It's been a few months, I've switched back and forth, and honestly, I regret buying the 12700KF. It was a waste of money. If there is a difference, it's small and probably placebo. Also, the 5800X pushes higher frame rates for me, at least in CS:GO.
Did you configure your 5800X? BIOS settings, OS settings? Some default BIOS settings cause lag, like spread spectrum.
Plus, if your memory is not on your motherboard's QVL list, it may cause mouse lag.

uwinho
Posts: 11
Joined: 11 Sep 2019, 15:51

Re: 5800x vs 12700K competitive fps player

Post by uwinho » 02 Jul 2022, 00:32

Having a 5800X myself, and analysing some benchmarks etc. that compared CS:GO FPS and frametimes for Ryzens and the new Intels, there is most likely no additional input lag for the Ryzens. The 5800X has 32 MB of L3 cache, is single-CCD, and memory latency is good as well. I think the only issue Ryzen has is some struggle with the Source engine, so it tends to have some frame time spikes here and there. I found a frametime comparison where the Ryzen 5600X went into the 6-7 ms range while the 12700 was extremely stable below 4 ms. So what some guys consider "input lag" is most likely a frame time spike in a very unfortunate situation, because they're not that frequent, especially if you play faceit 5on5. Of course the game starts to feel kinda bad on a packed 23-player DM on Dust2, because new D2 has felt like shit on deathmatch since day 1.
I don't think there is anything to worry about with Ryzen's latency, but I would definitely run fps_max 0 to give the CPU the freedom to expand. There are definitely some weird FPS behaviours if you cap it at 400.

And btw, CS:GO was actually broken some weeks ago, with huge frametime spikes every 3 seconds after a weird Steam update. A lot of pros complained as well. It feels like the game still struggles with some frame time problems even after the fix.
Pre "we completely break CS:GO" patch, I actually measured the frametimes on DMs and retakes, and especially on retakes my frame times never went above 4-5 ms I think, and my total input lag was around 10 ms, so that's actually pretty damn good for CS.

And joseph_from_pilsen is definitely right when it comes to servers as well. Most of the DM servers are going to feel like shit as soon as they hit player counts above 18-20.

joseph_from_pilsen
Posts: 166
Joined: 01 Apr 2022, 23:51

Re: 5800x vs 12700K competitive fps player

Post by joseph_from_pilsen » 02 Jul 2022, 10:11

D2 has been broken since the Antwerp major; dunno what they fucked up, but the FPS rollercoasters from 500 to 150 and back at some spots (mostly mid, CT spawn, CT mid). Total trash map; Dust2 used to be the map with the most FPS, now it's the exact opposite.
I'm silently selling my inventory; I hope the game gets its final blow after I'm fully out. Only 1500 EUR remaining to sell, plus 2k EUR in cases which I don't want to sell now because they weren't pumped yet (I'm looking at you, Clutch Case). There is also now a very real risk of a general loot box ban from the EU this year.
Btw, I made a nice profit: my buy-in was 1k EUR, turned into 7k :lol: Mostly by long-hodling kato14 skins and, even more, buying cases for 3 cents and selling at x20 :ugeek:

Eonds
Posts: 262
Joined: 29 Oct 2020, 10:34

Re: 5800x vs 12700K competitive fps player

Post by Eonds » 02 Jul 2022, 12:42

uwinho wrote:
02 Jul 2022, 00:32
Having a 5800X myself, and analysing some benchmarks etc. that compared CS:GO FPS and frametimes for Ryzens and the new Intels, there is most likely no additional input lag for the Ryzens. [...]
How about DRAM latency, core clock speeds, cache latency, and more? It doesn't seem relevant until you realize how many operations occur within a second and you do some basic math. Bad FPS isn't magic; it's just the output of your system's performance (roughly). So if there's something going wrong somewhere, then it MAY show up in certain FPS metrics. All games, i.e. workloads, are different and will SCALE VERY DIFFERENTLY with certain latency reductions. Oftentimes 1080p low-settings competitive games see a dramatic improvement from DRAM latency reduction. If you are able to classify that workload synthetically, at least with some accuracy, it would be a good benchmark. Sometimes your own perception is more reliable than some devices that are incapable of measuring latency over time. At the end of the day, if you feel and believe that your system is performing perfectly and you think it's not holding you back, then great. In reality, though, input latency scales exponentially with aiming (target acquisition & tracking) for most people.
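To make the "basic math" concrete, here is a deliberately simplified Python sketch of how much of a frame budget DRAM round-trips could eat. The miss count and latencies are assumed, illustrative values, and real CPUs overlap and prefetch many misses, so treat it as an order-of-magnitude argument rather than a measurement:

```python
# Simplified sketch: share of a frame budget spent on DRAM round-trips.
# Miss count and latencies are assumed values; real CPUs overlap/prefetch
# many of these accesses, so this is only an order-of-magnitude argument.

TARGET_FPS = 400
FRAME_BUDGET_NS = 1e9 / TARGET_FPS          # 2.5 million ns per frame at 400 fps

def dram_share(misses_per_frame: int, latency_ns: float) -> float:
    """Fraction of the frame budget spent waiting on DRAM, if misses were serial."""
    return misses_per_frame * latency_ns / FRAME_BUDGET_NS

for latency_ns in (60.0, 80.0):             # e.g. tuned vs loose memory timings (assumed)
    share = dram_share(misses_per_frame=10_000, latency_ns=latency_ns)
    print(f"{latency_ns:.0f} ns DRAM latency -> ~{share:4.0%} of a {TARGET_FPS} fps frame budget")
```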

lower latency = better gamer (oversimplified)

DPRTMELR
Posts: 165
Joined: 12 Apr 2022, 13:42

Re: 5800x vs 12700K competitive fps player

Post by DPRTMELR » 02 Jul 2022, 14:13

Normally I know what's what and don't care, but I can't just watch this go on while everybody's stabbing at things in the dark.

Let's get something proper going so people can get a baseline to compare against. You can't even Google this stuff because everybody's on 64-tick ulletical or whatever.


Everything below is for my current system: 10900K / 4400 MHz C17 / 3080 Ti, on Windows 10 with the 0.5 ms timer.

CS:GO settings (from the top, starting with shadows):
low/medium/off/high/high/enabled/yes, 4x AA, 16x AF at 1080p (if enough people hop on this I will rerun it on all low)
Recorded for 2 minutes.


128tick ulletical (128tick, roughly ~15 seconds of idle time after the run)
[Image: frametime graph]

looking out window on spectate mode @ wase7 mirage Dallas USA (24 players)
[Image: frametime graph]

deathmatch play @ wase7 mirage Dallas USA (24 players)
[Images: frametime graphs]


retake 9 players, 1 spec mirage (DM Frenzy no location given)
[Image: frametime graph]


and faceitpug starting from knife until half way into 2nd round iirc on chicago servers dust2
[Image: frametime graph]

I would like to think that every time there was a spike it was because I was pressing the Tab key for the scoreboard or because I was dying, but who knows. I actually don't feel any perceivable lag that makes me feel like I've lost control of my character in game. I do feel slowed down on certain corners of Inferno (not graphed, but eyeballing net_graph FPS shows lower 300s), but no mouse skips, even on death transitions (well, sometimes, rarely :p).
Last edited by DPRTMELR on 08 Jul 2022, 22:22, edited 1 time in total.
Most adults need 7-8 hours of sleep each night. - US FDA
