What monitor for ~$500? Main priority is CSGO

Ask about motion blur reduction in gaming monitors. Includes ULMB (Ultra Low Motion Blur), NVIDIA LightBoost, ASUS ELMB, BenQ/Zowie DyAc, Turbo240, ToastyX Strobelight, etc.
lexlazootin
Posts: 1251
Joined: 16 Dec 2014, 02:57

Re: What monitor for ~$500? Main priority is CSGO

Post by lexlazootin » 19 Oct 2017, 09:51

mello wrote:you might notice it in your hit registration, sometimes it might feel great, and other times you will see and feel that something is off and you are not getting hits when you should be.
Honestly, I hate when people complain about hit registration in CS:GO, because it's not hit reg, they're just playing badly and need an excuse. The Source engine does a fantastic job with latency differences and packet loss and will compensate one way or another. Unless you have such bad packet loss or variance that you can't move correctly, everything is working perfectly and everything is fair.
mello wrote:I think that all of it will go away when we start getting monitors with very high refresh rates (480Hz/1000Hz) and when frame rate amplification technologies arrive in all GPUs.
Why would high refresh-rate monitors fix the misconception that G-Sync introduces latency?
mello wrote:There are still A LOT of people playing CS:GO on older PCs, so if they can't hit 500FPS or more and are in a 100-200 range (sometimes even less!), then blur reduction and G-Sync are a perfectly viable option for them.
I don't even think low, variable FPS is the ideal way to use or buy G-Sync. I think it's more useful for people who can hold a steady high FPS. I would cap my CS:GO at 150fps (on a 154Hz refresh) and just play like that; no fluctuations and no tearing is by far the best way to play, with only 1-3ms of total added latency.
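As a rough back-of-the-envelope illustration of why the cap sits a few fps below the refresh rate (simple arithmetic, not a measurement):

Code: Select all

# Illustrative arithmetic only: why a cap slightly below the refresh rate
# keeps G-Sync inside its variable refresh range (numbers from the post above).
refresh_hz = 154
fps_cap = 150

refresh_interval_ms = 1000 / refresh_hz   # ~6.49 ms between refreshes
frame_interval_ms = 1000 / fps_cap        # ~6.67 ms between rendered frames

print(f"refresh interval:      {refresh_interval_ms:.2f} ms")
print(f"frame interval at cap: {frame_interval_ms:.2f} ms")

# Each frame takes slightly longer than one refresh cycle, so the frame rate
# never collides with the refresh ceiling and G-Sync keeps driving the
# refresh from the frames rather than the other way around.
assert frame_interval_ms > refresh_interval_ms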

mello
Posts: 251
Joined: 31 Jan 2014, 04:24

Re: What monitor for ~$500? Main priority is CSGO

Post by mello » 19 Oct 2017, 12:49

lexlazootin wrote:
mello wrote:you might notice it in your hit registration, sometimes it might feel great, and other times you will see and feel that something is off and you are not getting hits when you should be.
Honestly, I hate when people complain about hit registration in CS:GO, because it's not hit reg, they're just playing badly and need an excuse. The Source engine does a fantastic job with latency differences and packet loss and will compensate one way or another.
And here you are completely wrong. Hit registration problems are a real thing, and they have literally nothing to do with the game engine. People complain because headshots or body shots are not being registered despite aiming perfectly at the enemy model, or because you are clearly faster than your enemies (you have better reflexes, you shoot faster, but you still die). What affects hit registration is, for example, error correction on your line (it is called interleaving), among a few other things, and it may be set at different values (low interleaver depth or high interleaver depth). Someone with a higher interleaver depth will experience hit registration problems, but if he can request a change from his ISP, either to fast path (no interleaving = no error correction) or to the lowest interleaver depth, the problem can be fixed instantly and the difference will be visible in the first minute you play the game. Hit registration problems may skill-cap you so badly that the game becomes unplayable and no longer enjoyable. That is why network performance is the most important thing when playing FPS games over the internet. Even the best gaming hardware in the world (a $20,000 rig, for example) will not make you play any better if you are bottlenecked by your internet connection.

If you have never experienced bad registration, you really have no idea what people are talking about. Lucky you! :)
lexlazootin wrote: Unless you have such bad packet loss or variance that you can't move correctly, everything is working perfectly and everything is fair.
Nothing is, or ever was, "fair" when playing FPS games over the internet. And the same goes for many early LAN tournaments (early 2000s), where people played against each other on different computers and monitors (different-sized monitors too!) provided by sponsors and the people who organized the whole thing.

You can have hit reg problems even with low ping, no packet loss (or minimal packet loss) and perfect movement (no interruptions, no teleports). Your connection may seem to work perfectly for everything except FPS gaming. You may experience huge variance in your gaming performance and hit registration on different days of the week or at different hours of the day. That is because of network performance fluctuations that can affect, positively or negatively, how your game feels. It is a very complex topic and a very real thing.
lexlazootin wrote:
mello wrote:I think that all of it will go away when we start getting monitors with very high refresh rates (480Hz/1000Hz) and when frame rate amplification technologies arrive in all GPUs.
Why would high refresh-rate monitors fix the misconception that G-Sync introduces latency?
Because currently you can use G-SYNC only up to a certain frame rate. Which means you are no longer just comparing 144FPS G-SYNC vs 144FPS V-SYNC OFF. Now you are comparing, for example, 240FPS G-SYNC vs 500FPS V-SYNC OFF, and in that case the latency difference is larger simply because of the much higher FPS. That is why many people still prefer V-SYNC OFF (high FPS) over G-SYNC (limited FPS).

High refresh-rate monitors in combination with frame rate amplification technologies will make it possible to play at 480FPS@480Hz or 1000FPS@1000Hz, where the latency difference will be negligible, maybe even for something like V-SYNC ON. I assume people will go for the butter-smooth experience instead of a 1ms latency advantage :)
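To put rough numbers on the latency argument above (a simplified model that treats display-side latency as roughly one frame interval; real latency also depends on the render queue, scanout and input sampling):

Code: Select all

# Simplified model: latency contribution ~= one frame interval (1000 / fps ms).
# These are illustrative figures for the comparison above, not measurements.
scenarios = {
    "144 FPS (G-SYNC cap)":    144,
    "240 FPS (G-SYNC cap)":    240,
    "500 FPS (V-SYNC OFF)":    500,
    "1000 FPS (hypothetical)": 1000,
}

for name, fps in scenarios.items():
    print(f"{name:25s} ~{1000 / fps:5.2f} ms per frame")

# 240 vs 500 FPS differs by ~2.2 ms per frame, while 500 vs 1000 FPS differs
# by only ~1 ms: the gap shrinks as refresh rates and frame rates climb.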

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: What monitor for ~$500? Main priority is CSGO

Post by Chief Blur Buster » 19 Oct 2017, 17:03

Whether "GSYNC introduces latency" is a complex question so both sides are right under certain circumstances.

It all depends on how it is configured and the frame rates you get. Readers who have read GSYNC 101 by Jorim Tapley and other articles know that G-SYNC doesn't introduce latency (and actually reduces it) when properly configured for the right games. That said, overkill frame rates with VSYNC OFF (1000fps VSYNC OFF) can slightly edge ahead of G-SYNC. But you are not going to be getting 1000fps in many games.

That said, the higher the frame rate, the less latency, and ideally you want a GSYNC limit above your game's maximum frame rate to be able to "Have cake and eat it too".

Games that fluctuate 100-200fps will have less latency on a 240Hz GSYNC monitor than a 144Hz GSYNC monitor.

And you can have less lag with 100-200fps fluctuating on 240Hz GSYNC than with 100-200fps fluctuating on 144Hz VSYNC OFF. Which means bumping to 240Hz makes it easier to choose GSYNC over VSYNC OFF.

Having more games run with total frame-rate freedom, without hitting the max VRR range or caps, is a big pro of 240Hz VRR.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


mello
Posts: 251
Joined: 31 Jan 2014, 04:24

Re: What monitor for ~$500? Main priority is CSGO

Post by mello » 22 Oct 2017, 06:20

RealNC wrote:Sure. A fastpath line with 50ms latency is better than an interleaved one with 20ms. Sorry, that's not how it works. Latency is latency. How the packets are buffered doesn't play a role.
This is not what I said. And you are trying to make things simple here when they clearly aren't, which is not the right thing to do. Most of the time, people who do not understand certain things, or have never experienced something, try to make things plain and simple while neglecting the complexity of the problem being described. I really hate that.

And you are obviously wrong in your example, or at least not entirely correct, I should say. Interleaving is not a problem: as long as the interleaver depth is low enough, the gaming experience (hit registration) will not be affected in a major way. It will not be perfect, yes, but it will still be great and perfectly acceptable. The problems start with the higher and highest interleaver depths, which affect FPS gaming and hit registration in a very negative way.

So, no more explaining things by "feel" and so-called broscience.
Let's get straight into the good stuff, actual science. Games use UDP packets.
UDP is suitable for purposes where error checking and correction are either not necessary or are performed in the application; UDP avoids the overhead of such processing in the protocol stack. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system.
Retransmission, essentially identical with Automatic repeat request (ARQ), is the resending of packets which have been either damaged or lost. Retransmission is one of the basic mechanisms used by protocols operating over a packet switched computer network to provide reliable communication (such as that provided by a reliable byte stream, for example TCP). Such networks are usually 'unreliable', meaning they offer no guarantees that they will not delay, damage, or lose packets, or deliver them out of order.
Reasons for a UDP delay and UDP slow response time:
- High network load
- The system that is being communicated to is too busy, or overloaded
- The connection is being made over an overloaded line

UDP Retransmissions:
Although UDP is a connectionless protocol, applications do retransmit UDP packets. Different applications use different methods for determining retransmissions (such as timers or sequence numbers). UDP retransmissions occur for a number of reasons, including when a packet is lost, dropped or otherwise missing.
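To make the mechanism in that quote concrete, here is a minimal sketch of application-level UDP retransmission driven by a timer (my own illustration, with a hypothetical address and payload; nothing here comes from the quoted text or from any specific game):

Code: Select all

import socket

def send_with_retry(sock, data, addr, timeout=0.05, retries=2):
    """Send a datagram and retransmit it if no application-level ack
    arrives within `timeout` seconds. UDP itself never retransmits;
    any retry logic like this lives entirely in the application."""
    sock.settimeout(timeout)
    for attempt in range(retries + 1):
        sock.sendto(data, addr)
        try:
            reply, _ = sock.recvfrom(2048)   # wait for the ack
            return reply, attempt
        except socket.timeout:
            continue                         # no ack in time: resend
    return None, retries                     # give up; the data is stale by now

# Hypothetical usage:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# reply, attempts = send_with_retry(sock, b"player_update", ("203.0.113.10", 27015))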
lexlazootin wanted proof, and this is as close as I can get to one without describing how "bad registration" feels and looks on the screen. I could highlight the most important things from these quotes, but I assume people can do that for themselves. The only thing missing here is how interleaving affects UDP packets, FPS gaming and hit registration, but it is pretty straightforward based on these quotes.

So now, how is it that fast path gives us perfect hit registration? How is it that low interleaving doesn't alter hit registration to a significant degree? And why do error correction, high interleaver depth and network overload mess with FPS gaming and hit registration in a major and negative way?

In fast path mode, packets are:

- dropped due to errors (packet loss is introduced)
- sent and delivered without delay (no artificial delay injection)
- sent and delivered in perfectly proper order
- retransmitted by the application when necessary (fast path mode does not mess with that)

This perfectly explains why in fast path mode we have perfect hit registration. Someone who has interleaving on their line will see dropped packets in-game when switching to fast path, but it will not be a problem, because the whole gaming experience and hit registration are that much better. Yes, packet loss is not really a problem, as long as everything else is sent and delivered without delay and in the proper order.

In interleaved mode (low interleaver depth), packets are:

- delayed to avoid packet loss (you may still get some packet loss, or none at all)
- sent and delivered with an artificial delay (small delay, low variance)
- sent and delivered in a correct but not perfect order (at least most of the time, due to delay variance)
- retransmitted by the application when necessary (but interleaving and artificial delays might interfere with that in a slightly negative way)

Here you can see how interleaving starts messing up UDP packets, your gaming experience and hit registration. The changes are small, and lower interleaver depths are perfectly acceptable for FPS gaming, because they do not affect game performance in a major way. You may notice hit registration problems from time to time, but it doesn't really affect your overall gameplay.

In interleaved mode (higher interleaver depths), packets are:

- delayed to avoid packet loss completely (if your line is bad, you might still get minimal packet loss)
- sent and delivered with an artificial delay (big delay, high variance)
- sent and delivered in an order that is far from perfect or optimal, due to the big delay variance (high probability)
- retransmitted by the application when necessary (but heavy interleaving and big artificial delays might interfere with that in a very negative way)

Here we are. Enter the "bad hit registration" problem in FPS games, among many other things that negatively impact FPS gaming. Delaying UDP packets is never optimal, but introducing big delays messes with them big time. Packets are heavily delayed, you no longer experience packet loss, but at the same time packets are most likely sent and delivered out of order, so once your packets reach the server they are already late (the server calculates that you missed), and once the server's packets reach you they are already outdated (player positions/actions may be stale). Either of these things happens (or all of them to a certain degree), and this is what causes hit registration problems, among other things. It also perfectly explains why the game feels more random (out of order) when you experience bad hit registration. And heavy interleaving itself most likely interferes with the application's (the game's) UDP retransmission logic.
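As a deliberately crude toy model of that staleness argument (the delay and jitter figures below are made-up stand-ins for different interleaver depths, not real line data; a packet counts as "late" if it arrives more than one 64-tick server interval after it was sent):

Code: Select all

import random

TICK_MS = 1000 / 64      # ~15.6 ms between updates on a 64-tick server
N = 100_000

def late_fraction(base_delay_ms, jitter_ms, seed=1):
    """Fraction of packets whose one-way delay (base + uniform jitter)
    exceeds one server tick, i.e. they arrive at least a tick stale."""
    rng = random.Random(seed)
    late = sum(
        1 for _ in range(N)
        if base_delay_ms + rng.uniform(0, jitter_ms) > TICK_MS
    )
    return late / N

for label, base, jitter in [
    ("fast path (no added delay)",          5, 1),
    ("low depth (small delay, low jitter)", 9, 4),
    ("high depth (big delay, high jitter)", 14, 9),
]:
    print(f"{label:38s} {late_fraction(base, jitter):6.1%} of packets >1 tick old")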

How does network overload affect FPS gaming and hit registration?

Simple. Your ISP's network works in such a way that everything is automatic. Devices and software constantly monitor your line parameters and make changes on the fly to keep the connection reliable and stable. "Gaming traffic" is not a thing for your ISP, for the devices in the network, or for the software running all of it; it is completely ignored and disregarded. What matters is overall network stability and reliability.

Most people experience hit registration problems to different degrees. The problem also randomly goes away and shows up again. This is because of network performance fluctuations, driven by network usage, meaning how many people are using the internet in your area/city at the time you play. Your ISP's devices and software monitor your line at all times and change various parameters based on what is reported: if you get more errors and more packet loss, they increase error correction, interleaving and delays on the line; if the network load is light, meaning no errors (or a small number of errors), they remove limitations and decrease interleaving. This is why people randomly experience "good hit registration" (no problems at all) and "bad hit registration".

I even suspect that overall high network load and overloaded lines play a big role in the "hit registration" problem, even if you take interleaving (!) out of the equation. Why? Because if there were only one person using the internet in your area/city (you), then you would have a perfect gaming experience and hit registration regardless of the network parameters applied to your line at that time. How would that be possible? Because there would be zero interference on the line (cables), it would perform to the fullest of its capabilities with no errors, and the only limiting factor would be the ping time.

----

There are a few slight assumptions here and there, but it perfectly supports the science, everything I said, and what people around the world are experiencing.

Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: What monitor for ~$500? Main priority is CSGO

Post by Chief Blur Buster » 22 Oct 2017, 15:22

Everyone, (RealNC, you too)

It's not that simple.

Good points are being made by multiple parties, but they neglect to consider additional factors.

Where I am in Canada, ten years ago, I was able to turn Fast Path on/off on an ADSL2 connection at will. Ping variability goes up/down with these modes on older pre-VDSL modems! It makes a huge difference in certain things. It can mean the difference between super-bullet effects (chunked packets) and no super-bullet effects (consistently spaced packets).

At a smaller delta, a higher-latency fastpath connection can outperform a slightly lower-latency interleaved one. This is due to less packet chunking, which can cause surge effects (e.g. superbullets in certain games). Plus dozens of other subtleties not worth going into here.

As a rule of thumb, one prefers a consistent-ping internet connection that is just a few milliseconds higher in average latency over a highly-variable-ping internet connection. The threshold of what is preferred (more frags in competitive gaming) varies quite a lot, and is very dependent on the game, latency volatility, server tick rate and router buffering behavior. There are so many variables.
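A quick sketch of why the steadier connection can win despite the higher average (entirely made-up ping traces, purely to show how the worst-case tail behaves):

Code: Select all

import random, statistics

random.seed(0)

# Hypothetical ping traces in ms: a steady line a few ms slower on average
# versus a volatile line that is faster on average but spikes.
steady   = [random.gauss(38, 1.5) for _ in range(1000)]
volatile = [max(random.gauss(32, 12), 15) for _ in range(1000)]

for name, trace in [("steady, higher average", steady),
                    ("volatile, lower average", volatile)]:
    p99 = sorted(trace)[int(0.99 * len(trace))]
    print(f"{name:24s} mean={statistics.mean(trace):5.1f} ms  "
          f"stdev={statistics.stdev(trace):4.1f} ms  p99={p99:5.1f} ms")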

But there is indeed overlap. A good example is where one would prefer to play on a reliable (big SNR) 50Mbps VDSL2 connection on interleaved rather than on an old 1Mbps ADSL connection on fastpath, especially one with bad SNR causing lost packets / momentary game freezes, etc.

That said, if you had a choice of turning interleaving on/off on a 5Mbps classic ADSL connection, on an old modem, on the same connection, over the same traceroute, to the same server, and you have enough SNR, then fast path can definitely be highly preferable (on a sufficiently high-SNR connection) with very clearly huge, human-noticeable latency savings. (I once saved 20ms with FastPath -- that's bigger than 1/64sec.) Changes to SNR will affect the FastPath-versus-Interleaved competitive advantages (lost packets, freezes, ping variability, etc) and flip the tables around.

Perhaps we should ask Battle(non)sense about these subtleties of network differences caused by Interleaved/FastPath, especially as they affect older DSL connections with older modems, where changing to FastPath can sometimes save several dozen milliseconds off the link (at sufficient SNR margins). On a 6Mbps ADSL1 (original), on a certain modem about ten to fifteen years ago, I've seen FastPath change 40ms to 7ms. A whopping >30ms latency saving is big enough to drive a truck through.

But changing the interleave depth on a VDSL2 connection (especially those 100Mbps-attainable, 50Mbps-synced ones -- good SNR ratio) changes ping only slightly, and ping variability changes a lot less than in the old ADSL days. It's not nearly as dramatic as it used to be, unless you're dealing with an old AT&T area or a country stubbornly sticking to older DSL standards and crap lines, which veers the advantage toward one or the other (FastPath vs Interleave). Also, VDSL2 doesn't have a FastPath, though it does something functionally similar with an interleave depth of 0 -- which can save approximately 10 milliseconds on a good connection -- not nearly as much as in the olden ADSL1 days. So it's much harder to consistently benchmark the pros/cons, given that if you keep your speed fixed, FastPath packet loss becomes much worse than Interleaved reliability (even at 10ms higher lag). The crossover point of preference is very hard to predict.

What this means is there's really no hard-and-fast rule about FastPath-vs-Interleaved outperforming each other. I've seen it happen both ways.

In short, all sides have points (to varying extents). Fastpath versus interleave is not exactly clear-cut; there ARE situations where one or the other performs better for online gaming, but it is so riddled with variables, and with old, outdated info, that it is easy to statistically spin it to your forum-debate advantage on either side.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


RealNC
Site Admin
Posts: 3730
Joined: 24 Dec 2013, 18:32
Contact:

Re: What monitor for ~$500? Main priority is CSGO

Post by RealNC » 23 Oct 2017, 09:24

He's not talking about packet loss. He's saying that interleaving on its own affects hit registration. In other words, that the redundancy/error-correction algorithm of interleaving affects the game because packets are sent in batches rather than on their own.

To put it another way:

100% stable, 0 packet loss, 0 variance fast path 50ms - Good
100% stable, 0 packet loss, 0 variance low-depth interleaved 20ms - Bad (just because it's interleaved.)
100% stable, 0 packet loss, 0 variance high-depth interleaved 40ms - Really bad (just because it's interleaved.)

Which I call shenanigans on :P
Steam · GitHub · Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

mello
Posts: 251
Joined: 31 Jan 2014, 04:24

Re: What monitor for ~$500? Main priority is CSGO

Post by mello » 23 Oct 2017, 12:54

Great post Chief, I will reply to what you said later as you are making really good points, but for now I would like to quickly address what RealNC said.
RealNC wrote:He's not talking about packet loss.
I would assume that packet loss doesn't affect hit registration (and coincidentally, it looks like it may even make it better!), at least not in the same sense that interleaving does. As long as the packet loss % is low enough, it should not interfere with online FPS gaming or with anything else you do. On lines with lots of errors and the highest interleaver depths, going to fast path may make your connection unstable and unusable. There is always a sweet spot, and avoiding the extreme ends on both sides is preferable.
RealNC wrote: He's saying that interleaving on its own affects hit registration.
Because it does (though it isn't the only thing that affects it), and there is enough proof of that. I obviously don't know the extent of this effect on every possible line/connection (ADSL, ADSL2, VDSL, VDSL2) and speed, for example 2Mb, 5Mb, 10Mb, 20Mb, 50Mb, 80Mb, but on slower lines, switching from a higher interleaver depth to a lower interleaver depth or to fast path improves hit registration instantaneously (as long as you are not getting crazy packet loss and the line remains stable, that is).

And the science clearly tells us this: "Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system." This clearly shows that it is better to drop packets altogether (fast path does that) than to artificially delay or retransmit them. So it makes perfect sense that interleaving directly interferes with gaming and the proper delivery of packets to a certain degree.
RealNC wrote: In other words, the redundancy/error-correction algorithm of interleaving affecting the game because of packets being sent in batches rather than on their own.
"Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system." - packet loss is reduced or it disappears completely as long as you keep increasing the interleaving depth. What happens then with all these time sensitive packets that should be delivered as fast as possible without any interference ? They are getting delayed and/or retransmitted but when they reach their final destination some of them are already late (obsolete, out of order) which creates glitches in your online gaming experience. It makes perfect sense and confirms what people are experiencing.
RealNC wrote: To put it another way:

100% stable, 0 packet loss, 0 variance fast path 50ms - Good
100% stable, 0 packet loss, 0 variance low-depth interleaved 20ms - Bad (just because it's interleaved.)
100% stable, 0 packet loss, 0 variance high-depth interleaved 40ms - Really bad (just because it's interleaved.)

Which I call shenanigans on :P
Again, it is not that simple, and interleaving is only part of the problem, not the whole problem. Your example is bad because it all depends on the scenario we use. Fast path reduces ping times, and going higher and higher with interleaving increases ping times. Interleaving is introduced because the line is not stable and there are a certain number of errors in a given time (possibly counted daily; I doubt it's per hour). In theory, fast path should always be superior as long as:

- the line is stable and the packet loss % is low enough
- there is no huge variance in ping times

And I never said that interleaving is "BAD"; I said that as far as FPS gaming is concerned it is not optimal or perfect, and most of the time it is the only thing you end up with anyway (basically all lines now are interleaved, as it improves overall network stability). Also, going from a higher interleaver depth to a lower one can make a huge difference (a night-and-day difference) in your gaming and hit registration; you don't need fast path for that. But there is a number (depth number) at which problems start to occur, which I assume is entirely dependent on the line: how long it is, its speed and parameters, the number of errors and a few other factors (like network load).

You should also be aware that it is not always about ping times (!), as you can get vastly different gaming performance (and hit registration) within a few-hour window, without any changes to your ping time or to the network parameters reported by your router (!). This is exactly why people say that this problem randomly goes away and shows up again at any given time. Why that is and why it happens I explained in my earlier post; go read the part "How does network overload affect FPS gaming and hit registration?".

So, the main point I am trying to make here regarding your example... the question I ask is:

"Can you have great hit registration on a line with a high interleaving depth?"

And my answer is:

"Yes, you can have great hit registration with a high interleaving depth. But it will only happen when the network load (internet usage, meaning how many people are using the internet at the time you play) in your area/city is low enough, and when the lines are not overloaded. So the change will be temporary. And this is why there tends to be huge variance in perceived hit registration, and a seeming randomness to this problem."

RealNC
Site Admin
Posts: 3730
Joined: 24 Dec 2013, 18:32
Contact:

Re: What monitor for ~$500? Main priority is CSGO

Post by RealNC » 23 Oct 2017, 15:22

I'm not sure how interleaving plays into any of that. As you said:

"Time-sensitive applications often use UDP because dropping packets is preferable to waiting for packets delayed due to retransmission, which may not be an option in a real-time system."

It doesn't matter whether it's preferable to drop packets or not, because you're never going to drop any packets at the DSL level. UDP and TCP are two layers above the DSL level (the ATM layer sits in between), which is where interleaving operates. Interleaving works between your modem and the cabinet (or DSLAM) the modem connects to. TCP packet re-transmission has nothing to do with DSL interleaving. You cannot drop TCP packets (nor UDP packets) between your modem and the DSLAM. There is always going to be a re-transmission if there's an error, because you're not actually sending TCP or UDP to the DSLAM. In fact, fast path results in more re-transmissions than interleaved, because errors are not recoverable, while interleaving does recover from errors without requiring re-transmission. So going by what you've said, interleaving would be better than fast path for hit reg.
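A toy model of that trade-off (my illustration, with made-up error rates and one-way delays; real DSL error behaviour is far messier): a line error either drops the frame outright on fast path, or gets repaired by interleaving/FEC at the cost of a fixed extra delay.

Code: Select all

import random

def simulate(error_rate, interleaved, n=100_000, seed=2):
    """Toy model: each packet crosses a DSL hop with the given frame error
    rate. Fast path: an error means the UDP packet is simply lost.
    Interleaved: FEC repairs the frame, at the cost of a fixed extra delay."""
    rng = random.Random(seed)
    base_delay = 20.0 if interleaved else 10.0   # ms, illustrative figures only
    lost = 0
    for _ in range(n):
        if rng.random() < error_rate and not interleaved:
            lost += 1
    return lost / n, base_delay

for mode, interleaved in [("fast path  ", False), ("interleaved", True)]:
    loss, delay = simulate(error_rate=0.02, interleaved=interleaved)
    print(f"{mode}: {loss:5.1%} packet loss, ~{delay:.0f} ms one-way delay")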

I've read what you wrote, but I have a hard time imagining how DSL interleaving could affect the TCP/IP layer in any way other than just latency :-/
Steam · GitHub · Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

gimmejokers
Posts: 5
Joined: 29 Jul 2017, 12:37

Re: What monitor for ~$500? Main priority is CSGO

Post by gimmejokers » 23 Oct 2017, 15:59

hi guys, seeking advice for CS:GO on my XL2540
I play only CS:GO on it and have a 1000Hz mouse. When everything runs well I really love it. But something feels inconsistent with performance, and I'm really sensitive about it.

First I had problems with leaving the FPS capped at the default 300: I had hitreg problems because of some kind of input lag. Then I uncapped it and it became way better, but sometimes it still feels inconsistent. I noticed that the game feels best on this monitor when I'm over 300fps (around 330-400), but my PC can't handle it; some maps drop to about 220fps in specific spots. I have a G1 Gaming GTX 970 (fastest 970) and an i5 4670 @ 4.2GHz. I have raw input on. I don't use any pseudo-tweaks in my launch options or my config.

Can someone share any thoughts and settings on it? I still can't understand the CS fps limiter. Some say go over, some say go under. Can someone demystify it for me please :D Maybe I'm just crazy but I would like to prove myself wrong ;D

lexlazootin
Posts: 1251
Joined: 16 Dec 2014, 02:57

Re: What monitor for ~$500? Main priority is CSGO

Post by lexlazootin » 23 Oct 2017, 20:51

Uncapped FPS will always feel better: less latency, less noticeable tearing. But it's hard to have that all the time unless you spend a lot on your system. You can try playing at lower settings or at a lower resolution.

Another option is using G-Sync/FreeSync and capping the game below your maximum refresh rate to get a super-consistent, no-tearing gaming experience with little to no latency.
