
Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 02:40
by RealNC
This looks like nvidia just trying to downplay AMD. And they're contradicting themselves while doing so: they say they don't know what AMD's tech actually is, yet in the same statement claim nvidia has already had it for a decade?

They should hire someone at nvidia to proofread statements before sending them out...

Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 03:04
by MatrixQW
Is it possible that Intel HD graphics has less input lag than Nvidia or AMD?
I'm thinking that since it's integrated in the CPU, maybe it takes less time to process things.
Has this ever been measured in real tests?
Also, there is no pre-rendered frames option with Intel, so how many is it using?

Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 16:06
by Chief Blur Buster
MatrixQW wrote:Is it possible that Intel HD graphics has less input lag than Nvidia or AMD?
I'm thinking that since it's integrated in the cpu maybe it takes less time to process things.
Short answer: No

Proof: my Tearline Jedi research and my understanding of Present()-to-photons science.

However, there are latency differences at the software driver level, and those have nothing to do with whether the GPU is integrated or not -- integration by itself only produces microsecond differences that are meaningless.

The granularity of the horizontal scanrate is all that matters in practical considerations: single-scanline movements of a tearline are the smallest possible latency granularity, i.e. one unit of horizontal scanrate, which is 1/160000 sec for a 160 kHz horizontal scanrate. Anything less than that is a rounding error -- including the signal propagation difference between in-CPU versus PCI Express, or memory access/latency differences.
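
A quick back-of-envelope sketch of that granularity (Python; the 160 kHz figure is just the example number above):

```python
# Back-of-envelope: the latency granularity of one scanline.
# Assumes a 160 kHz horizontal scanrate, as in the example above.

horizontal_scanrate_hz = 160_000            # scanlines drawn per second
scanline_time_s = 1.0 / horizontal_scanrate_hz

print(f"One scanline = {scanline_time_s * 1e6:.2f} microseconds")
# Output: One scanline = 6.25 microseconds
# Any difference smaller than this (e.g. in-CPU vs PCI Express signal
# propagation) cannot even move a tearline by one scanline.
```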

It's how the software does it (driver level, frame buffers, etc.) that creates latency differences significant enough to matter (and/or to move the location of tearlines) -- even simple nuances such as power management can affect latency (lengthening the time from input read to photons).

Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 17:11
by Q83Ia7ta
I'm sure it's just a gimmick in most use cases. Maybe it will help/work for some games or in some situations. Btw, the pre-rendered frames / flip queue size value is ignored/overridden in most competitive games; it's just set by the game engine.

Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 17:49
by Stitch7
jorimt wrote:For what it's worth:
https://www.pcgamesn.com/nvidia/nvidia- ... lternative
Tony Tamasi, VP of technical marketing, says [Nvidia] has an alternative to [Radeon Anti-lag] too… and has done for over a decade [...] That’s the setting ‘maximum pre-rendered frames’ in the GeForce control panel, which controls the size of the Flip Queue. The higher the number here, the higher the lag between your input and the displayed frame, while lower queue lengths result in less lag but increased strain on your CPU.
1-3 frames at 60fps is more than 15ms.
I think this is something different.
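
As a quick sketch of the arithmetic (assuming each queued frame can add up to one full frametime of lag, which is the usual worst-case reading of the flip queue):

```python
# Worst-case added latency from the flip queue ("maximum pre-rendered frames"):
# each queued frame can add up to one full frametime of input lag.

fps = 60
frametime_ms = 1000.0 / fps                 # ~16.7 ms per frame at 60fps

for queued_frames in (1, 2, 3):
    added_ms = queued_frames * frametime_ms
    print(f"{queued_frames} pre-rendered frame(s): up to {added_ms:.1f} ms added lag")
# 1 pre-rendered frame(s): up to 16.7 ms added lag
# 2 pre-rendered frame(s): up to 33.3 ms added lag
# 3 pre-rendered frame(s): up to 50.0 ms added lag
```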

Re: AMD Radeon AntiLag

Posted: 12 Jun 2019, 22:28
by jorimt
^ Sure, let's hope; if it ends up being something entirely new, obviously none of us can know until they tell us. Just best (educated) guesses at this point.

Re: AMD Radeon AntiLag

Posted: 13 Jun 2019, 13:06
by Chief Blur Buster
That said...

I can confirm there are inefficiencies in current graphics workflows (including VSYNC ON, power-management-induced latency increases, and other factors) that can create additional latencies specific to driver software versions, computer system setups, and graphics vendors.

Virtual reality development has borne this out in pushing innovations for lower latency, including things like building raster-based beam-raced rendering workflows and/or even simply VBI-beamraced pageflipping (like RTSS Scanline Sync). It's theoretically possible that AntiLag is simply a variant of driver-based scanline sync, but I'm not sure. If so, it would be great news that AMD has copied a "Blur Busters suggested invention" under popular demand. ;) Probably not; I am just speculating -- the Present()-to-photons science is a Pandora's box that is very hard to optimize (reducing lag and stutters can often make things difficult for game developers).
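
To illustrate the scanline-sync idea, here is a minimal sketch. The raster query is a simulated stand-in (a real Windows implementation could query the raster via D3DKMTGetScanLine and call the real Present()); the line numbers are assumptions, and this is pure speculation about AntiLag, not its confirmed design:

```python
import time

# Minimal sketch of VBI-beamraced pageflipping (the RTSS Scanline Sync idea).
# Speculative illustration only -- NOT AMD's confirmed AntiLag design.

REFRESH_HZ = 60
TOTAL_LINES = 1125      # total scanlines per refresh incl. blanking (assumed)
SYNC_LINE = 1080        # first VBI line: flipping here avoids a visible tearline

def get_current_scanline() -> int:
    # Simulated raster position derived from the clock; a real version
    # would ask the GPU driver (e.g. D3DKMTGetScanLine on Windows).
    t = time.perf_counter() % (1.0 / REFRESH_HZ)
    return int(t * REFRESH_HZ * TOTAL_LINES)

def beamraced_present() -> None:
    # Busy-wait until the raster enters the blanking interval, then flip.
    # The flip lands in the VBI => tearing-free output without the
    # multi-frame queuing latency of a classic VSYNC ON buffer chain.
    while get_current_scanline() < SYNC_LINE:
        pass
    print("Present() here -- raster is inside the VBI")

beamraced_present()
```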

Maybe that's not it -- who knows what is being done behind the scenes?

I would not be surprised if there are some latency reductions in certain graphics drivers, but actual testing will need to bear this out -- actual hard data.

Re: AMD Radeon AntiLag

Posted: 13 Jun 2019, 14:25
by mello
This is what the AMD rep says about the AntiLag feature: https://youtu.be/uPpdtXFx7gQ?t=232

Re: AMD Radeon AntiLag

Posted: 14 Jun 2019, 09:06
by ko4
mello wrote:This is what the AMD rep says about the AntiLag feature: https://youtu.be/uPpdtXFx7gQ?t=232
Like I said, this will only work when GPU-limited/bottlenecked.

Re: AMD Radeon AntiLag

Posted: 14 Jun 2019, 09:08
by jorimt
mello wrote:This is what the AMD rep says about the AntiLag feature: https://youtu.be/uPpdtXFx7gQ?t=232
^ Yeah, hmm, he still makes it sound like a reduced pre-rendered frames queue for GPU-limited and/or usage-maxed scenarios (which does indeed affect both synced and unsynced), at least under specific (still-to-be-specified) conditions (and possibly only at lower/certain framerates).

That's one of the very reasons we recommend having the framerate limited by an FPS cap (and sustaining it there) wherever possible (regardless of synced/unsynced), if you desire the lowest input lag at all times.
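
The reason the cap helps: it keeps the CPU from racing ahead of the GPU and filling the pre-rendered queue. A minimal sketch of such a limiter (the coarse-sleep + fine-spin split is an assumed common technique, not any specific tool's implementation):

```python
import time

# Minimal frame limiter sketch: cap the framerate below what the GPU can
# sustain so the CPU never queues pre-rendered frames ahead of the GPU.

FPS_CAP = 141                      # e.g. a few fps under a 144Hz refresh
FRAME_BUDGET = 1.0 / FPS_CAP

def limit_frame(frame_start: float) -> None:
    deadline = frame_start + FRAME_BUDGET
    # Coarse sleep until ~1 ms before the deadline (sleep is imprecise)...
    remaining = deadline - time.perf_counter()
    if remaining > 0.001:
        time.sleep(remaining - 0.001)
    # ...then spin the last stretch for precise frame pacing.
    while time.perf_counter() < deadline:
        pass

# Usage inside a render loop (render_frame() is hypothetical):
# start = time.perf_counter()
# render_frame()
# limit_frame(start)
```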

Seems like one of the only potential differences from Nvidia here is that AMD may have found a way to guarantee an override (and/or auto-regulation, tuned for a balance of lowest lag + highest framerate performance) of each game's internal pre-rendered queue value at the driver level -- something that can't currently be said for Nvidia's MPRF setting (which has no guarantee of overriding the behavior in a given game).

That said, I'm still hoping it turns out to be something more revolutionary; I just haven't seen anything that tells me it is (yet...).