Re: Pre-rendered frames etc. (continued from G-Sync 101 article)
Posted: 29 Dec 2018, 19:20
pegnose wrote:Before I read your article I never knew about the over-queuing of rendered frames. I myself had only worked with simple front/back buffer flips, one per unit. I would be particularly interested in
- whether this is an intended feature

jorimt wrote:As far as I know, it's exclusive to double buffer V-SYNC, as well as "faked" triple buffer V-SYNC (basically double buffer with a third buffer; no relation to true, traditional triple buffer). G-SYNC is also based on a double buffer, as it is only meant to work within the refresh rate that it adjusts to do its magic.
And no, over-queuing isn't intended; it's simply a limitation of syncing the GPU's render rate to the fixed refresh rate of the display. Others here can probably break down the "why" for you in more detail, as my explanation would likely come across as more conceptual than technical.

That would be great. From a programmer's standpoint, a buffer is an "area" in memory. For it to be used, it has to be allocated and a pointer to it has to be created; this is necessary both for storing data there and for retrieving it. None of this happens accidentally. If there are multiple parallel "areas" in memory able to hold multiple successively created frame buffers, that was coded, and it is therefore likely intentional.
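To illustrate the "this was coded" point about buffers: even a stripped-down model of a swap chain has to allocate every buffer explicitly and track which one is front. This is purely my own toy sketch in Python (the class and method names are made up, not from any real driver or API):

```python
# Toy model of a swap chain: every buffer is an explicitly allocated
# region, and "presenting" just retargets an index (a pointer, in C
# terms). Extra buffers never appear by accident.

class SwapChain:
    def __init__(self, buffer_count=2, width=4, height=4):
        # Each bytearray stands in for a block of video memory; a real
        # driver allocates its buffers up front in this same spirit.
        self.buffers = [bytearray(width * height) for _ in range(buffer_count)]
        self.front = 0  # index of the buffer currently being scanned out

    def back_index(self):
        # The buffer the renderer should draw into next.
        return (self.front + 1) % len(self.buffers)

    def flip(self):
        # Presenting swaps indices, not pixel data.
        self.front = self.back_index()

chain = SwapChain(buffer_count=2)  # classic double buffering
chain.flip()  # front becomes 1
chain.flip()  # and back to 0
```

With `buffer_count=3` the same code models the "faked" triple buffer mentioned above: one more explicitly allocated slot in the rotation, nothing more mysterious than that.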
pegnose wrote:Would it be valid to say that what pre-rendered frames are for the CPU, piling-up rendered frames are for the GPU? A somewhat hidden queue making sure there is always new data to work with or to present?

jorimt wrote:As far as I currently understand it myself (anyone is free to chime in with corrections), think of the pre-rendered frames queue as less of a direct input lag modifier (which is actually a peripheral effect of its primary function), and more as a CPU-side throttle for regulating the average framerate.

I think I have understood a bit more again. The setting is called _Max_ pre-rendered frames. This queue can only be filled if we are not CPU-limited, i.e. if the CPU has free resources. If we _are_ CPU-limited, it won't get filled and can't protect against frame time spikes. Likewise, if we are constantly hitting our in-game or RTSS frame limiter - which basically works by "pausing" the game's relevant CPU threads for a short while - we are effectively CPU-limited, and therefore effectively at a max pre-rendered frames of "0".
So the "safety buffer" on the CPU side - as opposed to the one on the GPU side - is able to always pick the most recent set of data?
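Both halves of that idea, the queue as a throttle and input lag as a side effect, can be seen in a small timing model. This is entirely my own sketch (not real driver logic), assuming fixed per-frame CPU and GPU costs and a "max pre-rendered frames" style cap on how far ahead the CPU may run; it measures how long a prepared frame sits in the queue before the GPU picks it up:

```python
def average_queue_wait(cpu_ms, gpu_ms, max_queue, frames=200):
    """Toy model: the CPU may run at most max_queue frames ahead of
    the GPU; returns the average time (ms) a prepared frame waits in
    the queue before the GPU starts rendering it."""
    cpu_finish = 0.0   # when the CPU finished preparing the last frame
    gpu_finish = 0.0   # when the GPU finished rendering the last frame
    gpu_starts = []
    waits = []
    for i in range(frames):
        earliest = cpu_finish
        if i >= max_queue:
            # Queue full: the CPU stalls until an older frame leaves it.
            earliest = max(earliest, gpu_starts[i - max_queue])
        cpu_finish = earliest + cpu_ms
        start = max(cpu_finish, gpu_finish)  # GPU picks the frame up
        gpu_starts.append(start)
        waits.append(start - cpu_finish)
        gpu_finish = start + gpu_ms
    return sum(waits) / len(waits)

# CPU-limited (e.g. pinned at a frame limiter): the queue never fills,
# frames never sit around -- effectively "0" pre-rendered frames.
print(average_queue_wait(cpu_ms=10, gpu_ms=5, max_queue=3))  # 0.0

# GPU-limited: each frame waits in the queue, and a deeper queue makes
# every frame older by the time it is rendered (the input lag effect).
print(average_queue_wait(cpu_ms=5, gpu_ms=10, max_queue=1))
print(average_queue_wait(cpu_ms=5, gpu_ms=10, max_queue=3))  # larger
```

In the CPU-limited case the cap simply never engages, which matches the point above: at the frame limiter, the setting is effectively moot.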
The less powerful the CPU is in relation to the paired GPU, the larger the queue needs to be in order to keep a steady flow of information being handed off from the CPU to the GPU, which, in most cases, equals a lower average framerate.
This is why in instances where the CPU is more of a match to the GPU, you'll find people reporting that a pre-rendered frame queue setting of "1" actually increases their average framerate (if only ever so slightly) when compared to higher queue values, as the higher queue values actually begin to throttle the average framerate (a.k.a. slow the CPU's hand-off to the GPU) unnecessarily. Whereas for weaker systems (where the CPU is less of a match to the GPU), lower values may decrease performance and/or cause more frametime spikes (the complete absence of new frames for one or more refreshes at a time).
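The "steady flow" benefit of a deeper queue can be made concrete by feeding the same kind of toy model a one-off CPU frame-time spike. Again, this is my own simplified sketch under the assumptions above (fixed GPU cost, a cap on how far ahead the CPU runs), not measured behavior of any real system:

```python
def worst_gpu_frametime(cpu_times_ms, gpu_ms, max_queue):
    """Largest gap (ms) between consecutive frames leaving the GPU,
    in a toy model where the CPU may run at most max_queue frames
    ahead of the GPU."""
    cpu_finish = 0.0
    gpu_finish = 0.0
    gpu_starts = []
    finish_times = []
    for i, cpu_ms in enumerate(cpu_times_ms):
        earliest = cpu_finish
        if i >= max_queue:
            # Queue full: the CPU waits for room before preparing more.
            earliest = max(earliest, gpu_starts[i - max_queue])
        cpu_finish = earliest + cpu_ms
        start = max(cpu_finish, gpu_finish)
        gpu_starts.append(start)
        gpu_finish = start + gpu_ms
        finish_times.append(gpu_finish)
    return max(b - a for a, b in zip(finish_times, finish_times[1:]))

# Steady 5 ms CPU frames with one 30 ms spike; the GPU needs 10 ms/frame.
spiky = [5.0] * 10 + [30.0] + [5.0] * 10
print(worst_gpu_frametime(spiky, gpu_ms=10.0, max_queue=1))  # 30.0 -- spike shows
print(worst_gpu_frametime(spiky, gpu_ms=10.0, max_queue=3))  # 10.0 -- absorbed
```

With a queue of 1 the spike reaches the display in full, while the deeper queue keeps the GPU fed with already-prepared frames until the CPU catches up, which is exactly the trade described above: smoother delivery on CPU-weak systems, at the cost of added latency when the queue actually fills.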
So like my article already states, the effects of the "Maximum pre-rendered frames" setting truly "Depends" on the given system, the given game, the given queue setting, and the interaction between the three.