Chief Blur Buster wrote:
ASUS PG27AQ
4K IPS FreeSync monitor. 60Hz.
Republic Of Gamers - ROG branded
Article:
http://www.pcper.com/news/Displays/CES- ... NC-and-IPS
![Image](http://www.blurbusters.com/wp-content/uploads/2015/01/image2.jpg)
RadonGOG wrote:
The linked article names this one a G-SYNC and not a FreeSync monitor...

G-sync is exclusive of anything else, including any form of secondary or non-DisplayPort input.
jts888 wrote:
Assuming no nasty Adaptive-sync/FreeSync surprises, the sooner G-sync dies the better IMO.

Sparky wrote:
Wouldn't it be better to have two competing solutions constantly trying to outperform each other? There is still a lot of room for improvement.
jts888 wrote:
No.
Sparky wrote:
I'm not talking about minor tweaks, I'm talking about huge improvements that will require a lot of R&D: combining variable refresh and low persistence. It's not going to happen if everybody settles on a single standard and tries to compete on price alone.
jts888 wrote:
Adaptive-sync is the simplest possible (and now the standard) communications protocol to achieve the end result of variable refresh rates. FreeSync is just the hardware/driver on the GPU side feeding any arbitrary, freely licensed monitors.
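To make "simplest possible" concrete, here is a toy model of the protocol's core behavior (my own sketch with invented timing numbers, not anything from the VESA spec): the monitor just stretches the vertical blanking interval until the next frame arrives, clamped to the panel's supported refresh range.

```python
# Toy model of Adaptive-sync: the monitor stretches the vertical blanking
# interval until the next frame arrives, clamped to the panel's supported
# refresh range. Timing numbers are invented for illustration.

MIN_INTERVAL_MS = 1000 / 144   # fastest the panel can refresh (144 Hz)
MAX_INTERVAL_MS = 1000 / 30    # longest it can hold vblank (30 Hz floor)

def next_scanout_time(last_scanout_ms, frame_ready_ms):
    """Earliest legal moment to start scanning out a newly arrived frame."""
    earliest = last_scanout_ms + MIN_INTERVAL_MS   # can't refresh too fast
    latest = last_scanout_ms + MAX_INTERVAL_MS     # can't hold vblank forever
    if frame_ready_ms <= earliest:
        return earliest            # frame came early: wait out the minimum
    if frame_ready_ms >= latest:
        return latest              # frame is late: panel re-shows the old frame
    return frame_ready_ms          # typical case: scan out immediately

# Irregular GPU frame-completion times (ms) -> actual scanout times
t = 0.0
for ready in (5.0, 30.0, 38.0, 90.0):
    t = next_scanout_time(t, ready)
    print(f"frame ready at {ready:5.1f} ms -> scanout at {t:5.1f} ms")
```

Everything else (strobing, overdrive, frame pacing) can sit on top of that one mechanism.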
G-sync is both a proprietary, needlessly complicated bidirectional communications protocol and the proprietary hardware on both GPU and monitor ends. Its cost comes from its FPGA board (~$100-$150), which is essentially a hardware emulator for a custom scaler and allows firmware upgrades to markedly change the logic in the future.
While G-sync monitor logic allows the possibility of variable-refresh-aware pixel overdrive (which appears to be why the buffer is needed), etc., there is no innate technical reason that the custom logic needs to talk to GPUs over the G-sync protocol instead of the simpler, more effective, and now standard Adaptive-sync.
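For a sense of what variable-refresh-aware overdrive would involve, a rough sketch (entirely my own guess at the structure; the table values are invented, and real tables are per-panel calibration data): keep overdrive lookup tables tuned at a few fixed refresh rates and blend between them based on the expected frame duration.

```python
# Sketch of variable-refresh-aware overdrive: blend between overdrive lookup
# tables tuned at fixed refresh rates, using the expected frame duration.
# All values are invented; real tables are per-panel calibration data.

# Overdriven drive level for a grey-to-grey transition, keyed (from, to).
LUT_144HZ = {(0, 128): 160, (128, 255): 255, (255, 128): 96}
LUT_60HZ  = {(0, 128): 140, (128, 255): 255, (255, 128): 112}

def overdrive_level(transition, frame_ms):
    """Interpolate the 144 Hz and 60 Hz tables by expected frame duration."""
    hi, lo = LUT_144HZ[transition], LUT_60HZ[transition]
    # 6.9 ms ~ a 144 Hz frame, 16.7 ms ~ a 60 Hz frame; clamp outside that.
    w = min(max((frame_ms - 6.9) / (16.7 - 6.9), 0.0), 1.0)
    return round(hi + (lo - hi) * w)

print(overdrive_level((0, 128), 6.9))    # fast frames: aggressive overdrive
print(overdrive_level((0, 128), 12.0))   # mid-range: interpolated
print(overdrive_level((0, 128), 16.7))   # slow frames: gentler overdrive
```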
In all likelihood, G-sync and FreeSync chains will have identical latency characteristics, though G-sync will have some faint pixel-transition quality advantage over at least the first generation of Adaptive-sync scalers, at the cost of a $150-or-more premium. I expect Nvidia to pound on these differences once the first Adaptive-sync monitors are released, but it will be tough to convince the market unless they are really substantial.

What G-sync really does is:
- keep AMD/Intel GPUs from feeding Nvidia's fancy scalers
- keep established scaler makers from directly competing against Nvidia's monitor hardware by not feeding them Adaptive-sync
- establish an overall "walled garden" for the entire display chain, locking gamers reciprocally into Nvidia GPUs and monitors: you buy one so as not to lose the cost already sunk into the other
If G-sync pixel overdrive tweaks and special GPU frame pacing etc. were really worth $100 or more, Nvidia could easily unbundle the halves via a monitor firmware update, supporting Adaptive-sync and letting them each freely compete on the market.
But they realistically won't even consider unchaining the G-sync monitors and cards until they've definitively lost the market race.
But G-sync doesn't even do that, and nothing about the proprietary G-sync protocol is necessary to do that.
Sparky wrote:
I don't think you fully understand the problem. To start, you need high persistence at low framerates to avoid flicker and PWM artifacts. You then need to transition smoothly to high-framerate low persistence in order to minimize motion blur, without any perceptible change in brightness or colors. THEN you need to figure out how to do all of those things when you don't know how long it's going to take to get the next frame. I think it's solvable, but it's as much a biology problem as an electronics one.
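To put rough numbers on that constraint, a toy model (my own simplification, which deliberately ignores the human-vision nonlinearities that make this partly a biology problem): perceived brightness tracks duty cycle times backlight intensity, so shortening the strobe at high framerates forces the drive level up, and at low framerates you run out of headroom and have to widen the pulse back toward a steady backlight.

```python
# Toy model of the variable-refresh strobing problem: keep perceived
# brightness (~ duty cycle x backlight intensity) constant while persistence
# changes with framerate. Ignores human-vision nonlinearity, which is a big
# part of why this is hard in practice.

MAX_INTENSITY = 1.0        # normalized backlight drive limit
TARGET_BRIGHTNESS = 0.10   # duty cycle x intensity we want to hold constant
PULSE_MS = 1.5             # desired low-persistence strobe length

def backlight_plan(frame_ms):
    """Return (pulse_ms, intensity) holding perceived brightness constant."""
    duty = PULSE_MS / frame_ms
    intensity = TARGET_BRIGHTNESS / duty
    if intensity <= MAX_INTENSITY:
        return PULSE_MS, intensity           # short strobe, low persistence
    # Out of headroom on long frames: widen the pulse instead, trading
    # persistence (more motion blur) for flicker-free constant brightness.
    return TARGET_BRIGHTNESS * frame_ms / MAX_INTENSITY, MAX_INTENSITY

for hz in (144, 100, 60, 40, 25):
    frame_ms = 1000 / hz
    pulse, level = backlight_plan(frame_ms)
    print(f"{hz:3d} Hz: pulse {pulse:4.1f} ms at intensity {level:.2f}")
```

Where exactly the crossover sits, and how to hide it, is the tuning problem.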
jts888 wrote:
Scaler and/or monitor manufacturers just need to experiment and decide, when a frame update starts coming down the wire:
- for how long (if at all) should the backlight be turned off early, before the start of the redraw?
- following a full-intensity strobe (duration determined by brightness level), should the backlight brightness go immediately to the lower "holding" level, transition smoothly to it, or even be briefly shut off completely?
Those are literally the only factors to worry about, since GPUs can't pre-announce the transmission of a frame, sending them out immediately upon completion anyway; you're just trying to find an optimal tradeoff point between muted motion-blur trails when following moving objects and reduced strobing artifacts when scanning across static content.
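As a purely hypothetical rendering of those two knobs in code (field names and values invented, not from any real scaler firmware), a parameter block plus the backlight schedule it would produce for one frame:

```python
# Hypothetical parameter block for the two tuning questions above, and the
# backlight event schedule one frame update would produce.
from dataclasses import dataclass

@dataclass
class StrobeParams:
    blank_before_redraw_ms: float  # knob 1: backlight-off lead time (0 = off)
    strobe_ms: float               # full-intensity strobe length (brightness)
    post_strobe_mode: str          # knob 2: "step", "ramp", "blank_then_hold"
    hold_level: float              # lower "holding" brightness between strobes
    ramp_ms: float                 # ramp duration when mode == "ramp"

def frame_schedule(p, redraw_start_ms, redraw_ms=5.0):
    """Backlight events (time_ms, action) for one incoming frame update."""
    events = []
    if p.blank_before_redraw_ms > 0:
        events.append((redraw_start_ms - p.blank_before_redraw_ms, "backlight off"))
    t = redraw_start_ms + redraw_ms        # strobe after the panel has redrawn
    events.append((t, "strobe at 100%"))
    t += p.strobe_ms
    if p.post_strobe_mode == "step":
        events.append((t, f"step to hold level {p.hold_level:.0%}"))
    elif p.post_strobe_mode == "ramp":
        events.append((t, f"ramp to {p.hold_level:.0%} over {p.ramp_ms} ms"))
    else:  # "blank_then_hold"
        events.append((t, "backlight off briefly"))
        events.append((t + 0.5, f"hold at {p.hold_level:.0%}"))
    return events

params = StrobeParams(blank_before_redraw_ms=1.0, strobe_ms=2.0,
                      post_strobe_mode="ramp", hold_level=0.25, ramp_ms=3.0)
for when, what in frame_schedule(params, redraw_start_ms=10.0):
    print(f"t={when:5.1f} ms: {what}")
```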
Letting the different manufacturers accept standardized protocol input and come up with competing parameters for the preceding questions is certainly within their talents and is plainly the way forward. This sort of flexibility may well already be within the capabilities of existing scaler ASICs, given that strobing and blur-reduction solutions come from several different vendors now.
This certainly isn't rocket science, and you're drinking too much Kool-Aid if you think this has to be a multi-million-dollar research effort. G-sync's custom FPGA hardware is really only needed to lock the two-way link specifically to GeForce cards and to let Nvidia sell prototype scaler logic to the public at lower cost/risk than possibly-buggy fabricated ASICs.
I don't think any of us enthusiasts fully understand the problem, but as a practicing computer engineer I'm pretty confident I get at least the major relevant concerns, and I still don't think it's rocket science.
Sparky wrote:
What's actually needed to do that research? Tens to hundreds of different panels, different panel technologies, engineers to interface prototype scalers to different panels and calibrate them, test subjects (young people, old people, different types of colorblindness, etc.), and a lot of time to spend working on the problem.

jts888 wrote:
I agree that each panel needs to be independently parameter-tuned, but the problem isn't about research. It's just about scaler ASICs exposing backlight driver timing/level controls via firmware tables, and letting monitor manufacturers tweak those tables to their individual preferences. It escapes me why you think it's necessary for Nvidia to do anything more than sell a flexibly configurable component, unless you actually want them to subsume the monitor manufacturers into component vendors for their own monitors.
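The kind of firmware table that implies might look like the following (layout and numbers invented, only to show how small the tuning surface is): a handful of timing/level rows indexed by frame duration, which the scaler interpolates at runtime and the monitor vendor calibrates per panel.

```python
# Invented example of a vendor-tweakable firmware table: backlight timing and
# level parameters indexed by frame duration, interpolated by the scaler.
# Not any real ASIC's format -- just the shape such a table could take.
import bisect

# (frame_ms, strobe_ms, strobe_level, hold_level): rows a monitor maker
# would calibrate per panel and ship in firmware.
TABLE = [
    ( 7.0, 1.5, 1.00, 0.00),   # ~144 Hz: pure low-persistence strobing
    (11.0, 2.0, 1.00, 0.05),   # ~90 Hz
    (17.0, 3.0, 0.90, 0.15),   # ~60 Hz: start mixing in a hold level
    (33.0, 6.0, 0.70, 0.40),   # ~30 Hz: mostly steady to avoid flicker
]

def lookup(frame_ms):
    """Linearly interpolate table rows for the observed frame duration."""
    keys = [row[0] for row in TABLE]
    i = bisect.bisect_left(keys, frame_ms)
    if i == 0:
        return TABLE[0][1:]
    if i == len(TABLE):
        return TABLE[-1][1:]
    (k0, *a), (k1, *b) = TABLE[i - 1], TABLE[i]
    w = (frame_ms - k0) / (k1 - k0)
    return tuple(x + (y - x) * w for x, y in zip(a, b))

print(lookup(8.5))   # between the 144 Hz and 90 Hz rows
```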
Sparky wrote:
Think about why g-sync exists in the first place. Do you really think Nvidia would have gotten involved in a traditionally low-margin industry if the major display manufacturers didn't need a kick in the pants?

jts888 wrote:
Yes, the industry needed to be shaken up, but G-sync seems to be a dubiously over-engineered approach to the problem.