Stizzie wrote: ↑23 Aug 2024, 02:52
Such neat magical trickery.
It's such a minor modification to a 100-year-old raster topology: simply vary the size of VBLANK by appending extra scanlines to the end of the blanking interval ...until the software is ready. Instead of the computer having to sync to the display, the display now syncs to the software in this roundabout way.
Now that being said, as refresh rates get higher (1000fps 1000Hz), the fine granularity of fixed refresh cycles tends to behave more and more like VRR, since you can begin a new refresh cycle at the nearest ultra-fine-granularity fixed refresh slot.
So ironically, as we get to extremely high refresh rates, VRR becomes less useful. Stutters are much more visible at 144Hz than at, say, 1000Hz. 1/144sec = 6.9ms, and a 6.9ms missed VSYNC during 1000 pixels/sec motion is a 6.9-pixel jump. Very easy to see. It's still possible to see VSYNC stutters at 500Hz, but it tends to require faster motion speeds and higher-detail graphics (e.g. 4000 pixels/sec at a 1/500sec error = an 8-pixel jump).
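To put numbers on this, here's a quick back-of-envelope Python sketch of the same arithmetic (the motion speeds and refresh rates are just my illustrative examples):

Code:
# Size of the visible position error when one refresh cycle repeats:
# pixel jump = motion speed (pixels/sec) * duration of the repeated refresh (sec)

def pixel_jump(motion_px_per_sec: float, refresh_hz: float) -> float:
    """Pixels of stutter jump from a single missed VSYNC."""
    return motion_px_per_sec / refresh_hz

print(pixel_jump(1000, 144))   # ~6.9 px -> very easy to see
print(pixel_jump(4000, 500))   # 8.0 px  -> needs fast motion to notice
print(pixel_jump(1000, 1000))  # 1.0 px  -> essentially invisible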
I wrote an article about brute framerate-based motion blur reduction someday replacing flicker-based motion blur reduction. 1000fps 1000Hz combines the benefits of VRR, strobing, flickerfree, and blurfree, all at the same time, while keeping HDR brightness, so it's kind of the Blur Busting holy grail nowadays --
www.blurbusters.com/framegen#dev for the Developer Best Practices. Now, there are reprojection algorithms that avoid double-image artifacts (on sample-and-hold displays) and convert a varying frame rate to a constant framerate=Hz, so that is an alternative method that skips needing VRR. A bit of a brute hammer, but it does have other benefits (motion blur reduction). VR headsets flicker out of necessity because we don't have enough framerate to kill blur strobelessly. Quest 2 flickers at 0.3ms MPRT, which would require 3333fps 3333Hz to do strobelessly.
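The Quest 2 number comes from simple persistence arithmetic: to match a strobed display's motion clarity without flicker, frame visibility time must shrink to the same MPRT, so framerate = Hz = 1/MPRT. A hedged sketch (this is the idealized sample-and-hold model, not a measured spec):

Code:
# Idealized sample-and-hold model: strobeless motion blur equals frame
# visibility time, so matching a strobe's MPRT needs framerate = Hz = 1 / MPRT.

def strobeless_hz_needed(mprt_ms: float) -> float:
    return 1000.0 / mprt_ms

print(strobeless_hz_needed(0.3))  # ~3333 Hz -- the Quest 2 example above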
And with OLED recently unlocking the refresh rate race (120Hz vs 480Hz on OLED is much more visible than 60Hz vs 120Hz on LCD, even to mass-market mainstream and office productivity), some developments may also occur to produce technologies that eventually compete with VRR, while providing other benefits to humankind.
Stizzie wrote: ↑23 Aug 2024, 02:52
I've always assumed that the Vsync signal in the middle is the VBLANK that APIs try to wait for in "wait for VBLANK", and the 2 porches act as a sort of timing padding (similar to a delay for hotkeys: if a hotkey is sent to a window in real time, oftentimes it will not get registered). To reiterate Chief: the whole VBI (Front porch + vsync + back porch) is read from the perspective of the API as VBLANK; meaning at any point in time, as long as the current pixel signal falls in the VBI, it is considered VBLANK and the API can flip if it's V-synced.
In theory, the VRR implementation in the GPU could loop on the middle VSYNC scanlines, and the GPU would simply begin transmitting the fixed (low) count of Vertical Back Porch lines (as specified in EDID) before the visible refresh cycle. That would only delay the refresh cycle by microseconds at most. VRR could be achieved this way with exactly the same visual benefits, without delaying the active refresh cycle much (just a few scanlines of Back Porch).
However, the industry decided over ten years ago that we'd just keep duplicating the last scanline of VBLANK (which is always in the Vertical Back Porch) in an infinite loop that only exits when either (A) software presents a new frame, or (B) the VRR range is violated. That produces maximum flexibility and maximum backwards compatibility with displays not originally designed for VRR.
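Here's a conceptual Python sketch of that back-porch loop -- NOT real display firmware, just a timing simulation with illustrative assumptions (480Hz max, 48Hz floor, 1125 total scanlines, with the front porch/sync/active portions omitted for brevity):

Code:
# VRR scanout that extends VBLANK by duplicating the last Vertical Back
# Porch scanline until software presents a frame, or the MinHz floor
# forces a refresh anyway.

LINE_TIME = 1.0 / (480 * 1125)   # horizontal scanrate stays constant in VRR
MIN_HZ = 48.0                    # assumed VRR range floor

def scan_out_one_refresh(t_now: float, frame_ready) -> float:
    """Returns the time when the next visible refresh cycle begins."""
    deadline = t_now + 1.0 / MIN_HZ              # range floor: must refresh by here
    t = t_now
    while not frame_ready(t) and t < deadline:
        t += LINE_TIME                           # duplicate last back-porch scanline
    return t                                     # loop exits -> visible scanout begins

# Example: a frame presented 5ms in starts scanning out almost immediately,
# quantized only to scanline granularity (a couple of microseconds).
print(scan_out_one_refresh(0.0, lambda t: t >= 0.005))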
Even before VRR, there were laptop panels that could switch refresh rates seamlessly for power management; back in 2005, my old IBM ThinkPad could automatically switch to 50Hz during battery saver mode, with no mode changes. Those panels were even able to function with VRR; there are ways for an EDID override to force VRR over DVI, and even VRR over VGA! (In fact, forum members on Guru3D successfully tested a couple of Multisync CRTs without digital mode-change stabilizers or watchdog firmware; they functioned fine with VRR.) Basically 800x600 with a VRR range of 56-75Hz over VGA, forced into some old Diamondtron. As long as framerates didn't suddenly vary too much, it worked fine without mode-change blackouts. This only worked with certain very old "dumbest" HDMI-to-VGA adaptors that did a blind 1:1 conversion and ignored all the weird behaviors on the HDMI side. It is incompatible with most newer HDMI-to-VGA adaptors that fritz on signal weirdnesses. (It is convenient that 1080p is identical analog and digital, timing-wise, for syncs and porches, so VRR is a bolt-on in both analog and digital domains, which is what makes all of this possible in the first place.)
VRR existed on vector CRTs: if you had a Vectrex system, you could see the erratic flickers, as more vectors meant a lower global refresh rate. And the 1983 Star Wars arcade machine flickered erratically all the time (especially during the Death Star explosion, which used a massive number of vectors). But VRR actually works on a few raster CRTs too, as long as it's a multisync CRT that doesn't have very strict mode-change blackouts.
Changes to the Vertical Back Porch on such a tube tended to be fairly gentle, even if they sometimes slightly shifted the position of the CRT image (slightly upwards and slightly squatter as framerates changed), but the variable flicker of the raster CRT worked fine with the 800x600 at 56-75Hz range.
- Requires Radeon card
- Requires ToastyX CRU EDID override to force blind output to an EDID-less display
- Requires only certain tubes (multisync CRTs with less mode-change blackout circuitry or firmware range cop)
- Requires only certain adaptors (dumbest and oldest HDMI 1.0 adaptors)
It is much harder to reproduce now, as you have to pull out some legacy equipment (extremely old HDMI 1.0-era adaptors that converted HDMI to VGA much more blindly, fully preserving blind VRR behavior without glitching). Another technique to verify that your adaptor chain works with VRR is simply to use an HDMI-to-DVI adaptor (single link) plus a separate DVI-to-VGA adaptor. DVI-to-VGA adaptors are guaranteed to be "dumber" than HDMI-to-VGA adaptors, and therefore a two-adaptor chain may sometimes work more reliably for VGA CRT VRR experiments.
This underscores how modern raster VRR is such a clever, minor modification to a 100-year-old raster topology, and is retroactively backportable to older tubes.
DVI sometimes worked fine with blind-forced generic VRR too, as some old DVI monitors (2006-2010ish) could do a VRR range of 50-72Hz(ish) when a similar trick was done. DVI and HDMI 1.0 are an identical bitstream protocol (it's why audio mysteriously worked over DVI to some early HDMI 1.0 computer monitors, as audio was a simple "audio-in-blankings" extension). HDMI 1.0 is merely extensions to DVI plus a brand-new connector, and displays recycled the same chips for both DVI and HDMI.
So there is a heavy specification blur between VGA and DVI (1080p = 1080p, temporally identical), a heavy specification blur between DVI and HDMI (same bitstream), and the VRR specification can be fully backported all the way back to analog VGA (all the same variable Vertical Back Porch trick). Voila.
VESA Adaptive-Sync is VRR at the panel level, and is used by all major technologies (G-SYNC, FreeSync, etc), as G-SYNC and FreeSync piggyback various extensions, requirements, and certification logos on top. FreeSync is just AMD-certified generic VRR, and G-SYNC Compatible is just NVIDIA-certified generic VRR. G-SYNC native (module) is proprietary from GPU to scaler/tcon, but the panel may still be VESA Adaptive-Sync internally, etc. It kind of blends together, but it all has the same commonality of variable VBLANK via end-of-VBI extension (looping on the Back Porch).
________
Anyway...
More Information About Digital Screens and Special Considerations
There are proprietary differences in how the VRR range is handled. At the panel level, all of them are nigh identical (since they have to use the same factory LCD panels). But at the display scaler/tcon level, the NVIDIA G-SYNC module does things a bit differently from FreeSync/Adaptive-Sync/HDMI VRR for frame rates below the refresh rate.
When too much time elapses before a new refresh cycle, there are problems -- like an image gradually fading to black or white. (This explains some of the VRR flicker behaviors too, by the way; the gamma at minimum framerate is slightly different from the gamma at maximum framerate, simply because the pixels have decayed a bit more at minimum framerate.)
Modern screens are, in some ways, very similar to giant glass DRAM chips. Or 8LC/10LC SSD chips (8 to 10 bits per visible subpixel, unlike 3-bit-per-cell TLC SSDs). Yes, screens are essentially giant lithographed glass computer chips, complete with the same kind of semiconductor transistors (albeit much larger), with circuitry similar to a DRAM chip, except at ginormous scales, controlling light valves (liquid crystals) or light emitters (OLED/LED), just for our human eyes to see a picture on displays.
So it's just like DRAM refresh: DRAM memory chips go corrupt when not refreshed many times a second, and flash memory can go bad if it's not refreshed roughly annually. For digital displays, there are visible artifacts if we wait more than a fraction of a second (fade to black, or fade to white). Even a 1% fade = flicker. So we want to avoid that.
Which is why VRR displays have a minimum Hz.
If a frametime exceeds the MinHz interval (i.e. 1/frametime falls below MinHz), the screen has to refresh again.
That's Low Framerate Compensation (LFC).
- Generic VRR requires the graphics drivers to automatically repeat the refresh (software-driven repeat refreshing).
- Proprietary VRR (G-SYNC module) uses the framebuffer stored in the monitor's memory to repeat-refresh (display-side repeat refreshing).
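A hedged sketch of that decision point (MinHz is an illustrative assumption; the real logic lives in graphics drivers for generic VRR, or in the monitor's scaler for the G-SYNC module):

Code:
MIN_HZ = 48.0   # assumed VRR range floor

def on_vblank_tick(seconds_since_refresh: float, new_frame_ready: bool) -> str:
    """What the VRR signal source decides at each duplicated VBLANK scanline."""
    if new_frame_ready:
        return "scan out the new frame now"
    if seconds_since_refresh >= 1.0 / MIN_HZ:
        return "repeat-refresh the previous frame (LFC), avoiding fade/flicker"
    return "keep extending VBLANK"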
Now, a lower MinHz is sometimes good, but sometimes I prefer a higher MinHz and simply relying on LFC. Sometimes the LFC performs better than an ultra-low native MinHz. For generic FreeSync, native 30Hz can flicker too much, so LFC'd 30fps (repeat-refreshed at 60Hz, two refresh cycles per frame) often looks better.
That's why you see some people do range edits to set MinHz to different numbers, like 55 or 65 or even 80 (e.g. 360Hz OLEDs often look better with an 80-360Hz range, since LFC handling 79fps-and-below bypasses some of the reported OLED flicker at low framerates).
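To illustrate how LFC picks its repeat multiple, here's an idealized sketch (real algorithms are fancier and more predictive, but the frame-multiplication principle is this):

Code:
import math

def lfc_multiple(fps: float, min_hz: float, max_hz: float):
    """Smallest repeat count that lands the effective Hz inside the VRR range."""
    n = math.ceil(min_hz / fps)
    return n if n * fps <= max_hz else None   # None -> no valid multiple fits

print(lfc_multiple(30, 80, 360))   # 3 -> each frame refreshed 3x = effective 90Hz
print(lfc_multiple(30, 48, 144))   # 2 -> effective 60Hz
print(lfc_multiple(65, 80, 120))   # None -> range narrower than 2:1 has LFC gaps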
Now, LFC is not without disadvantages. Just like Ethernet collisions, a new frame from software can "collide" with a monitor that is busy repeat-refreshing. This adds stutter: if the software is suddenly ready with a new frame while the monitor is still repeat-refreshing, that new frame must wait for the display to finish. Oops. Now the user has to endure a stutter caused by the LFC algorithm.
Displays advertising "1-240Hz" LFC simply have extremely smart display-side LFC that uses algorithms to predictively avoid those collisions. G-SYNC native is really good at LFC nowadays, while generic LFC can vary widely in quality (if it's just a very rudimentary LFC algorithm that doesn't try to predictively avoid collisions between repeat refreshes and newly delivered refresh cycles).
A repeat-refresh cycle forces the monitor to be busy for (1/MaxHz) seconds, during which it cannot start a new refresh cycle. MaxHz generally dictates how long a monitor needs to scan out one refresh cycle (no matter what the framerate or frametime is); that's your shortest interval between refresh cycles, regardless of whether they are new frames or repeat-refresh frames. "Frames" and "refresh cycles" are essentially the same thing in VRR, so you may notice I sometimes use the terms interchangeably.
Either way, if you have variable framerates near the LFC region, you can get unwanted erratic stutters that show up if (A) the LFC algorithm isn't good, (B) the framerates are erratic enough to defeat the LFC algorithm, or (C) the MaxHz is relatively low. The average stutter error margin of LFC will be approximately half a MaxHz refreshtime, which is why LFC becomes a nonissue at high MaxHz. On modern 480Hz VRR displays, repeat-refresh events are only 1/480sec, and the stutter from LFC is only 0.5/480sec on average (the average penalty of any random collision between native and repeat refreshes). Those 1ms stutters aren't going to be humanly visible at 33ms frametimes (30fps), so LFC looks like native refresh with ginormous VRR ranges. That's why I recommend 80-480Hz VRR ranges for 480Hz OLEDs; forget about 30Hz or 48Hz minimums, since 80-480Hz looks exactly like a 1-480Hz VRR range, but without the OLED VRR flicker.
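The back-of-envelope version of that stutter margin, assuming a new frame lands uniformly at random inside the busy (1/MaxHz) repeat-refresh window:

Code:
def avg_lfc_collision_ms(max_hz: float) -> float:
    """Average wait when a new frame collides with an in-progress repeat refresh."""
    return 0.5 / max_hz * 1000.0

print(avg_lfc_collision_ms(144))  # ~3.5ms -- can be visible against 33ms frametimes
print(avg_lfc_collision_ms(480))  # ~1.0ms -- invisible at 30fps (33ms frametimes)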
Generalities:
1. Get a high MaxHz whenever you want to buy a brand new VRR display.
2. Make sure your Max:Min ratio is at least 3:1 for very good LFC that looks native
3. Purchasing a VRR range bigger than your framerate range lets you avoid worrying about the capping disadvantages
4. Even 100fps at 480Hz VRR has much lower lag than 100fps at 144Hz VRR/VSYNC ON/VSYNC OFF.
5. Even a very crappy LFC algorithm performs well if you have a 4:1 Max:Min Hz range, so you've bought insurance if you got a 360Hz or 480Hz display (see the sketch after this list).
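As a rough rule-of-thumb check for items 2 and 5 (the thresholds are my reading of the advice above, not an official spec):

Code:
def lfc_headroom(min_hz: float, max_hz: float) -> str:
    """Rates a VRR range's Max:Min ratio for LFC quality."""
    ratio = max_hz / min_hz
    if ratio >= 4.0:
        return f"{ratio:.1f}:1 -- wide; even a crappy LFC algorithm does fine"
    if ratio >= 3.0:
        return f"{ratio:.1f}:1 -- good; LFC should look native"
    return f"{ratio:.1f}:1 -- narrow; LFC collisions are more likely to show"

print(lfc_headroom(48, 144))   # 3.0:1
print(lfc_headroom(80, 480))   # 6.0:1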
So, my advice to new VRR purchasers who have no budget concerns and want maximum VRR benefits is to... just get way more MaxHz than you think you need. The new 240-480Hz OLEDs are actually great for office productivity too, and even some mainstream writers now agree high Hz is fantastic ergonomically for non-gaming uses. LCDs are great as well, although refresh rate differences are massively more visible on OLED than on LCD, which is why the refresh-rate bang for the buck is usually more apparent there.