I don't think I denied that in any of my previous posts here. I was only referring to G-SYNC vs. no sync with framerates within the refresh rate.
No sync is indeed "superior" to G-SYNC in raw input lag at framerates well above the refresh rate (even if not by much at 240Hz+, and my graphs are actually generous to no sync, since they count the first on-screen reaction rather than middle screen, where the differences would be more equalized). But the gap between it and G-SYNC closes as the max refresh rate increases, due to the ever-shrinking duration of individual scanout cycles.
I've mentioned this many times before, but once we hit 1000Hz, there will be virtually no difference between G-SYNC and no sync, at which point you really won't need any syncing method to prevent visible tearing artifacts; at that speed, each scanout cycle lasts only 1ms, so any given tearline is overwritten almost as soon as it appears.
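Just to put rough numbers on it, here's a quick back-of-the-envelope sketch (my own simplification, not a measurement: it assumes the worst case where syncing can delay a frame by up to one full scanout cycle, and the refresh rates are just illustrative picks):

```python
# Rough illustration: the worst-case latency cost of syncing shrinks
# with refresh rate, since a synced frame waits at most one scanout cycle.
# (Simplified model; ignores pixel response, input pipeline, etc.)
for hz in (60, 144, 240, 360, 1000):
    scanout_ms = 1000.0 / hz  # duration of one full scanout cycle in ms
    print(f"{hz:>4} Hz: scanout cycle = {scanout_ms:.2f} ms "
          f"-> max sync penalty ~ {scanout_ms:.2f} ms")
```

At 240Hz the worst-case penalty is already down to ~4.2ms; at 1000Hz it's 1ms, which is small enough to be practically imperceptible.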
G-SYNC is ultimately a tearing-prevention stopgap necessitated by currently achievable framerate/refresh rate ratios, much like strobing is for motion clarity. Both will eventually be circumvented and replaced by ultra-high refresh rates + framerates (be it via frame amplification techniques and/or native performance) in the future.