Re: New low latency nvidia drivers
Posted: 21 Aug 2019, 16:22
I would also love it if someone did some research on the measurable difference when using this Ultra setting.
Nocebo wrote:
I would also love it if someone did some research on the measurable difference when using this Ultra setting.

Just tested it. I feel no difference from 1 to Ultra using driver 436.02 on an RTX 2080 Ti, with V-SYNC on and G-SYNC on. I tested Reduce Buffering in Overwatch both on and off, so I don't know.
HiAlgoBoost wrote:
@ad8e That sounds great! Are you around Santa Clara / Bay Area by any chance? We are looking for good people to join our (AMD) driver team...

Sorry, I'm in New York, and I am not able to switch jobs until around July next year. Also, I'm unqualified to work on a GPU driver team, because my experience in this specific area (graphics) is quite low; I wouldn't be useful for anything else related to the driver side of graphics. Frame timing is a combination of math and design, and my specialties are math and design (and programming).
ad8e wrote:
Sorry, I'm in New York, and I am not able to switch jobs until around July next year. ... Frame timing is a combination of math and design, and my specialties are math and design (and programming).

That sounds very interesting! A standard deviation of one horizontal tearline: that is way more accuracy than is required, wow! Let's continue the discussion offline, I am interested; email me at hialgoboost(at)gmail.com.
ad8e wrote:
Sorry, I'm in New York, and I am not able to switch jobs until around July next year. ... Frame timing is a combination of math and design, and my specialties are math and design (and programming).

Amazing! Just out of curiosity, how long does the prediction take to run, and how often is it updated? You said "from the past frametimes...", but it seems inherently too uncertain to get only one line of deviation if your observables are just the past frametimes. Even for good time-series performers like XGBoost and LSTMs with a large event window, that seems unlikely. I can't imagine such a good result with a simpler algorithm.
...
andrelip wrote:
Amazing! Just out of curiosity, how long does the prediction take to run, and how often is it updated? ... I can't imagine such a good result with a simpler algorithm.

You misunderstand: what I'm doing is much less impressive than the notion you're thinking of. I am timing the vblank points, not the render frame times. Frame time estimation and vsync timing are two separate parts, and both are necessary.
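For the curious, here is a minimal sketch of the general idea of fitting vblank timing from past vblank timestamps: an ordinary least-squares line through (index, timestamp) pairs gives the refresh period as the slope and the phase as the intercept, and its residuals map directly to tearline jitter. This illustrates the technique only; it is not the code in vsync.cpp, and all names in it are hypothetical.

[code]
// Hypothetical sketch: least-squares fit of vblank timestamps.
// Model: t_i = phase + period * i, where t_i is the time of the i-th vblank.
// The fitted line predicts future vblank times; residual spread maps to
// tearline jitter (one scanline is roughly period / vertical_total).
#include <vector>
#include <cstddef>

struct VblankModel {
    double period;  // seconds per refresh (slope)
    double phase;   // time of vblank 0 (intercept)
    double predict(double i) const { return phase + period * i; }
};

VblankModel fit_vblanks(const std::vector<double>& t) {
    const std::size_t n = t.size();
    if (n < 2) return {0.0, n ? t[0] : 0.0};  // need two points for a line
    double sum_i = 0, sum_t = 0, sum_ii = 0, sum_it = 0;
    for (std::size_t i = 0; i < n; ++i) {
        const double x = double(i);
        sum_i += x; sum_t += t[i];
        sum_ii += x * x; sum_it += x * t[i];
    }
    const double denom  = n * sum_ii - sum_i * sum_i;
    const double period = (n * sum_it - sum_i * sum_t) / denom;
    const double phase  = (sum_t - period * sum_i) / n;
    return {period, phase};
}
[/code]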
"etc/Blur Busters Forums.html" was a long and fruitful private discussion with ChiefBlurBuster. I'm not comfortable sharing it publicly unless he gives permission, so I removed it from the zip.The code can be built with the given Visual Studio project, and has been done so in blurbusters.exe, which is a demo. To get 1 tearline std, set battery settings to High Performance. On my system, Balanced battery settings sometimes gives 2 tearline std and sometimes 40 tearline std.
Controls: right-click-drag to control the tearline. There's also a demoscene effect; press 2 to switch to it and 1 to switch back. The demo may crash occasionally, and intentionally so: the estimation part has CPU spikes when the P-only controller jumps back and forth too much, so the system checks for CPU spikes and deliberately crashes on detection to make those spikes noticeable. The crashes are optional. A simple loopcount limiter should be applied to the estimation part, but I haven't bothered to do that yet; a sketch of the idea follows.
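The limiter would just cap the refinement loop so a pathological controller state degrades the estimate instead of spiking the CPU. A minimal sketch, assuming a hypothetical refine_once step (this is not code from the zip):

[code]
#include <functional>

// Hypothetical sketch of a loop-count limiter for the estimation step.
// Instead of iterating until convergence (which can spike when the P-only
// controller oscillates), bail out after a fixed budget and accept the
// partially converged estimate.
void run_estimation(const std::function<bool()>& refine_once) {
    constexpr int kMaxIterations = 64;  // assumed budget; tune empirically
    for (int i = 0; i < kMaxIterations; ++i)
        if (refine_once())  // hypothetical step: returns true on convergence
            break;
}
[/code]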
The important algorithm is in vsync.cpp. The algorithm component is well documented, but it is still largely inscrutable because the math is so heavy. I can understand it perfectly, but that is mainly because I wrote it and have the math background for it.
The benchmarking and auto-optimizer are poorly documented, but those exist only for determining optimal constants during debugging. They live in the #define blocks VSYNC_BENCH and FIND_CONSTANTS. All code and algorithms in vsync.cpp are original, so I place them in the public domain (CC0).
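For readers unfamiliar with that layout, such compile-time switches normally follow the pattern below. This is a hypothetical sketch of the arrangement, not the actual contents of vsync.cpp:

[code]
// Hypothetical sketch; the real guards are in vsync.cpp.
// Define one of these before building to compile the extra tooling in:
//#define VSYNC_BENCH      // benchmarking harness for the vsync estimator
//#define FIND_CONSTANTS   // auto-optimizer that searches for tuning constants

#ifdef VSYNC_BENCH
// ... benchmarking code compiled in here ...
#endif

#ifdef FIND_CONSTANTS
// ... constant-search code compiled in here ...
#endif
[/code]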
The Windows-specific glue is in platform_vsync.cpp. Note the credits at the top of this file, which may raise copyright issues; the full origins of all components of this file are listed in that credits section. I place everything in this file in the public domain (CC0) to the extent that I am able.
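As background on what such Windows glue typically involves (this is my assumption about the general technique, not a copy of platform_vsync.cpp): recent Windows SDKs expose kernel-mode thunks in d3dkmthk.h, and D3DKMTWaitForVerticalBlankEvent blocks until the next vblank, which can then be timestamped:

[code]
// Hypothetical sketch: timestamping vblanks on Windows via the D3DKMT
// thunks (declared in d3dkmthk.h in recent Windows SDKs; link gdi32.lib).
#include <windows.h>
#include <d3dkmthk.h>
#include <cstdio>

int main() {
    HDC hdc = GetDC(nullptr);  // primary display
    D3DKMT_OPENADAPTERFROMHDC open = {};
    open.hDc = hdc;
    if (D3DKMTOpenAdapterFromHdc(&open) != 0) return 1;  // 0 == STATUS_SUCCESS

    D3DKMT_WAITFORVERTICALBLANKEVENT wait = {};
    wait.hAdapter = open.hAdapter;
    wait.hDevice = 0;  // no device handle needed for this call
    wait.VidPnSourceId = open.VidPnSourceId;

    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    for (int i = 0; i < 10; ++i) {
        D3DKMTWaitForVerticalBlankEvent(&wait);  // blocks until next vblank
        QueryPerformanceCounter(&now);           // timestamp it immediately
        std::printf("vblank at %.6f s\n", double(now.QuadPart) / double(freq.QuadPart));
    }

    D3DKMT_CLOSEADAPTER close = {};
    close.hAdapter = open.hAdapter;
    D3DKMTCloseAdapter(&close);
    ReleaseDC(nullptr, hdc);
    return 0;
}
[/code]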
There's some interesting documentation in "etc/Blur Busters Forums.html". Jerry Jongerius also documented some tests at the top of the source file in http://www.duckware.com/test/chrome/467 ... e-code.zip
...
3. Note that my code doesn't detect variable sync (it could if necessary). A different algorithm should be used for variable sync, depending on the sync's implementation details.
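If detection were ever wanted, one simple heuristic (my own assumption, not part of the zip) is to test whether successive vblank intervals cluster around one fixed period: a fixed refresh stays tight, while variable sync drifts with the frame rate.

[code]
// Hypothetical sketch: classify fixed vs. variable refresh by checking
// whether vblank intervals cluster tightly around their median.
#include <algorithm>
#include <cmath>
#include <vector>

bool looks_like_fixed_refresh(std::vector<double> intervals,
                              double tolerance = 0.05) {  // assumed 5% jitter budget
    if (intervals.size() < 8) return true;  // too little data to say otherwise
    std::sort(intervals.begin(), intervals.end());
    const double median = intervals[intervals.size() / 2];
    for (double dt : intervals)
        if (std::abs(dt - median) > tolerance * median)
            return false;  // an interval strayed: likely variable sync
    return true;
}
[/code]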