Meowchan wrote: 23 Nov 2020, 04:24
schizobeyondpills, all that is well and good. But at the end of the day, if the difference between AMD and NVidia cannot be measured in milliseconds, then I don't see why I should care about it latency-wise for gaming. Can it be shown or can't it? The last thing I want is another Roach, with discussions turning into feelorycraft and bedtime stories.
Nobody's paying you anything. Share, or don't. So far I remain unconvinced.
Evolution is not a religion; it doesn't need believers.
Did I ever say it can't be shown? Absolutely it can. You just need proper, accurate and precise tools, as well as knowing where and how to apply them, like that GitHub tool above for comparing different rasterization and tiling modes between GPUs.
What all these reviewers don't understand, or choose not to because of their mass-appeal revenue goals, is how to do an in-depth, accurate benchmark rather than a click-to-photon joke on a single pixel. Last time I checked, video games are a continuous stream of frames onto the display and don't require clicking to have a frame updated. But every one of these reviewers keeps circlejerking with click-to-photon, and the moment they see a small difference, say 0.05 ms, they call it an error. But you see, if you take 240 fps and multiply by 0.05 ms, you get 12 ms of difference per second. These are made-up numbers, but they still show that no one is testing actual frame latency, and that everyone fails both at running a proper test and at reasoning about it. There's also consistency, skew, heat problems, and testing under real-life scenarios rather than on a local server with no network load or a low rendering load, etc.
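To spell that accumulation out, here is the arithmetic in a few lines of Python. The 0.05 ms per-frame gap and 240 fps are the same made-up numbers as above, purely for illustration, not a measurement of any GPU:

```python
# Back-of-the-envelope arithmetic only: the 0.05 ms per-frame gap and 240 fps
# are made-up example numbers, not measurements of any hardware.

per_frame_gap_ms = 0.05   # hypothetical per-frame latency difference between two GPUs
fps = 240                 # frames delivered per second

accumulated_ms = per_frame_gap_ms * fps
print(f"{per_frame_gap_ms} ms/frame x {fps} frames/s = {accumulated_ms:.1f} ms of difference per second")
# -> 12.0 ms of difference accumulated across one second of frames,
#    even though a single click-to-photon sample would write 0.05 ms off as error.
```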
I chose not to reveal it, simple as that. Why would I give things away for free when no one appreciates what I have to say?
There are DisplayPort and HDMI analyzers, oscilloscopes, frame logging, the OS, etc. If you aren't paying me at least six figures, feel free to set all of it up yourself. I don't need mass appeal for ad revenue, nerfing science and falsifying untuned bench environments with invalid results that flatter both sides of A vs B for affiliate sales.
It's an insult to mathematics itself when I look at these benchmarks that use per-second values. CPUs operate at sub-nanosecond cycle times, meaning
5,000,000,000 cycles per second
vs
5 GHz
Same number, but the second form hides nine digits, and each of those cycles lasts only 0.2 ns.
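For scale, here is the same point as pure arithmetic in Python: how short one cycle is at 5 GHz, and how many of them a single millisecond swallows:

```python
# Pure arithmetic: what a single millisecond contains at a 5 GHz clock.

clock_hz = 5_000_000_000          # 5 GHz written out: 5,000,000,000 cycles per second
cycle_time_ns = 1e9 / clock_hz    # 0.2 ns per cycle -- sub-nanosecond
cycles_per_ms = clock_hz // 1000  # cycles elapsed inside one millisecond

print(f"cycle time: {cycle_time_ns} ns")                 # 0.2 ns
print(f"cycles hidden inside 1 ms: {cycles_per_ms:,}")   # 5,000,000
```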
People have been brainwashed into a wrong perception of numbers by marketing companies. A locked 100.0000 "fps" is far, far smoother and more responsive than an unstable 400 fps averaging 400.291, for example. When you reason about time, its measures mean something different from measures of space: 10.5 ms SHOULD UNDER NO CIRCUMSTANCES EVER BE INTERPRETED THE SAME WAY AS 10.5 m (meters).
The 0.5 ms is an interpretation of jitter, while the 10 ms is an interpretation of frequency/latency.
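A tiny Python sketch with synthetic frame times makes the point; the frame-time lists are invented, picked only so that one stream is a locked 100 fps and the other averages roughly 400 fps:

```python
# Synthetic frame times only: a locked 100 fps stream vs a spiky stream that
# still *averages* ~400 fps. The per-second average hides the spikes.
import statistics

stable_ms   = [10.0] * 8                                   # perfectly even pacing, 100 fps
unstable_ms = [1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0, 5.0]     # spiky pacing, ~400 fps average

def report(name, frametimes):
    avg_fps = 1000 / statistics.mean(frametimes)
    jitter  = statistics.pstdev(frametimes)                # frame-to-frame variation in ms
    print(f"{name}: avg {avg_fps:.1f} fps, frame-time jitter {jitter:.2f} ms")

report("stable", stable_ms)      # avg 100.0 fps, jitter 0.00 ms
report("unstable", unstable_ms)  # avg 400.0 fps, jitter ~2.78 ms
```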
So how do you expect to see anything about your system, with its MULTIPLE SUB-NANOSECOND COMPONENTS, when it is "benchmarked" in milliseconds or per-second values, i.e. when you strip away 9 digits of the measured value?
And no, those digits all have huge meaning and importance, because the perspective and context here is the CPU, not your tick-tock mind and its perception of time.
If one pixel on AMD takes (for example) 10.0008 ms and on NVIDIA it takes 10.0009851 ms, then wtf will rounded milliseconds show you? Oh wait, now multiply that "small" difference by the number of pixels, by the number of frames per second, and by the speed of the raster line, and... omg?!
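Here is that arithmetic in Python, using the same made-up per-pixel numbers; the 1920x1080 resolution and 240 fps are assumptions I am adding purely for illustration:

```python
# The 10.0008 ms vs 10.0009851 ms per-pixel figures are the made-up example above;
# 1920x1080 and 240 fps are assumptions added purely for illustration.

amd_ms    = 10.0008
nvidia_ms = 10.0009851

# Rounded to typical "benchmark" precision the difference vanishes entirely:
print(round(amd_ms, 1), round(nvidia_ms, 1))        # 10.0 10.0 -> "no difference"

# Kept at full precision and scaled by pixels and frames, it does not:
gap_ms = nvidia_ms - amd_ms                          # 0.0001851 ms per pixel
pixels_per_frame = 1920 * 1080
fps = 240
scaled_ms = gap_ms * pixels_per_frame * fps
print(f"per-pixel gap: {gap_ms:.7f} ms, scaled: {scaled_ms:,.0f} ms")
```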
Anyone can do click-to-photon tests for under $100. Do they have any value? Sure, if you're measuring 60 Hz vs 144 Hz, where the gap is so huge even a blind man could tell the difference. For decimals or smaller margins you need 10x, 100x the resolution, and more.
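As a rough sketch of why, assuming a cheap click-to-photon rig resolves on the order of 1 ms (an assumed ballpark, not any device's spec):

```python
# The 1 ms rig resolution is an assumed ballpark for a cheap click-to-photon setup,
# not a spec for any particular device.

rig_resolution_ms = 1.0

effects_ms = {
    "60 Hz vs 144 Hz frame-time gap": (1000 / 60) - (1000 / 144),   # ~9.72 ms
    "hypothetical small per-frame gap": 0.05,
}

for name, effect in effects_ms.items():
    ratio = effect / rig_resolution_ms
    print(f"{name}: {effect:.2f} ms effect -> {ratio:.2f}x the rig's resolution")
# ~9.72x for the refresh-rate gap (visible even with a crude rig),
# ~0.05x for the small gap: you need 10x-100x finer timing to resolve it at all.
```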