I'm picking up a pair of 2080 Tis the second they are available.
These cards are super exciting to me because I'm doing ray tracing research and making my own indie games using RTX.
Check out these pics of Battlefield V with RTX reflections activated:
https://twitter.com/EA_DICE/status/1031894972209844224
Finally, my GTX 970 can relax a bit in my wife's computer. Although we'll still use it to play some co-op VR games over the network, once I get a second VR headset.
I'm all about VR now, and 2080 Tis in SLI should roughly double the performance of ray-traced games, thanks to the embarrassingly parallel nature of ray tracing plus DX12's implicit and explicit multi-GPU support. I wrote the SLI code for GTA V, and it was unfortunately limited to AFR (alternate-frame rendering, a royal pain to code for, trust me). So NVidia opening up stereo-VR SLI in the driver, transparently to the developer (one GPU per eye), should help adoption. And explicit SFR (split-frame rendering) will be tremendous too: it allows load balancing between heterogeneous GPUs, so you could even use your on-board GPU for, say, a 10% FPS boost by offloading the post-processing, or just a sliver of each frame's rendering, onto it.
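To make the SFR load-balancing idea concrete, here's a minimal Python sketch of the scheduling math: divide a frame's scanlines between GPUs in proportion to their throughput. The throughput weights and the 9:1 discrete-vs-integrated ratio are made-up illustrative numbers, not anything from a real driver.

```python
# Toy split-frame rendering (SFR) load balancer: assign contiguous scanline
# ranges to each GPU, sized by that GPU's relative throughput.
# Throughput weights are hypothetical, chosen only for illustration.

def split_scanlines(frame_height, throughputs):
    """Return one (start, end) scanline range per GPU, sized by throughput."""
    total = sum(throughputs)
    ranges = []
    start = 0
    for i, t in enumerate(throughputs):
        if i == len(throughputs) - 1:
            end = frame_height  # last GPU takes the remainder, no rounding gap
        else:
            end = start + round(frame_height * t / total)
        ranges.append((start, end))
        start = end
    return ranges

# e.g. a discrete GPU assumed ~9x faster than the integrated one:
print(split_scanlines(1080, [9, 1]))  # -> [(0, 972), (972, 1080)]
```

A real implementation would re-measure per-GPU frame times and rebalance the split every few frames, but the proportional split above is the core of it.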
One thing I noticed in the Quadro RTX reveal at SIGGRAPH was that the new SLI bridge connector, NVLink, actually pools the VRAM of the two GPUs, so you're not simply "wasting" money where buying the 11 GB vs. 8 GB cards is concerned. This to me is HUGE.
That should mean, theoretically, that SLI-ed 2080 Tis will have a total of 22 GB of memory usable by both cards instead of 11 GB, and paired 2070s/2080s will have 16 GB instead of 8. Both are massive upgrades that make SLI way more appealing: not only would you get double the rasterization / ray tracing performance, but double the framebuffer too.
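The arithmetic behind that claim is simple but worth spelling out, since it's the opposite of how classic SLI worked. A sketch in Python, with the card names as plain labels and the pooling behavior being the unconfirmed assumption discussed below:

```python
# Classic AFR SLI mirrors every resource into each card's VRAM, so the usable
# pool equals ONE card's capacity. NVLink memory pooling (if enabled on the
# consumer cards -- an assumption, see the caveat in the post) would sum them.
# Capacities in GB.

def usable_vram(per_card_gb, num_cards, pooled):
    """Usable VRAM for a multi-GPU setup, mirrored vs. pooled."""
    return per_card_gb * num_cards if pooled else per_card_gb

print(usable_vram(11, 2, pooled=False))  # classic SLI, two 2080 Tis: 11
print(usable_vram(11, 2, pooled=True))   # NVLink pooling: 22
print(usable_vram(8, 2, pooled=True))    # two 2080s pooled: 16
```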
I tried to confirm whether this is true for the consumer RTX cards, but haven't been able to yet, so take this with a grain of salt. It would be a very NVidia thing to do to allow shared framebuffers only on the Quadro cards, which are aimed at film-production shading and really need the extra RAM of a 48 GB + 48 GB high-end Quadro SLI setup to fit all the triangles of those scenes in memory.
The big deal about RTX isn't just that it's six times faster at ray tracing than Pascal; it's that it exposes this through DXR, a standardized extension to DX12 (with Vulkan extensions to come), which will boost adoption.

Going further, for global illumination, the real holy grail of computer graphics, the only way to do it in real time on these cards (or on anything, really, except maybe a supercomputer cluster) is 1-sample-per-pixel (1 spp) path tracing with AI denoising on top. That is what truly opens these cards up to eventually doing real-time, noise-free GI, probably using a hybrid rasterization + ray tracing approach, at least initially. But eschewing rasterization entirely will open up plenty of new rendering algorithms too, like native foveated rendering and frequency-based rendering (putting more GPU power where it counts in the image: higher-frequency areas such as edges or more specular surfaces).

With reflections, you might need a hundred samples to capture the actual behaviour of light in radiometrically complex scenes, with caustics for example. At some point you need to implement things like photon mapping, which apparently can already render at 1080p 60 fps. It's all very exciting stuff, and I look forward to the revolution.
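The 1 spp + denoise idea is easy to demonstrate on a toy signal. Below is a hedged Python sketch: each "pixel" gets exactly one noisy Monte Carlo sample of a flat ground-truth radiance, and a simple box filter stands in for the AI denoiser. Real denoisers are learned, feature-guided, and edge-aware; this only shows why averaging over neighbors collapses 1 spp variance on low-frequency content.

```python
import random
from statistics import mean

# One noisy sample per pixel of a flat ground-truth signal (1D "image"),
# then a box filter as a stand-in for the AI denoiser. The signal, noise
# level, and filter radius are all illustrative choices.

random.seed(42)
WIDTH = 256
truth = [0.5] * WIDTH                                # flat radiance
noisy = [t + random.gauss(0, 0.2) for t in truth]    # 1 spp estimate

def box_filter(img, radius=4):
    """Average each pixel with its neighbors within `radius`."""
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - radius), min(len(img), i + radius + 1)
        out.append(mean(img[lo:hi]))
    return out

denoised = box_filter(noisy)

def mse(a, b):
    return mean((x - y) ** 2 for x, y in zip(a, b))

print(mse(noisy, truth) > mse(denoised, truth))  # True: filtering cuts error
```

Averaging ~9 independent samples cuts the variance roughly 9x here; the catch in real scenes is doing that without blurring edges, which is exactly what the learned denoisers are for.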
I really hope AMD's next cards implement DXR in some way; if they don't, I think they're done for, on PC at least, quite honestly. And I say this as a fan of AMD: I bought a new Threadripper 2 workstation, and I'd much prefer having FreeSync to being stuck with G-Sync for my workstation monitor, to save cash. Competition is good, but I'm worried RTX will be a knockout blow.