A Graphics Programmer’s Thoughts on RTX

As a programmer and computer graphics enthusiast, I always find it exciting when a new line of graphics cards is announced. This is certainly true of Nvidia’s launch of their new line of RTX-branded cards. Much has been written about the launch from the perspective of gamers, and frankly, if you just look at today’s games, the picture doesn’t look so good. But in this post I’m going to look a bit more at the future and the promise that the new RTX architecture holds for games yet to come.

First we need to talk about the price. These cards are expensive. I know some people try to shrug it off as Nvidia re-aligning their naming convention or some claptrap like that. Sure, a 2070 has similar performance to a 1080 for about the same price, but in every previous generation the new cards provided better performance at the same price point. So what happened? Well, it looks like a perfect storm of events. One obvious issue is that AMD is still not remotely competitive with Nvidia at the high end, meaning these cards are mostly competing against the 10xx generation GTX cards. A lack of competition is never good for pricing (at least not in the eyes of the consumer; I’m sure Nvidia is laughing all the way to the bank). The other issue is that these new chips are huge. All those extra features for realtime raytracing, plus the tensor cores for machine learning, come at a hefty price in terms of silicon real estate. They offer exciting possibilities for the future, and I will talk about that more later, but right now all of those transistors are doing very little to speed up current games. The massive die sizes lead me to believe that Nvidia was initially counting on a process shrink to 7nm when they planned the RTX chips. Then, when it became obvious that the 7nm node wouldn’t be ready in time for RTX, they decided to go ahead with the raytracing and tensor core features anyway rather than wait and push them out another generation. Thus we are left with huge (754mm² for the 2080Ti), and therefore expensive, chips. Of course, all of that is speculation on my part, but it would explain a lot about the pricing of the RTX cards.

Now let’s talk about what makes RTX exciting: all the new features. I’m going to start with the tensor cores. I’m actually somewhat surprised that they left these in the consumer cards rather than reserving them for the Titan line and up. I don’t think these will do much for gaming graphics directly at all. I know Nvidia has made a big deal out of their deep learning antialiasing solution (DLSS), but frankly it looks a bit gimmicky to me, and the fact that you need to train it specifically for every game is not great. However, the tensor cores are fantastic news for anyone playing around with deep learning. In fact, the 2080Ti provides almost identical deep learning performance to the Titan V at half the price. This should go a long way towards making machine learning accessible to many more people, and that will further accelerate progress in what is already a fast-moving field. That is fantastic news for students and hobbyists who want to experiment with deep learning, but also perhaps a bit of wasted silicon for what is still the biggest market for these cards: gamers. Only time will tell if game developers find interesting uses for the tensor cores in their games, but my hunch is that they won’t, at least not before the current cards are long obsolete.

Now on to the real meat of RTX: raytracing. This is where this new generation of chips promises a true revolution in computer graphics. It really is exciting, and I say this as someone who is rather skeptical about the value of raytracing. It is often billed as a magic bullet for producing realistic computer graphics, and it really isn’t (something I talk about in my previous post comparing raytracing and pathtracing here). I’m actually really glad that the realtime raytracing solutions (DXR, RTX) focus on a hybrid approach rather than trying to push pure raytracing. Why? Because a pure approach would require radical changes to the tooling and development process for making games, and it would sideline a lot of silicon (which would need to be retained to support all the non-raytracing games anyway) along with years of R&D that have pushed the limits of rasterization.
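To make the structure of that hybrid approach concrete, here is a deliberately simplified C++ sketch of a frame: primary visibility is still rasterized into a G-buffer, rays are dispatched only for the effects rasterization handles badly (mirror reflections in this example), and the result is folded back into the normal shading pass. The pass names and types are placeholders of my own with stubbed-out bodies, not real D3D12/DXR API calls.

    // Simplified sketch of a hybrid raster + raytrace frame. All names and
    // bodies are placeholders; only the shape of the pipeline is the point.
    #include <vector>

    struct GBuffer { std::vector<float> albedo, normal, depth; };
    struct ReflectionBuffer { std::vector<float> color; };

    // Pass 1: primary visibility still comes from the rasterizer,
    // exactly as in a conventional deferred renderer (stubbed here).
    GBuffer rasterizeGBuffer(int w, int h) {
        return { std::vector<float>(w * h * 3),
                 std::vector<float>(w * h * 3),
                 std::vector<float>(w * h) };
    }

    // Pass 2: rays are dispatched only for the effects rasterization handles
    // badly, using G-buffer positions/normals as ray origins (stubbed here).
    ReflectionBuffer traceReflectionRays(const GBuffer& g) {
        return { std::vector<float>(g.depth.size() * 3) };
    }

    // Pass 3: the raytraced term is folded back into the usual shading pass.
    void composite(const GBuffer&, const ReflectionBuffer&) {}

    int main() {
        GBuffer gbuf = rasterizeGBuffer(1920, 1080);
        ReflectionBuffer refl = traceReflectionRays(gbuf);
        composite(gbuf, refl);
        return 0;
    }

The point is simply that raytracing slots in as an extra pass alongside the existing rasterization pipeline rather than replacing it.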

So what can we expect from raytracing? Going by the likes of Battlefield 5, it looks like it will be mostly fancy reflections to begin with, which makes sense since that’s something raytracing excels at (and something that requires a ton of nasty hacks to simulate in a rasterization-based engine). This also mirrors the way shaders were first used when they became a standard feature: we initially saw a wave of games that added a single effect to an otherwise fixed-function pipeline (such as the shiny water effect in Morrowind). So that’s the majority of what we can expect from raytracing in games for the next few years: nicer reflections and fancier shadows, since those are relatively simple to execute. Far more interesting, I think, is the possibility of greatly improved realtime global illumination (something the new Metro game promises for RTX owners), as well as other creative uses for the raytracing capability, because casting rays into a scene has all sorts of interesting applications. Consider, for example, parallax occlusion mapping. This technique creates a very convincing 3D offset effect using a 2D texture and ray-casting (I sketch the basic ray-march below). It is quite expensive to do with current shader hardware, but it might be possible to accelerate it vastly with the raytracing capabilities of RTX. Another possibility is to use the raytracing hardware to render voxel scenes without the need for rasterization at all. Again, this is possible using current shader technology, but it is taxing (see for example the GigaVoxels rendering library); an RTX-optimised version might be significantly faster. Outside of games, there is also the possibility of accelerating GPU-based offline raytracers/pathtracers such as OTOY’s Octane Render or the GPU mode of Cycles, making these much faster in the future.
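Here is that parallax occlusion mapping ray-march as a minimal CPU-side sketch. The procedural heightfield and the function and parameter names are mine for illustration; a real implementation would run this per pixel in a fragment shader against an actual height texture.

    #include <cmath>
    #include <cstdio>

    // Tiny procedural heightfield standing in for a real height texture, in [0, 1].
    static float sampleHeight(float u, float v) {
        return 0.5f + 0.5f * std::sin(u * 20.0f) * std::cos(v * 20.0f);
    }

    // Classic parallax occlusion mapping: step along the view ray in texture
    // space until it dips below the heightfield, then return the shifted UV.
    // The view direction is in tangent space (viewZ > 0 points away from the
    // surface towards the viewer); heightScale controls the depth of the relief.
    static void parallaxOcclusionUV(float u, float v,
                                    float viewX, float viewY, float viewZ,
                                    float heightScale, int numSteps,
                                    float& outU, float& outV) {
        float stepDepth = 1.0f / numSteps;
        // UV shift per step, opposite to the view direction's surface projection.
        float duStep = -viewX / viewZ * heightScale * stepDepth;
        float dvStep = -viewY / viewZ * heightScale * stepDepth;

        float curU = u, curV = v;
        float rayDepth = 0.0f;                                 // 0 = top, 1 = bottom
        float surfaceDepth = 1.0f - sampleHeight(curU, curV);  // height stored as depth

        // March until the ray passes below the heightfield (or runs out of layers).
        while (rayDepth < surfaceDepth && rayDepth < 1.0f) {
            curU += duStep;
            curV += dvStep;
            rayDepth += stepDepth;
            surfaceDepth = 1.0f - sampleHeight(curU, curV);
        }
        outU = curU;
        outV = curV;
    }

    int main() {
        float u, v;
        parallaxOcclusionUV(0.5f, 0.5f, 0.6f, 0.2f, 0.77f, 0.08f, 32, u, v);
        std::printf("shifted uv = (%f, %f)\n", u, v);
        return 0;
    }

That per-pixel stepping loop is exactly why the effect is expensive on current shader hardware, which is what makes hardware-accelerated ray traversal an interesting fit.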

So where does that leave us? Well, I think for now we can mostly expect gimmicks like nicer reflections and/or shadows (at a significant framerate cost), and right now the RTX cards are mostly of interest to graphics developers looking to experiment with the next generation of graphics techniques. Adoption will also depend on AMD and Intel supporting DXR in their upcoming graphics chips (AMD has already hinted that they are working on some sort of raytracing support for future chips). As the hardware becomes more common outside of just the very high-end enthusiast crowd, I think we’ll start to see some really exciting things done with it, but I would also expect this to take at least a couple of years (at which point we should also see the second or maybe even third generation of raytracing chips from Nvidia). The future of graphics continues to be interesting; I can’t wait to see it.