It feels strange comparing two GPUs with a $500 price difference, but that’s what this round of GPUs has set up, with AMD’s top offering, the Radeon RX 6900 XT, going toe to toe with Nvidia’s RTX 3090 whilst retailing at just two-thirds the price. It’s shaping up to be a fantastic end to the year at the top end of the GPU market, so let’s take a preliminary look at AMD’s flagship RX 6900 XT vs the RTX 3090 and see which enthusiast graphics card may be best suited to your needs.
RX 6900 XT vs RTX 3090: Specifications
| Specification | RX 6900 XT | RTX 3090 |
|---|---|---|
| VRAM | 16GB GDDR6 | 24GB GDDR6X |
| Die size | 536 mm² | 628 mm² |
| Transistors | 26.8 Bn | 28.3 Bn |
| Pixel fill rate | 288.0 GPixel/s | 189.8 GPixel/s |
| Texture rate | 720.0 GTexel/s | 556.0 GTexel/s |
| Ray tracing | 80 ‘Ray Accelerators’ | 82 ‘RT Cores’ |
| PCIe bus | PCIe 4.0 | PCIe 4.0 |
Nvidia opted to use Samsung’s ‘8nm’ process to manufacture its Ampere GPUs because of cost and capacity concerns at TSMC. AMD, however, enjoys the benefit of an established relationship with TSMC and uses its 7nm process to manufacture the RDNA 2 GPUs. It’s not really possible to compare specifications directly between the two GPUs: because of architectural and process differences, you can’t draw conclusions from differing clock speeds, ‘core’ counts, or attributes of that nature. We do know that the RDNA 2 GPUs feature hardware ray tracing for the first time, exposed primarily via the DX12 APIs, so it should enjoy relatively widespread support, and that the 7nm process is well refined and offers exceptional efficiency. There will likely be subtleties as to which architecture performs best under different demands and APIs, and it’s these differences that result in the tit-for-tat we see across game benchmarks. In particular, we know that the RTX 3090’s shader cores are optimized for 4K, giving it non-linear performance scaling as resolution increases. We’ll have to wait and see which wins out overall once consumer versions of the RX 6900 XT are available to be put to the test.
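One set of numbers you can sanity-check directly is the fill rates: they aren’t independent measurements but simply unit counts multiplied by boost clock. A minimal sketch, assuming the publicly listed ROP/TMU counts and reference boost clocks (partner cards ship with different clocks, so treat these as illustrative inputs):

```python
def fill_rates(rops: int, tmus: int, boost_ghz: float) -> tuple[float, float]:
    """Return (pixel rate in GPixel/s, texture rate in GTexel/s).

    Each ROP retires one pixel per clock and each TMU samples one
    texel per clock, so the headline rates are just count * clock.
    """
    return rops * boost_ghz, tmus * boost_ghz

# Assumed reference specs: RX 6900 XT (128 ROPs, 320 TMUs, 2.25 GHz boost)
# and RTX 3090 (112 ROPs, 328 TMUs, 1.695 GHz boost).
rx_6900_xt = fill_rates(rops=128, tmus=320, boost_ghz=2.250)
rtx_3090 = fill_rates(rops=112, tmus=328, boost_ghz=1.695)

print(rx_6900_xt)  # (288.0, 720.0) -> matches the spec table
print(rtx_3090)    # (189.84, 555.96) -> ~189.8 GPixel/s, ~556 GTexel/s
```

Note how AMD’s fill-rate lead comes almost entirely from its much higher clock speed rather than from a wider GPU.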
VRAM: 24GB GDDR6X plays 16GB GDDR6
One area of clear difference is the memory of these two cards in both type and capacity.
Nvidia has forged an exclusive relationship with Micron to develop ‘GDDR6X’ VRAM, very high-performance graphics memory that helps boost the performance of the RTX 3080 and RTX 3090. This memory doesn’t come cheap at an estimated $150 per 10GB, so with 24GB of VRAM this alone accounts for a large proportion of the price difference between the RTX 3090 and the RX 6900 XT. AMD has stuck with more mainstream GDDR6 for the RX 6900 XT. This VRAM is available from a number of vendors, helping drive down costs, and the card carries 16GB rather than 24GB. Indeed, AMD has opted for 16GB of VRAM across all three confirmed RDNA 2 GPUs. 16GB is certainly ample for current and future gaming requirements without incurring additional costs for VRAM that doesn’t benefit most users. The relative impact of the slower VRAM remains to be seen, but AMD has pioneered ‘Smart Access Memory’ for RDNA 2. This technology allows high-bandwidth communication between CPU and GPU across the PCIe 4.0 bus and results in around a 5% performance increase. To access it you need the trinity of a Zen 3 CPU, a B550 or X570 motherboard, and an RDNA 2 GPU. This makes an interesting case for sticking with an AMD ecosystem for a gaming PC this generation, particularly when the gaming performance of Zen 3 CPUs is taken into consideration.
The real reason for these differences lies in how the companies foresee their GPUs being used: the 24GB of VRAM and the absence of a ‘Titan’ model for Ampere show us that the RTX 3090 is in fact aimed at content creation, machine learning, and other applications that use the prodigious VRAM capacity and advanced technologies such as the Tensor cores. This fast-expanding market is where the RTX 3090 really excels, and also where it justifies its price: this is a GPU designed to make you money by crunching workflows at top rate, not just a slightly better gaming GPU. 24GB of VRAM is excessive even for 4K gaming at the highest settings and will remain so for the foreseeable future. Meanwhile, the blindingly fast VRAM makes perfect sense when manipulating massive data sets for machine learning and other GPU-based computation tasks.
AMD, however, have their sights set firmly on the gaming crown with the RX 6900 XT, and their use of 16GB of GDDR6 VRAM signifies this. It is still plenty for almost any reasonable productivity or computational workload as well: until now, the only GPUs with 16GB of VRAM were AMD’s ill-fated Radeon VII and Nvidia Quadro GPUs specifically configured for computation, not gaming.
To summarize, whilst VRAM capacity is an obvious point of difference, it’s unlikely to be decisive in the gaming performance of either GPU. The 24GB of VRAM on the RTX 3090 should only be considered an advantage where it will actually be put to use.
Gaming performance: Speculation vs reality
The RX 6900 XT lays down some impressive core specifications, including 26.8 billion transistors and pixel fill rate and floating-point figures that exceed those of the RTX 3090. However, we can’t state with certainty that these will translate into better gaming performance. They are indicative of an exceptionally high-performance GPU, though, and given the focus on rasterization rather than machine learning or ray tracing hardware, it seems likely that in straight gaming performance the RX 6900 XT will have the measure of the RTX 3090. This is a huge shift for AMD, who haven’t been able to compete at the very top of the market for years now. To do so at a price point $500 below the RTX 3090 is truly remarkable and makes the RTX 3090 look like bad value for gaming, even if the AMD card’s performance falls a little short. Any shortfall could be down to the use of slower GDDR6 memory, although the 16GB VRAM capacity is likely to be ample for all gaming usage.
Nvidia has been shouting loudly about its use of advanced technologies like DLSS 2.0, which uses trained AI networks to upscale images in real time, giving a quality and performance boost. It is impressive technology, but it’s also very limited, with just a few compatible titles. It’s therefore not a ‘killer blow’ that this is lacking from AMD’s products, since for most games it will still be pure rendering performance that matters. Overall there is certainly a more feature-rich environment on the Nvidia side, with DLSS 2.0, RTX, NVENC, Tensor cores, and a host of other bespoke Nvidia technologies. AMD meanwhile has features like an embedded stream encoder, Radeon Anti-Lag, AMD Link, and Radeon Boost, which seek to improve performance and responsiveness in gaming. Ultimately, which feature set has more value to you comes down to whether you actually use these features and how much value you feel they bring to your gaming experience. There’s no doubt that Nvidia has the more attractive and better-developed feature set right now, at a premium price point of course.
AMD’s own benchmarks have to be taken with healthy scepticism, but they’re touting equal or sometimes better performance than the RTX 3090 across a range of titles at 4K resolution. That’s truly impressive for a GPU that’s $500 cheaper. We’ll have to wait for broader independent testing to be sure, but it’s safe to say that at this price the RX 6900 XT will be the high-end GPU to beat at 4K.
Machine learning and Productivity: RTX 3090 is the card to beat.
If you’re looking for a GPU to use as a workhorse, not a plaything, it’s likely the RTX 3090 will be the card of choice. Features such as 24GB of VRAM, an unsurpassed number of Tensor cores, extremely high rendering capacity, and ray tracing abilities make it a hot ticket for content creators everywhere. Crunching data and running machine learning workloads are what this GPU is really targeted at: CUDA-optimized code has become standard in scientific research, and more cores mean more performance, whilst AI is a vast and growing industry that will define the next generation of computing. The Tensor cores of the RTX 3090 make it an excellent tool for small-scale AI development.
Meanwhile, the RX 6900 XT is likely to have utility as a computational device as well: OpenCL gives access to a huge library of computational tools that are compatible with the RDNA 2 architecture. Optimized code will undoubtedly run well, and with these GPUs being substantially cheaper, it may be that a trio of RX 6900 XTs outperforms a pair of RTX 3090s. However, the lack of Tensor cores, and the ray tracing hardware being an unknown quantity at this point, mean it will likely find its niche rather than becoming the go-to GPU for such tasks.
Workloads in machine learning, computation and productivity are so specific that if you’re building a PC to handle them you’d be wise to check benchmarks specific to the exact tasks you’re targeting prior to specifying your system.
Power and efficiency: Unknown for now, but likely in AMD’s favor
Efficiency may not be the most exciting metric, but it’s incredibly important in dictating the ultimate performance of a GPU. Heat eventually overwhelms the silicon, meaning clock speeds have to be reduced to protect the chip: the more efficient the design, the more performance can be squeezed out of it at the same power level. AMD claims a 300W TDP for the RX 6900 XT but, interestingly, recommends an 850W PSU, indicating that peak loads may be substantially higher than that. The RTX 3090’s hunger for power is well documented, with its 350W TDP being a relatively conservative estimate and a 750W PSU recommended to support it via three 8-pin PCIe connectors (or the Founders Edition’s proprietary 12-pin connector). If you’re considering either of these GPUs, ensure the power supply you’ve chosen is up to the challenge.
Pricing: A clear win for AMD
Finally, we come to pricing. Whilst the RTX 2080 Ti broke the $1,000 ceiling for consumer GPUs, it’s clear that AMD and Nvidia have substantially different opinions of this price point. Nvidia is happy to keep reaching upwards with its flagship GPUs: whilst the stated launch price is $1,499, most partner cards are significantly more expensive, approaching $2,000 for the most sought-after models.
Meanwhile, AMD obviously baulks at exceeding $1,000 for its flagship GPU. This may be down to a number of reasons: there’s less VRAM and fewer hardware features; they may know that performance isn’t quite on a par with the RTX 3090; or they may feel that this GPU doesn’t have the additional productivity applications that warrant such a high price tag. There’s also the possibility that they simply want to ensure no one buys a $1,500 GPU from a competitor when their superior product exists at $1,000.
Nevertheless, the price point is extremely high for both GPUs, and to justify it for gaming you need to be playing at high resolutions and refresh rates, with a top-tier system in support. With an overall system cost likely to be around $3,000, it’s possible the extra $500 doesn’t matter so much. That doesn’t mean the RTX 3090 is ‘good value’, merely that it can perhaps be justified for those who want the very best or need the additional functionality.
Once we know the real-world performance of the RX 6900 XT, we’ll be able to make an informed decision on the value it offers at the $1,000 price point, but it looks set to be a compelling option, matching the performance of the RTX 3090 at two-thirds of the price.
The RTX 3090 has been available for over a month now, and the lucky few who have been able to obtain one are enjoying the most powerful consumer GPU ever made. Offering around 15% more performance than the RTX 3080 in most games whilst costing twice as much, it’s certainly a ‘cost no object’ choice for those demanding the very best. However, the reality is that this GPU is aimed at a different market, despite Nvidia’s claims of viable 8K gaming.
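That value gap is easy to quantify. A rough sketch, assuming launch MSRPs and the article’s figures of roughly 15% more performance than an RTX 3080 for both flagship cards (AMD’s benchmark claims, pending independent testing):

```python
# Assumed inputs: launch MSRPs, with relative_perf normalized to the
# RTX 3080 = 1.00. The ~1.15 figures are claims, not measured results.
cards = {
    "RTX 3080": {"price": 699, "relative_perf": 1.00},
    "RTX 3090": {"price": 1499, "relative_perf": 1.15},
    "RX 6900 XT": {"price": 999, "relative_perf": 1.15},
}

for name, card in cards.items():
    # Performance delivered per $1,000 spent, at MSRP.
    value = card["relative_perf"] / card["price"] * 1000
    print(f"{name}: {value:.2f} perf per $1000")
```

On these assumptions the RTX 3090 delivers roughly half the performance per dollar of the RTX 3080, while the RX 6900 XT sits much closer to it, which is the arithmetic behind calling the RTX 3090 a ‘cost no object’ card.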
The RX 6900 XT, on the other hand, appears to be much more focused on gaming. Sitting just shy of four figures at $999, and with 16GB of VRAM rather than 24GB, it becomes a much more justifiable top-tier GPU if gaming is your focus; the difference in price buys you a monitor to pair with this GPU, after all. Combined with AMD’s implementation of ‘Smart Access Memory’, which optimizes communication between a Zen 3 AMD CPU and the GPU and provides a modest but measurable performance boost, the RX 6900 XT begins to look very attractive. Right now we’d recommend the RTX 3090 if you use the Tensor or RT cores for productivity or machine learning workflows. If you’re looking for the ultimate gaming GPU, then it makes sense to wait and see what AMD has in store with the RX 6900 XT, because it looks like it will offer the best performance $1,000 has ever been able to buy.