GPU technology has been advancing relentlessly in recent years, driven primarily by gamers demanding the very best in graphical fidelity and framerates. Fundamentally, GPUs are massively parallel computers in their own right, with a dedicated BIOS, specialised processing cores and their own RAM. So long as a task is parallelisable and can be written in a form that plays to a GPU's strengths, the technology can perform a wide range of tasks far faster than a CPU. From jobs that make obvious sense for 3D hardware, like rendering 3D scenes, to more esoteric uses including scientific simulation and hunting for patterns in vast volumes of data, GPUs are coming to the fore in a wide range of practical applications. In this article I'll run through some of the specialised tasks GPUs excel at and make recommendations for professionals who want to perform work on a desktop PC that until recently was the preserve of distributed computing networks.
Best GPUs for Workstations – Our Recommendations
| Use Case | Our Pick |
| --- | --- |
| Best Budget GPU for CUDA Acceleration | EVGA RTX 2060 KO |
| Best GPU for Graphic Design | Zotac GTX 1660 Super Twin Fan |
| Best GPU for Video Editing | MSI RTX 2060 Super Ventus OC |
| Best GPU for 3D Rendering | Gigabyte RTX 2080 Super Turbo (x2) |
| Best Graphics Card for CAD (Autodesk) | NVIDIA Quadro P2000 |
| Best GPU for Machine Learning | Asus RTX 2080 Ti Turbo |
Best Budget GPU for CUDA Acceleration
EVGA’s recently released revamp of the RTX 2060 was meant to be a simple price drop of the RTX 2060 to compete with AMD’s RX 5600 XT launch. Under test, however, these cards showed a number of interesting performance discrepancies. The reason lies in the GPU die used: the RTX 2070 Super and RTX 2080 use the TU104 die, but some chips don’t pass quality-control validation. These usable but incomplete chips are ‘fused off’ to reduce the render pipelines down to RTX 2060 specification, and they have found their way into EVGA’s RTX 2060 KO. It’s clear, though, that not all of the die was deactivated, and in certain compute tasks the 2060 KO performs like the cards for which its processor was destined. This makes it fantastic value at $300-$320 if you intend to run compute-intensive tasks such as rendering in Blender, where it will perform nearly as well as an RTX 2070 Super in many cases.
Of course, in gaming it’s an RTX 2060 and will perform as such, which is to say it’s excellent at 1080p and more than capable of good 1440p gaming so long as you’re not intent on using the RTX features in the handful of games that offer them. If you want a versatile, good-value GPU with knockout compute performance that punches well above its price tag, the EVGA RTX 2060 KO or KO Ultra is the GPU for you.
Best GPU for Graphic Design
For most graphic artists working with photographs or 2D design, a GPU provides a small but noticeable performance boost, particularly with Adobe suite products, which can use CUDA cores to accelerate computationally intensive tasks like transforms. Moving from onboard graphics to a dedicated GPU also allows larger screens and higher refresh rates, as well as an easier path to the multi-monitor setups many professionals find beneficial. However, it’s rarely worthwhile opting for a high-end GPU, and as a do-it-all solution entry-level Nvidia GTX cards offer good value for money and performance. Certification is not required for visual design the way it is for engineering or other critical design applications, and the GeForce line-up now supports 10-bit displays for ultimate colour accuracy (using the Studio drivers). Acceleration is likely to be modest unless you do heavy 3D work (see ‘rendering’), so it’s wise to save money with a capable but not outrageous GPU. That’s why we recommend a GTX 1660 Super for most graphic design and digital art applications.
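To put that 10-bit support in perspective, here's a quick back-of-the-envelope calculation of how many distinct colours each bit depth can represent per pixel. It's plain arithmetic, nothing card- or driver-specific:

```python
# Colour depth comparison: distinct colours representable per pixel.
# Standard 8-bit-per-channel output vs the 10-bit-per-channel output
# available on GeForce cards when using the Studio drivers.
def colours(bits_per_channel: int, channels: int = 3) -> int:
    """Total distinct colours for the given bit depth per channel."""
    return (2 ** bits_per_channel) ** channels

eight_bit = colours(8)   # 16,777,216 colours
ten_bit = colours(10)    # 1,073,741,824 colours
print(f"8-bit:  {eight_bit:,}")
print(f"10-bit: {ten_bit:,} ({ten_bit // eight_bit}x more)")
```

That 64x jump in representable colours is what makes 10-bit output matter for smooth gradients and colour-critical work.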
Amongst the AIB GTX 1660 Super cards we’d recommend the Zotac Twin Fan at $239. The compact form factor squeezes in two cooling fans for lower temperatures and quieter operation, and it boasts three DisplayPort outputs, which could be vital for a graphics artist with a multiple-display setup. Zotac have focussed on the performance and utility of this card rather than flashy gamer aesthetics, making it a sound choice for a professional graphics workstation.
Best GPU for Video Editing
Video editing and rendering of completed videos uses a combination of GPU and CPU, but the bias in both editing/preview performance and the final render lies mostly with the CPU. On the GPU side, 3D effects and transforms can be accelerated in both live preview and the final render by a capable card, but the benefit isn’t marked unless you’re working with high-resolution footage.
Whilst it depends on your precise requirements, the RTX 2060 Super provides a good balance of cost-effectiveness, 8GB of VRAM, and support in the most widely used video editing packages (Premiere Pro and DaVinci Resolve). Our pick, the MSI Ventus GP OC, is a low-priced card but features a backplate, a robust twin-fan cooling solution and the support of MSI’s well-regarded warranty. It will do sterling work in a cost-effective video production workstation.
Best GPU for 3D Rendering
3D rendering in applications such as Blender can lean heavily on a GPU to reduce scene output times, and Blender performance scales almost linearly with GPU count. Since two RTX 2080 Supers come in at only just over the price of a single RTX 2080 Ti, a pair is a strong option for minimising render times. The RTX 2080 Ti and the Titans offer the possibility of even more VRAM, but that is only likely to be necessary in a professional studio environment where production quality needs to be top notch. As a single-card solution, the RTX 2080 Ti is the best option, provided 11GB of VRAM is enough for the projects in question.
Our recommendation goes to a pair of RTX 2080 Supers: it’s the most versatile and cost-effective way of maximising performance without breaking the bank. Ensure that NVLink is also employed to minimise communication latency between the two GPUs; this can bring an additional 5-10% performance boost.
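As a rough illustration of why the second card pays off, here's a sketch of near-linear scaling plus the NVLink bonus. The efficiency figures are assumptions chosen to match the article's rough numbers, not measured Blender benchmarks:

```python
def render_time(single_gpu_minutes: float, gpus: int,
                scaling_efficiency: float = 0.95,
                nvlink_bonus: float = 0.05) -> float:
    """Estimate render time with near-linear multi-GPU scaling.

    scaling_efficiency and nvlink_bonus are illustrative assumptions:
    each extra GPU contributes ~95% of a full card, and NVLink shaves
    a further ~5% off the multi-GPU time.
    """
    effective_gpus = 1 + (gpus - 1) * scaling_efficiency
    t = single_gpu_minutes / effective_gpus
    if gpus > 1:
        t *= 1 - nvlink_bonus  # NVLink reduces inter-GPU latency
    return t

print(render_time(60, 1))            # one card: 60 minutes
print(round(render_time(60, 2), 1))  # two cards: a bit under half
```

Under these assumptions a one-hour render drops to roughly 29 minutes with the second card, which is why doubling up on 2080 Supers beats a single pricier card for throughput.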
Because rendering is time-consuming and the heat load on the case can be extreme, this is a rare occasion where a blower-style GPU cooler is a good option: heat is exhausted directly out of the case from the GPU, ensuring the case exhaust fans aren’t overwhelmed. Because we’re stacking two GPUs here, the blower design also means neither card is starved of cool air. Only consider standard axial-fan cards in this configuration if you can guarantee them ample airflow.
The Gigabyte RTX 2080 Super Turbo is a strong option at $699, giving good performance at a low cost. It boasts a standard 1815MHz boost clock, although we’d expect to see much higher in actual use, and the throughput of air will prevent thermal throttling under heavy load. A great option for a dual-card setup and for creating that ultimate workstation build for 2020.
Best Graphics Card for CAD (Autodesk)
CAD, and specifically Autodesk software, is a slightly oddball workload for a GPU. Whilst Pascal- or Turing-architecture cards from Nvidia’s GTX and RTX ranges offer strong performance, they aren’t certified by Autodesk. That means technical problems or driver instabilities are entirely your problem. It also means that project output isn’t certified, and in industrial and production environments where critical components are being designed, that’s not acceptable.
This is why we look to the Nvidia Quadro range. These GPUs are made by PNY and are certified by Autodesk, meaning full technical support as well as enhanced performance: even entry-level Quadros outperform high-end consumer RTX cards in CAD-specific tasks. Quadro cards range from the P2000 through to monstrous RTX Turing-based cards with 24GB of VRAM and deep four-figure price tags. However, for almost all AutoCAD work the P2000 will be ample, sporting 5GB of VRAM in a compact single-slot design. A CAD workstation still benefits more from a powerful CPU, so the cost-effective $450 P2000 is a wise choice in most circumstances. There’s only one model available, which simplifies matters considerably. If you’re building a workstation and know your needs are modest, this is the go-to GPU.
Best GPU for Machine Learning
Machine learning has come to the fore in recent years, stepping out of the realm of dedicated supercomputers and into the ATX form factor of affordable consumer hardware. It is a distributed workload, which means it will happily consume as many GPUs as you can afford to throw at the problem; for most single-box solutions that means a practical limit of four. Whilst the Titan V and Titan RTX offer supreme single-card performance, they are cost-prohibitive, meaning consumer RTX 2080 Tis will outperform them for the money. $5,000 will get you two Titan RTXs, or four RTX 2080 Tis with spare change for the power supply to run them all, and this is a wise starting point from which to assess your machine learning needs and decide whether to scale with additional GPUs.
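The budget arithmetic is simple enough to sanity-check. The 2080 Ti price here is the $1,149 street price quoted in this article; the Titan RTX price is an approximation implied by "two for $5,000":

```python
BUDGET = 5_000
TITAN_RTX = 2_500     # approx. street price per card (assumed from the
                      # article's "two for $5,000" figure)
RTX_2080_TI = 1_149   # price of the Asus Turbo model on Amazon

titans = BUDGET // TITAN_RTX         # how many Titans the budget buys
tis = BUDGET // RTX_2080_TI          # how many 2080 Tis it buys
change = BUDGET - tis * RTX_2080_TI  # left over for the power supply

print(f"{titans} Titan RTXs, or {tis} RTX 2080 Tis with ${change} spare")
```

Two cards versus four, with $404 left for a beefy PSU: that's the whole case for the consumer flagship in a budget-conscious ML box.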
Machine learning occupies GPUs full time and for long sessions, so it makes sense to return to ‘turbo’-style coolers to ensure that exhaust air isn’t recirculated. Bearing in mind an RTX 2080 Ti has a TDP of around 280W, it’s entirely possible to create a 1kW hot box if you opt for standard cooling in even a large case!
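A quick sum shows how fast the heat adds up in a four-card box. The GPU figure is the ~280W TDP mentioned above; the draw for the rest of the system is a round-number assumption for illustration:

```python
GPU_TDP_W = 280       # approx. RTX 2080 Ti TDP (from above)
NUM_GPUS = 4          # practical limit for a single-box build
CPU_AND_REST_W = 200  # assumed draw for CPU, drives, fans, etc.

total_w = GPU_TDP_W * NUM_GPUS + CPU_AND_REST_W
print(f"Sustained heat load under full training: ~{total_w}W")
```

Over 1.3kW of sustained heat is why blower cards that dump their exhaust straight out of the case matter here, and why the PSU budget above is not optional.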
Asus make the most cost-effective RTX 2080 Ti blower model, simply called the ‘Turbo’. It’s available at $1,149 on Amazon, making it one of the cheapest models of this flagship card. Since we don’t care about looks, video outputs or anything except buying as many CUDA cores as possible, this GPU is the way forwards.
I hope this whistle-stop tour of GPUs in other applications has been informative and has perhaps helped you identify the correct GPU for your needs. As ever, you know your requirements best, so please use this as a guide, not a shopping list. With creative people, scientists, engineers and designers all harnessing the power of GPUs that have their roots in gaming, we can only guess at the advances the availability of this level of computing power will herald.