Are you building or upgrading a PC to run DaVinci Resolve in 2023? Your choice of GPU is the most important component decision you will make. For the average content creator working with 4K or even 8K video from a typical mirrorless camera, the CPU in your desktop build is much less of a bottleneck than it used to be. The GPU is what powers your real-time color grading and playback performance, and it determines your rendering speed. This is because the GPU, not the CPU, is responsible for real-time decoding and encoding of common consumer and prosumer camera video codecs like H.265/HEVC, as well as all of the complex color math, noise reduction, stabilization, tracking and most plugins.
If you are reading this, you have probably ignored my reasons and recommendation to buy an Apple Silicon powered Mac. Maybe you are just firmly in the PC camp, and that’s fine. Here are the best GPUs to consider for DaVinci Resolve in 2023, or for any other video (or graphics, or photo) post-production use.
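Before you spend anything, it can be worth a quick, rough check of whether your current GPU already exposes hardware video acceleration at all. One option is to ask ffmpeg which hardware acceleration back-ends it can see. This is only a sketch: it assumes ffmpeg is installed and on your PATH, and Resolve uses its own decode path internally, so treat the result as a hint rather than a guarantee.

```python
import subprocess

def available_hwaccels() -> list[str]:
    """Return the hardware acceleration back-ends ffmpeg reports (e.g. cuda, d3d11va, vulkan)."""
    out = subprocess.run(
        ["ffmpeg", "-hide_banner", "-hwaccels"],
        capture_output=True, text=True, check=True
    ).stdout
    # The first line is a header ("Hardware acceleration methods:"); the rest are back-end names.
    return [line.strip() for line in out.splitlines()[1:] if line.strip()]

if __name__ == "__main__":
    accels = available_hwaccels()
    print("ffmpeg hardware acceleration methods:", ", ".join(accels) or "none found")
    if "cuda" in accels or "d3d11va" in accels:
        print("GPU-accelerated H.264/H.265 decode looks available on this system.")
```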
I’ve added Amazon product links where there is availability. These are affiliate links.
DaVinci Resolve NVIDIA and AMD GPU Ranking 2023
I have updated my previous list to add the Nvidia 40-series and the Radeon RX 7900 XTX. These are the best GPUs for DaVinci Resolve in 2023 to date.
GPU | Architecture | H.264/H.265 Encoding/Decoding | Clock | Memory | Cores | Memory Bandwidth | Power Consumption |
---|---|---|---|---|---|---|---|
Nvidia GeForce RTX 4090 | Ada Lovelace | Y (NVENC / NVDEC) | 2300MHz (2520MHz Boost Clock) | 24GB GDDR6X | CUDA Cores: 16384 | 1008GB/s | 450W |
Nvidia GeForce RTX 4080 | Ada Lovelace | Y (NVENC / NVDEC) | 2210MHz (2510MHz Boost Clock) | 16GB GDDR6X | CUDA Cores: 9728 | 717GB/s | 320W |
AMD Radeon RX 7900 XTX | RDNA3 | Y | 2300MHz (2500MHz Boost Clock) | 24GB GDDR6 | Compute Units: 96 Stream Processors: 6144 | 960GB/s | 355W |
Nvidia GeForce RTX 3090 Ti | Ampere | Y (NVENC / NVDEC) | 1560MHz (1860MHz Boost Clock) | 24GB GDDR6X | CUDA Cores: 10752 | 1008GB/s | 450W |
Nvidia GeForce RTX 3090 | Ampere | Y (NVENC / NVDEC) | 1700MHz | 24GB GDDR6X | CUDA Cores: 10496 | 936GB/s | 350W |
Nvidia GeForce RTX 3080 Ti | Ampere | Y (NVENC / NVDEC) | 1370MHz (1670MHz Boost Clock) | 12GB GDDR6X | CUDA Cores: 10240 | 912GB/s | 350W |
Nvidia GeForce RTX 4070 Ti | Ada Lovelace | Y (NVENC / NVDEC) | 2310MHz (2610MHz Boost Clock) | 12GB GDDR6X | CUDA Cores: 7680 | 504GB/s | 285W |
AMD Radeon RX 6900 XT | RDNA2 | Y | 2015MHz | 16GB GDDR6 | Compute Units: 80 Stream Processors: 5120 | 512GB/s | 300W |
Nvidia GeForce RTX 3080 | Ampere | Y (NVENC / NVDEC) | 1710MHz | 10GB GDDR6X | CUDA Cores: 8704 | 760GB/s | 320W |
AMD Radeon RX 6800 XT | RDNA2 | Y | 1815MHz | 16GB GDDR6 | Compute Units: 72 Stream Processors: 4608 | 512GB/s | 300W |
AMD Radeon RX 6800 | RDNA2 | Y | 1815MHz | 16GB GDDR6 | Compute Units: 60 Stream Processors: 3840 | 512GB/s | 250W |
Nvidia GeForce RTX 3070 | Ampere | Y (NVENC / NVDEC) | 1730MHz | 8GB GDDR6 | CUDA Cores: 5888 | 448GB/s | 220W |
Nvidia Titan RTX | Turing | Y (NVENC / NVDEC) | 1770MHz | 24GB GDDR6 | CUDA Cores: 4608 | 672GB/s | 280W |
Nvidia GeForce RTX 2080 Ti | Turing | Y (NVENC / NVDEC) | 1635MHz | 11GB GDDR6 | CUDA Cores: 4352 | 616GB/s | 250W |
Nvidia GeForce RTX 2080 Super | Turing | Y (NVENC / NVDEC) | 1815MHz | 8GB GDDR6 | CUDA Cores: 3072 | 496GB/s | 250W |
Nvidia GeForce RTX 2080 | Turing | Y (NVENC / NVDEC) | 1710MHz | 8GB GDDR6 | CUDA Cores: 2944 | 448GB/s | 215W |
Nvidia GeForce RTX 2070 Super | Turing | Y (NVENC / NVDEC) | 1770MHz | 8GB GDDR6 | CUDA Cores: 2560 | 448GB/s | 215W |
AMD Radeon VII | GCN 5th Gen | Y (UVD, VCE) | 1905MHz | 16GB HBM2 | Stream Processors: 3840 | 1000GB/s | 300W |
Nvidia Titan X | Pascal | Y (NVENC / NVDEC) | 1480MHz | 12GB GDDR5X | CUDA Cores: 3584 | 480GB/s | 250W |
Nvidia GeForce GTX 1080 Ti | Pascal | Y (NVENC / NVDEC) | 1582MHz | 11GB GDDR5X | CUDA Cores: 3584 | 484GB/s | 250W |
AMD Radeon RX 5700 XT | RDNA | Y (VCN) | 1905MHz | 8GB GDDR6 | Stream Processors: 2560 | 448GB/s | 225W |
Nvidia GeForce RTX 2070 | Turing | Y (NVENC / NVDEC) | 1620MHz | 8GB GDDR6 | CUDA Cores: 2304 | 448GB/s | 175W |
Nvidia GeForce RTX 2060 Super | Turing | Y (NVENC / NVDEC) | 1650MHz | 8GB GDDR6 | CUDA Cores: 2176 | 448GB/s | 175W |
Nvidia GeForce GTX 1080 | Pascal | Y (NVENC / NVDEC) | 1733MHz | 8GB GDDR5X | CUDA Cores: 2560 | 320GB/s | 180W |
AMD Radeon RX 5700 | RDNA | Y (VCN) | 1725MHz | 8GB GDDR6 | Stream Processors: 2304 | 448GB/s | 180W |
AMD Radeon RX Vega 64 | GCN 5th Gen | Y (UVD, VCE) | 1546MHz | 8GB HBM2 | Stream Processors: 4096 | 484GB/s | 295W |
Nvidia GeForce RTX 2060 | Turing | Y (NVENC / NVDEC) | 1680MHz | 6GB GDDR6 | CUDA Cores: 1920 | 336GB/s | 160W |
AMD Radeon RX Vega 56 | GCN 5th Gen | Y (UVD, VCE) | 1471MHz | 8GB HBM2 | Stream Processors: 3584 | 410GB/s | 210W |
Nvidia GeForce GTX 1070 Ti | Pascal | Y (NVENC / NVDEC) | 1683MHz | 8GB GDDR5 | CUDA Cores: 2432 | 256GB/s | 180W |
Nvidia GeForce GTX 1660 Ti | Turing | Y (NVENC / NVDEC) | 1770MHz | 6GB GDDR6 | CUDA Cores: 1536 | 288GB/s | 120W |
Nvidia GeForce GTX 1660 Super | Turing | Y (NVENC / NVDEC) | 1785MHz | 6GB GDDR6 | CUDA Cores: 1408 | 336GB/s | 125W |
Nvidia GeForce GTX 1660 | Turing | Y (NVENC / NVDEC) | 1785MHz | 6GB GDDR5 | CUDA Cores: 1408 | 192GB/s | 120W |
Nvidia GeForce GTX 1070 | Pascal | Y (NVENC / NVDEC) | 1683MHz | 8GB GDDR5 | CUDA Cores: 1920 | 256GB/s | 150W |
AMD Radeon RX 590 | GCN 4th Gen | Y (UVD, VCE) | 1545MHz | 8GB GDDR5 | Stream Processors: 2304 | 256GB/s | 225W |
AMD Radeon RX 580 | GCN 4th Gen | Y (UVD, VCE) | 1340MHz | 8GB GDDR5 | Stream Processors: 2304 | 256GB/s | 185W |
AMD Radeon RX 570 | GCN 4th Gen | Y (UVD, VCE) | 1244MHz | 4GB GDDR5 | Stream Processors: 2048 | 224GB/s | 150W |
Nvidia GeForce GTX 1650 | Turing | Y (NVENC / NVDEC) | 1665MHz | 4GB GDDR5 | CUDA Cores: 896 | 128GB/s | 75W |
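Whichever card you land on, VRAM is usually the spec that bites first on 6K/8K timelines, Fusion comps and heavy noise reduction. On an Nvidia card you can see what your system actually reports using the nvidia-smi utility that ships with the driver. The small Python wrapper below is a rough sketch under two assumptions: nvidia-smi is on your PATH, and you have an Nvidia GPU (it will not work on AMD cards).

```python
import subprocess

def gpu_inventory() -> list[dict]:
    """Query nvidia-smi for each GPU's name and total/used VRAM in MiB."""
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=name,memory.total,memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True
    ).stdout
    gpus = []
    for line in out.strip().splitlines():
        # Each line looks like: "NVIDIA GeForce RTX 3080, 10240, 1234"
        name, total, used = [field.strip() for field in line.split(",")]
        gpus.append({"name": name, "vram_total_mib": int(total), "vram_used_mib": int(used)})
    return gpus

if __name__ == "__main__":
    for gpu in gpu_inventory():
        print(f'{gpu["name"]}: {gpu["vram_used_mib"]} / {gpu["vram_total_mib"]} MiB VRAM in use')
```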