GPU Benchmarks: NVIDIA H100 NVL (PCIe) vs. NVIDIA RTX 4090 vs. NVIDIA RTX 4080
We benchmark the NVIDIA H100 NVL (PCIe), NVIDIA RTX 4090, and NVIDIA RTX 4080 and compare AI performance (deep learning training in FP16 and FP32 with PyTorch and TensorFlow), 3D rendering performance in the most popular applications (Octane, V-Ray, Redshift, Blender, Luxmark, Unreal Engine), and Cryo-EM performance (RELION).
Our benchmarks will help you decide which GPU (NVIDIA RTX 4090/4080, H100 Hopper, H200, A100, RTX 6000 Ada, A6000, or A5000) is the best fit for your needs. We provide an in-depth analysis of each graphics card's AI performance so you can make the most informed decision possible, along with deep learning and 3D rendering benchmarks to help you get the most out of your hardware.
Looking for a GPU workstation or server for AI/ML, design, rendering, simulation, or molecular dynamics?
Explore BIZON AI workstations or GPU servers, or contact us to discuss our customizable AI solutions.
Featured GPU benchmarks:
Deep Learning GPU Benchmarks 2024–2025 (updated April 2025)

Training throughput in points (higher is better). RTX 4080 results were not available for these tests; "-" marks configurations without published results.

| Benchmark | GPUs | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 |
| --- | --- | --- | --- |
| ResNet-50 (FP16) | 1 | 3042 | 1720 |
| ResNet-50 (FP16) | 4 | 11989 | 5934 |
| ResNet-50 (FP16) | 8 | 30070 | - |
| ResNet-50 (FP32) | 1 | 1350 | 927 |
| ResNet-50 (FP32) | 4 | 5513 | 1715 |
| ResNet-50 (FP32) | 8 | 14725 | - |
| ResNet-152 (FP16) | 1 | 1232 | - |
| ResNet-152 (FP16) | 4 | 4858 | - |
| ResNet-152 (FP16) | 8 | 12411 | - |
| ResNet-152 (FP32) | 1 | 520 | - |
| ResNet-152 (FP32) | 4 | 2124 | - |
| ResNet-152 (FP32) | 8 | 5950 | - |
| Inception V3 (FP16) | 1 | 1835 | - |
| Inception V3 (FP16) | 4 | 7232 | - |
| Inception V3 (FP16) | 8 | 15827 | - |
| Inception V3 (FP32) | 1 | 854 | - |
| Inception V3 (FP32) | 4 | 3487 | - |
| Inception V3 (FP32) | 8 | 9837 | - |
| Inception V4 (FP16) | 1 | 848 | - |
| Inception V4 (FP16) | 4 | 3343 | - |
| Inception V4 (FP16) | 8 | 6819 | - |
| Inception V4 (FP32) | 1 | 380 | - |
| Inception V4 (FP32) | 4 | 1551 | - |
| Inception V4 (FP32) | 8 | 4496 | - |
| VGG16 (FP16) | 1 | 1085 | - |
| VGG16 (FP16) | 4 | 4276 | - |
| VGG16 (FP16) | 8 | 15908 | - |
| VGG16 (FP32) | 1 | 776 | - |
| VGG16 (FP32) | 4 | 3171 | - |
| VGG16 (FP32) | 8 | 8762 | - |
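Multi-GPU scaling can be read directly out of the ResNet-50 (FP16) scores listed above with the usual efficiency formula: the N-GPU score divided by N times the 1-GPU score. A minimal sketch (the function name is ours, not part of any benchmark harness):

```python
def scaling_efficiency(score_n: float, score_1: float, n_gpus: int) -> float:
    """Fraction of ideal linear scaling achieved by an n-GPU run."""
    return score_n / (n_gpus * score_1)

# ResNet-50 (FP16) scores from this page
h100_1gpu, h100_4gpu = 3042, 11989       # NVIDIA H100 NVL (PCIe)
rtx4090_1gpu, rtx4090_4gpu = 1720, 5934  # NVIDIA RTX 4090

print(f"H100 NVL 4-GPU efficiency: {scaling_efficiency(h100_4gpu, h100_1gpu, 4):.1%}")
print(f"RTX 4090 4-GPU efficiency: {scaling_efficiency(rtx4090_4gpu, rtx4090_1gpu, 4):.1%}")
```

The H100 NVL scales near-ideally at 4 GPUs (about 99%), while the RTX 4090, which lacks NVLink on this generation, lands around 86%.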
3D / GPU Rendering Benchmarks 2024–2025 (updated April 2025)

Single-GPU results. "n/a" means no result is available for that card; Redshift reports render time, so lower is better there.

| Benchmark | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| V-Ray (points) | n/a | 5556 | 4048 |
| Octane (points) | n/a | 1445 | 986 |
| Redshift (minutes) | n/a | 1.16 | 1.47 |
| Blender (score) | 5069.86 | 12123.96 | 9258.94 |
| Luxmark (points) | n/a | 158815 | n/a |
| Unreal Engine | n/a | n/a | n/a |
RELION Cryo-EM Benchmarks 2024–2025 (updated April 2025)

Total run time results for the NVIDIA H100 NVL (PCIe) are not yet available in either the 1-GPU or 4-GPU configuration.
Llama 3 70B Inference Benchmark 2024–2025 (updated April 2025)

Eval rate in tokens/s (higher is better).

| GPUs | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 |
| --- | --- | --- |
| 1 | n/a | 9.95 |
| 2 | n/a | 19.99 |
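The RTX 4090 eval rates above scale almost linearly from one card to two. A quick sketch of the speedup calculation (helper name is ours):

```python
def speedup(rate_multi: float, rate_single: float) -> float:
    """Observed speedup from adding GPUs, based on decode (eval) rate."""
    return rate_multi / rate_single

# RTX 4090 Llama 3 70B eval rates from this page (tokens/s)
single_gpu, dual_gpu = 9.95, 19.99
print(f"2x RTX 4090 speedup: {speedup(dual_gpu, single_gpu):.2f}x")
```

At roughly 2.01x, the second card effectively doubles decode throughput for this workload.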
| Board Design | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Length | 11 in / 268 mm | 13 in / 336 mm | 13 in / 336 mm |
| Outputs | No outputs | 1x HDMI, 3x DisplayPort | 1x HDMI, 3x DisplayPort |
| Power Connectors | 8-pin EPS | 1x 16-pin | 1x 16-pin |
| Slot Width | Dual-slot | Triple-slot | Dual-slot |
| TDP | 700 W | 450 W | 320 W |
| Clock Speeds | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Boost Clock | 1837 MHz | 2520 MHz | 2505 MHz |
| Base Clock | 1665 MHz | 2235 MHz | 2205 MHz |
| Memory Clock | 5300 MHz | 21200 MHz | 23000 MHz |
| Graphics Card | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Bus Interface | PCIe 5.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 |
| Generation | Server Hopper (Hxx) | GeForce 40 | GeForce 40 |
| Graphics Features | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| OpenCL | 3.0 | 3.0 | 3.0 |
| CUDA Compute Capability | 9.0 | 8.9 | 8.9 |
| DirectX | - | 12 Ultimate (12_2) | 12 Ultimate (12_2) |
| OpenGL | - | 4.6 | 4.6 |
| Shader Model | - | 6.7 | 6.7 |
| Graphics Processor | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Architecture | Hopper | Ada Lovelace | Ada Lovelace |
| GPU Name | GH100 | AD102-300-A1 | AD103-300-A1 |
| Die Size | 814 mm² | 608 mm² | 379 mm² |
| Process Size | 5 nm | 5 nm | 5 nm |
| Transistors | 80,000 million | 76,300 million | 45,900 million |
| Memory | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Memory Size | 96 GB | 24 GB | 16 GB |
| Memory Type | HBM3 | GDDR6X | GDDR6X |
| Memory Bus | 5120 bit | 384 bit | 256 bit |
| Bandwidth | 3360 GB/s | 1018 GB/s | 735.7 GB/s |
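The bandwidth figures above follow from bus width and per-pin data rate: bandwidth (GB/s) ≈ (bus width in bits / 8) × data rate in GT/s. A sketch using this page's values; the per-pin data rates here are inferred from the listed bandwidths, so treat them as approximate:

```python
def mem_bandwidth_gbs(bus_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer cycle x transfer rate."""
    return bus_bits / 8 * data_rate_gtps

# Approximate effective data rates, back-solved from the listed bandwidths
print(mem_bandwidth_gbs(5120, 5.25))  # H100 NVL (PCIe), HBM3   -> ~3360 GB/s
print(mem_bandwidth_gbs(384, 21.2))   # RTX 4090, GDDR6X        -> ~1018 GB/s
print(mem_bandwidth_gbs(256, 23.0))   # RTX 4080, GDDR6X        -> ~736 GB/s
```

The wide 5120-bit HBM3 interface is why the H100 NVL delivers more than three times the bandwidth of the RTX 4090 despite a far lower memory clock.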
| Render Config | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| ROPs | 24 | 192 | 96 |
| Shading Units / CUDA Cores | 16896 | 16384 | 9728 |
| TMUs | 528 | 512 | 304 |
| Tensor Cores | 528 | 512 | 304 |
| RT Cores | - | 128 | 76 |
| Theoretical Performance | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| FP16 (half) | 248.3 TFLOPS | 82.58 TFLOPS | 48.74 TFLOPS |
| FP32 (float) | 62.08 TFLOPS | 82.58 TFLOPS | 48.74 TFLOPS |
| FP64 (double) | 31040 GFLOPS | 1290 GFLOPS | 761.5 GFLOPS |
| Pixel Rate | 44.09 GPixel/s | 483.8 GPixel/s | 240.5 GPixel/s |
| Texture Rate | 969.9 GTexel/s | 1290 GTexel/s | 761.5 GTexel/s |
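The FP32 figures above are consistent with the standard shader-throughput formula: TFLOPS = CUDA cores × boost clock × 2 (each core retires one fused multiply-add per clock, which counts as two FLOPs). A sketch checking the listed numbers:

```python
def fp32_tflops(cuda_cores: int, boost_mhz: float) -> float:
    """Peak FP32 throughput: cores x clock x 2 FLOPs per FMA cycle."""
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

print(fp32_tflops(16896, 1837))  # H100 NVL (PCIe) -> ~62.1 TFLOPS
print(fp32_tflops(16384, 2520))  # RTX 4090        -> ~82.6 TFLOPS
print(fp32_tflops(9728, 2505))   # RTX 4080        -> ~48.7 TFLOPS
```

The consumer Ada cards list identical FP16 and FP32 rates, while the H100's listed FP16 and FP64 rates are multiples of its FP32 rate; that reflects Hopper's datacenter-oriented execution units rather than this formula.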
| Price | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Release Date | Mar 21st, 2023 | Oct 12th, 2022 | Nov 16th, 2022 |
| MSRP | - | $1,599.00 | $1,199.00 |
Test Bench Configuration

| | NVIDIA H100 NVL (PCIe) | NVIDIA RTX 4090 | NVIDIA RTX 4080 |
| --- | --- | --- | --- |
| Hardware | BIZON X5000 | BIZON X5500 | BIZON X5500 |

Software (3D rendering): V-Ray Benchmark 5, Octane Benchmark 2020.1.5, Redshift Benchmark 3.0.28 Demo, Blender 2.90, Luxmark 3.1.