GPU Benchmarks NVIDIA H100 NVL (SXM5) vs. NVIDIA RTX 4090
Quick links:
Best GPUs for deep learning, AI development, compute in 2023–2024. Recommended GPU & hardware for AI training, inference (LLMs, generative AI).
GPU training, inference benchmarks using PyTorch, TensorFlow for computer vision (CV), NLP, text-to-speech, etc.
We benchmark the NVIDIA H100 NVL (SXM5) against the NVIDIA RTX 4090 and compare AI performance (deep learning training in FP16 and FP32 with PyTorch and TensorFlow), 3D rendering, and cryo-EM performance in the most popular applications (Octane, V-Ray, Redshift, Blender, Luxmark, Unreal Engine, RELION cryo-EM).
Our benchmarks will help you decide which GPU (NVIDIA RTX 4090/4080, H100 Hopper, H200, A100, RTX 6000 Ada, A6000, or A5000) is the best for your needs. We provide an in-depth analysis of each graphics card's AI performance so you can make the most informed decision possible, along with deep learning and 3D rendering benchmarks that will help you get the most out of your hardware.
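For readers who want to reproduce a training-throughput number of the kind shown in the deep learning tables below, here is a minimal PyTorch sketch of an FP16 ResNet-50 training loop on synthetic data. It is illustrative only: it reports images/sec rather than the site's "points" metric, and the batch size, iteration counts, and use of torchvision are our assumptions, not the exact test harness behind these results.

    import time
    import torch
    import torchvision

    # Minimal mixed-precision (FP16 autocast) ResNet-50 training-throughput sketch.
    # Synthetic data keeps the measurement focused on GPU compute, not data loading.
    device = torch.device("cuda")
    model = torchvision.models.resnet50().to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    criterion = torch.nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()

    batch_size = 128  # arbitrary choice; larger batches generally favor the H100's HBM3
    images = torch.randn(batch_size, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (batch_size,), device=device)

    def step():
        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():  # convolutions/matmuls run in FP16
            loss = criterion(model(images), labels)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()

    for _ in range(10):  # warm-up iterations
        step()
    torch.cuda.synchronize()

    start = time.time()
    iters = 50
    for _ in range(iters):
        step()
    torch.cuda.synchronize()
    print(f"{iters * batch_size / (time.time() - start):.1f} images/sec")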
Looking for a GPU workstation or server for AI/ML, design, rendering, simulation or molecular dynamics?
Explore BIZON AI workstations or GPU servers. Contact us today or explore our customizable AI solutions.
Featured GPU benchmarks:
Deep Learning GPU Benchmarks 2024–2025 (updated May 2025)

Benchmark (points)    GPUs   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090
ResNet-50 (FP16)      1      4197                     1720
ResNet-50 (FP16)      4      15945                    5934
ResNet-50 (FP16)      8      41496                    n/a
ResNet-50 (FP32)      1      1903                     927
ResNet-50 (FP32)      4      7828                     1715
ResNet-50 (FP32)      8      20616                    n/a
ResNet-152 (FP16)     1      1701                     n/a
ResNet-152 (FP16)     4      6461                     n/a
ResNet-152 (FP16)     8      17128                    n/a
ResNet-152 (FP32)     1      733                      n/a
ResNet-152 (FP32)     4      3015                     n/a
ResNet-152 (FP32)     8      8331                     n/a
Inception V3 (FP16)   1      2532                     n/a
Inception V3 (FP16)   4      9618                     n/a
Inception V3 (FP16)   8      21842                    n/a
Inception V3 (FP32)   1      1204                     n/a
Inception V3 (FP32)   4      4952                     n/a
Inception V3 (FP32)   8      13771                    n/a
Inception V4 (FP16)   1      1170                     n/a
Inception V4 (FP16)   4      4446                     n/a
Inception V4 (FP16)   8      9411                     n/a
Inception V4 (FP32)   1      535                      n/a
Inception V4 (FP32)   4      2202                     n/a
Inception V4 (FP32)   8      6294                     n/a
VGG16 (FP16)          1      1497                     n/a
VGG16 (FP16)          4      5687                     n/a
VGG16 (FP16)          8      21953                    n/a
VGG16 (FP32)          1      1095                     n/a
VGG16 (FP32)          4      4503                     n/a
VGG16 (FP32)          8      12267                    n/a
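One useful way to read the multi-GPU rows is as scaling efficiency: the measured multi-GPU score divided by the single-GPU score times the number of GPUs. The short sketch below simply applies that arithmetic to the ResNet-50 (FP16) figures in the table above; the ~95% and ~86% results are derived from those numbers, not separately measured.

    # Scaling efficiency = multi-GPU score / (num_gpus * single-GPU score),
    # using the ResNet-50 (FP16) points from the table above.
    def scaling_efficiency(single: float, multi: float, num_gpus: int) -> float:
        return multi / (num_gpus * single)

    print(f"H100 NVL (SXM5), 4 GPUs: {scaling_efficiency(4197, 15945, 4):.0%}")  # ~95%
    print(f"RTX 4090, 4 GPUs:        {scaling_efficiency(1720, 5934, 4):.0%}")   # ~86%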
3D / GPU Rendering Benchmarks 2024–2025 (updated May 2025)

Benchmark       GPUs   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090
V-Ray           1      n/a                      5556 points
Octane          1      n/a                      1445 points
Redshift        1      n/a                      1.16 minutes
Blender         1      n/a                      12123.96 score
Luxmark         1      n/a                      158815 points
Unreal Engine   1      n/a                      n/a
RELION Cryo-EM Benchmarks 2024–2025 (updated May 2025)

Benchmark        GPUs   NVIDIA H100 NVL (SXM5)
Total run time   1      n/a
Total run time   4      n/a
Llama 3 70B Inference Benchmark 2024–2025 (updated May 2025)

Benchmark   GPUs   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090
Eval rate   1      n/a                      9.95 tokens/s
Eval rate   2      n/a                      19.99 tokens/s
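The eval rate above is tokens generated per second. The page does not say which inference stack produced it, but as one hedged example of how such a figure can be measured, the sketch below queries a local Ollama server and derives tokens/s from the eval_count and eval_duration fields of its /api/generate response; the llama3:70b model tag, host, and prompt are assumptions rather than the configuration behind the table.

    import json
    import urllib.request

    # Hypothetical eval-rate measurement against a local Ollama server.
    # eval_count (tokens) and eval_duration (nanoseconds) are fields of
    # Ollama's /api/generate response; model tag and prompt are placeholders.
    payload = json.dumps({
        "model": "llama3:70b",
        "prompt": "Summarize the difference between HBM3 and GDDR6X memory.",
        "stream": False,
    }).encode()

    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)

    tokens_per_s = result["eval_count"] / (result["eval_duration"] / 1e9)
    print(f"Eval rate: {tokens_per_s:.2f} tokens/s")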
Board Design       NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090           Difference
Outputs            No outputs               1x HDMI, 3x DisplayPort   -
Power Connectors   None                     1x 16-pin                 -
Slot Width         N/A                      Triple-slot               -
TDP                700 W                    450 W                     -250 W (-36%)
Length             -                        13 in / 336 mm            -
Clock Speeds   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
Boost Clock    1837 MHz                 2520 MHz          683 MHz (37%)
GPU Clock      1665 MHz                 2235 MHz          570 MHz (34%)
Memory Clock   5300 MHz                 21200 MHz         15900 MHz (300%)
Graphics Card   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
Bus Interface   PCIe 5.0 x16             PCIe 4.0 x16      -
Generation      Server Hopper (Hxx)      GeForce 40        -
Graphics Features   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090      Difference
OpenCL              3.0                      3.0                  -
CUDA                9.0                      8.9                  -
DirectX             -                        12 Ultimate (12_2)   -
OpenGL              -                        4.6                  -
Shader Model        -                        6.7                  -
Graphics Processor   NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
Architecture         Hopper                   Ada Lovelace      -
Die Size             814 mm²                  608 mm²           -206 mm² (-25%)
GPU Name             GH100                    AD102-300-A1      -
Process Size         5 nm                     5 nm              -
Transistors          80000 million            76300 million     -3700 million (-5%)
Memory        NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
Bandwidth     3360 GB/s                1018 GB/s         -2342 GB/s (-70%)
Memory Bus    5120 bit                 384 bit           -4736 bit (-92%)
Memory Size   96 GB                    24 GB             -72 GB (-75%)
Memory Type   HBM3                     GDDR6X            -
Render Config                NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
ROPs                         24                       192               168 (700%)
Shading Units / CUDA Cores   16896                    16384             -512 (-3%)
TMUs                         528                      512               -16 (-3%)
Tensor Cores                 528                      512               -16 (-3%)
RT Cores                     -                        128               -
Theoretical Performance     NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
FP16 (half) performance     248.3 TFLOPS             82.58 TFLOPS      -165.72 TFLOPS (-67%)
FP32 (float) performance    62.08 TFLOPS             82.58 TFLOPS      20.5 TFLOPS (33%)
FP64 (double) performance   31040 GFLOPS             1290 GFLOPS       -29750 GFLOPS (-96%)
Pixel Rate                  44.09 GPixel/s           483.8 GPixel/s    439.71 GPixel/s (997%)
Texture Rate                969.9 GTexel/s           1290 GTexel/s     320.1 GTexel/s (33%)
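As a sanity check on the FP32 row, theoretical single-precision throughput in tables like this is conventionally computed as CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A minimal sketch of that arithmetic with the core counts and boost clocks listed above reproduces both figures:

    # Theoretical FP32 throughput = CUDA cores x 2 FLOPs/clock (FMA) x boost clock.
    def fp32_tflops(cuda_cores: int, boost_clock_mhz: float) -> float:
        return cuda_cores * 2 * boost_clock_mhz * 1e6 / 1e12

    print(f"H100 NVL (SXM5): {fp32_tflops(16896, 1837):.2f} TFLOPS")  # ~62.08
    print(f"RTX 4090:        {fp32_tflops(16384, 2520):.2f} TFLOPS")  # ~82.58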
Price          NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090   Difference
Release Date   Mar 21st, 2023           Oct 12th, 2022    -
MSRP           -                        $1,599.00         -
Test bench configuration
NVIDIA H100 NVL (SXM5): BIZON X5000. Software (3D rendering): VRay Benchmark 5, Octane Benchmark 2020.1.5, Redshift Benchmark 3.0.28 Demo, Blender 2.90, Luxmark 3.1.
NVIDIA RTX 4090: BIZON X5500. Software (3D rendering): NVIDIA driver and benchmark versions not listed.
Recommended hardware    NVIDIA H100 NVL (SXM5)   NVIDIA RTX 4090
Best GPU workstations   -                        BIZON X5500