GPU Benchmarks: NVIDIA RTX A6000 vs. NVIDIA A100 40 GB (PCIe) vs. NVIDIA A40 vs. NVIDIA RTX 4090
We benchmark the NVIDIA RTX A6000, NVIDIA A100 40 GB (PCIe), NVIDIA A40, and NVIDIA RTX 4090 and compare AI performance (deep learning training in FP16 and FP32 with PyTorch and TensorFlow), 3D rendering, and Cryo-EM performance in the most popular apps (Octane, V-Ray, Redshift, Blender, Luxmark, Unreal Engine, RELION Cryo-EM).
Our benchmarks will help you decide which GPU (NVIDIA RTX 4090/4080, H100 Hopper, H200, A100, RTX 6000 Ada, A6000, or A5000) is the best for your needs. We provide an in-depth analysis of each graphics card's AI performance so you can make the most informed decision possible, along with deep learning and 3D rendering benchmarks that will help you get the most out of your hardware.
Looking for a GPU workstation or server for AI/ML, design, rendering, simulation, or molecular dynamics?
Explore BIZON AI workstations or GPU servers. Contact us today or explore our customizable AI solutions.
Featured GPU benchmarks:
Deep Learning GPU Benchmarks 2024–2025 [Updated] (as of April 2025)

Training throughput in points (higher is better); "–" = not tested.

| Benchmark | GPUs | RTX A6000 | A100 40 GB (PCIe) | RTX 4090 |
|---|---|---|---|---|
| ResNet-50 (FP16) | 1 | 1424 | 2179 | 1720 |
| ResNet-50 (FP16) | 4 | 5383 | 8561 | 5934 |
| ResNet-50 (FP16) | 8 | 11610 | 16797 | – |
| ResNet-50 (FP32) | 1 | 558 | 1001 | 927 |
| ResNet-50 (FP32) | 4 | 2126 | 3849 | 1715 |
| ResNet-50 (FP32) | 8 | 4494 | 7557 | – |
| ResNet-152 (FP16) | 1 | 577 | 930 | – |
| ResNet-152 (FP16) | 4 | 2181 | 3557 | – |
| ResNet-152 (FP16) | 8 | 4792 | 6809 | – |
| ResNet-152 (FP32) | 1 | 215 | 409 | – |
| ResNet-152 (FP32) | 4 | 819 | 1498 | – |
| ResNet-152 (FP32) | 8 | 1816 | 2851 | – |
| Inception V3 (FP16) | 1 | 859 | 1283 | – |
| Inception V3 (FP16) | 4 | 3247 | 5218 | – |
| Inception V3 (FP16) | 8 | 6111 | 10122 | – |
| Inception V3 (FP32) | 1 | 353 | 658 | – |
| Inception V3 (FP32) | 4 | 1345 | 2568 | – |
| Inception V3 (FP32) | 8 | 3002 | 5058 | – |
| Inception V4 (FP16) | 1 | 397 | 616 | – |
| Inception V4 (FP16) | 4 | 1501 | 2377 | – |
| Inception V4 (FP16) | 8 | 2633 | 4532 | – |
| Inception V4 (FP32) | 1 | 157 | 290 | – |
| Inception V4 (FP32) | 4 | 598 | 1031 | – |
| Inception V4 (FP32) | 8 | 1372 | 1950 | – |
| VGG16 (FP16) | 1 | 508 | 1249 | – |
| VGG16 (FP16) | 4 | 1920 | 4989 | – |
| VGG16 (FP16) | 8 | 6142 | 10733 | – |
| VGG16 (FP32) | 1 | 321 | 529 | – |
| VGG16 (FP32) | 4 | 1223 | 2215 | – |
| VGG16 (FP32) | 8 | 2674 | 4278 | – |
3D, GPU Rendering Benchmarks 2024–2025 [Updated] (as of April 2025)

Single-GPU results; higher is better, except Redshift (render time in minutes, lower is better). "–" = not tested.

| Benchmark | RTX A6000 | A100 40 GB (PCIe) | RTX 4090 |
|---|---|---|---|
| V-Ray (points) | 2599 | 1555 | 5556 |
| Octane (points) | 644 | 498 | 1445 |
| Redshift (minutes) | 3.01 | n/a | 1.16 |
| Blender (score) | 5301.36 | 3788 | 12123.96 |
| Luxmark (points) | 83609 | n/a | 158815 |
| Unreal Engine | – | n/a | – |
RELION Cryo-EM Benchmarks 2024–2025 [Updated] (as of April 2025)

Total run time in minutes (lower is better); "–" = not tested.

| GPUs | RTX A6000 | A100 40 GB (PCIe) |
|---|---|---|
| 1 | 197.6 min | 178.9 min |
| 4 | – | 50.3 min |
Llama 3 70B Inference Benchmark 2024–2025 [Updated] (as of April 2025)

Eval rate in tokens/s (higher is better); "–" = not tested.

| GPUs | RTX A6000 | A100 40 GB (PCIe) | RTX 4090 |
|---|---|---|---|
| 1 | 14.82 | n/a | 9.95 |
| 2 | – | n/a | 19.99 |
Board Design

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Length | 11 in / 267 mm | 11 in / 267 mm | 11 in / 267 mm | 13 in / 336 mm |
| Outputs | 4x DisplayPort | No outputs | 3x DisplayPort | 1x HDMI, 3x DisplayPort |
| Power Connectors | 8-pin EPS | 8-pin EPS | 8-pin EPS | 1x 16-pin |
| Slot Width | Dual-slot | Dual-slot | Dual-slot | Triple-slot |
| TDP | 300 W | 250 W | 300 W | 450 W |

Clock Speeds

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Boost Clock | 1860 MHz | 1410 MHz | 1740 MHz | 2520 MHz |
| Base (GPU) Clock | 1455 MHz | 765 MHz | 1305 MHz | 2235 MHz |
| Memory Clock | 16000 MHz | 2400 MHz | 14500 MHz | 21200 MHz |

Graphics Card

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Bus Interface | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 | PCIe 4.0 x16 |
| Generation | Quadro (Ax000) | Tesla (Axx) | Tesla (Axx) | GeForce 40 |

Graphics Features

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| DirectX | 12 Ultimate (12_2) | – | 12 Ultimate (12_2) | 12 Ultimate (12_2) |
| OpenCL | 3 | 2 | 3 | 3 |
| OpenGL | 4.6 | – | 4.6 | 4.6 |
| Shader Model | 6.6 | – | 6.6 | 6.7 |
| CUDA Compute Capability | 8.6 | 8.0 | 8.6 | 8.9 |

Graphics Processor

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Architecture | Ampere | Ampere | Ampere | Ada Lovelace |
| GPU Name | GA102 | GA100 | GA102 | AD102-300-A1 |
| Die Size | 628 mm² | 826 mm² | 628 mm² | 608 mm² |
| Process Size | 8 nm | 7 nm | 8 nm | 5 nm |
| Transistors | 28,300 million | 54,200 million | 28,300 million | 76,300 million |

Memory

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Bandwidth | 768 GB/s | 1555 GB/s | 695.8 GB/s | 1018 GB/s |
| Memory Bus | 384-bit | 5120-bit | 384-bit | 384-bit |
| Memory Size | 48 GB | 40 GB | 48 GB | 24 GB |
| Memory Type | GDDR6 | HBM2e | GDDR6 | GDDR6X |

Render Config

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| ROPs | 112 | 160 | 112 | 192 |
| RT Cores | 84 | – | 84 | 128 |
| Shading Units / CUDA Cores | 10752 | 6912 | 10752 | 16384 |
| TMUs | 336 | 432 | 336 | 512 |
| Tensor Cores | 336 | 432 | 336 | 512 |

Theoretical Performance

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| FP16 (half) performance | 38.71 TFLOPS | 77.97 TFLOPS | 37.42 TFLOPS | 82.58 TFLOPS |
| FP32 (float) performance | 38.71 TFLOPS | 19.49 TFLOPS | 37.42 TFLOPS | 82.58 TFLOPS |
| FP64 (double) performance | 1210 GFLOPS | 9746 GFLOPS | 584.6 GFLOPS | 1290 GFLOPS |
| Pixel Rate | 201.6 GPixel/s | 225.6 GPixel/s | 194.9 GPixel/s | 483.8 GPixel/s |
| Texture Rate | 604.8 GTexel/s | 609.1 GTexel/s | 584.6 GTexel/s | 1290 GTexel/s |
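These theoretical figures follow from the render config and clock tables: peak throughput is 2 FLOPs per core per clock (one fused multiply-add) times the number of CUDA cores times the boost clock. A quick sketch (the helper function is ours) reproducing the FP32 numbers for the A100 and RTX 4090:

```python
# Peak FP32 throughput = 2 FLOPs (fused multiply-add) x CUDA cores x boost clock.
def peak_fp32_tflops(cuda_cores, boost_clock_mhz):
    return 2 * cuda_cores * boost_clock_mhz * 1e6 / 1e12

print(f"A100 40 GB: {peak_fp32_tflops(6912, 1410):.2f} TFLOPS")   # ~19.49
print(f"RTX 4090:   {peak_fp32_tflops(16384, 2520):.2f} TFLOPS")  # ~82.58
```

Note that the Ampere cards' quoted FP16 rates differ per product line (the A100 lists 4x its FP32 rate, the A6000 1x), so the same formula does not cover every precision uniformly.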
Price

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| MSRP | $4,649.00 | – | – | $1,599.00 |
| Release Date | Oct 5th, 2020 | Jun 22nd, 2020 | Oct 5th, 2020 | Oct 12th, 2022 |
Test Bench Configuration

3D rendering software versions per test system; "n/a" = not listed for this card.

| Spec | RTX A6000 | A100 40 GB (PCIe) | A40 | RTX 4090 |
|---|---|---|---|---|
| Hardware | BIZON X5000 | BIZON X5000 | BIZON X5000 | BIZON X5500 |
| NVIDIA Driver | 461.40 | 461.09 | 461.40 | n/a |
| V-Ray Benchmark | 5 | 5 | 5 | n/a |
| Octane Benchmark | 2020.1.5 | 2020.1.5 | 2020.1.5 | n/a |
| Redshift Benchmark | 3.0.28 Demo | 3.0.28 Demo | 3.0.28 Demo | n/a |
| Blender | 2.90 | 2.90 | 2.90 | n/a |
| Luxmark | 3.1 | 3.1 | 3.1 | n/a |