Nvidia GeForce 20 series

Logo of the Nvidia GeForce 20 series
ASUS GeForce RTX 2070 ROG STRIX Advanced - 8GB GDDR6

The GeForce 20 series is a series of graphics processors from Nvidia and the successor to the GeForce 10 series.

Nvidia unveiled the GeForce 20 series on August 20, 2018, in the run-up to Gamescom. The series, code-named "Turing" (after the British mathematician Alan Turing), is the first to focus on ray tracing. To underline this aspect, Nvidia changed the prefix used in most retail names from GTX to RTX. Another innovation is the use of GDDR6 memory. Sales started on September 20, 2018.

Turing architecture

The Turing architecture is the direct further development of the Volta architecture, which was never used in GeForce-series graphics cards: Volta appeared only in the Titan series and in the professional Quadro and Tesla lines. In GeForce graphics cards, the Turing architecture replaces the Pascal architecture of the GeForce 10 series.

Graphics processors based on the Turing architecture are composed of so-called graphics processing clusters (GPCs), which are sometimes also referred to as raster engines. Each graphics processing cluster contains four or six texture processing clusters (TPCs), some of which are not fully enabled. A texture processing cluster consists of two shader clusters, which Nvidia calls streaming multiprocessors (SMs). These streaming multiprocessors, the functional blocks containing the most important units, have been significantly modified and partially reorganized compared to the Pascal architecture. Each streaming multiprocessor contains:

  • 64 FP32 units for 32-bit floating point numbers and 2 FP64 units for 64-bit floating point numbers
  • 64 INT32 units for 32-bit integers that can work in parallel with the floating point units
  • 4 texture units, each consisting of a texture mapping unit and a texture address unit
  • 16 load/store units
  • 16 special function units
  • 8 tensor units
  • 1 ray tracing unit
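Given the per-SM unit counts above, the totals for a fully enabled chip follow directly from the number of streaming multiprocessors. A minimal sketch (the SM counts come from the data overview in this article):

```python
# Per-SM unit counts of the Turing architecture, as listed above
UNITS_PER_SM = {
    "fp32_alus": 64,
    "int32_alus": 64,
    "texture_units": 4,
    "tensor_cores": 8,
    "rt_cores": 1,
}

def chip_totals(num_sm):
    """Total functional units for a fully enabled chip with num_sm SMs."""
    return {unit: count * num_sm for unit, count in UNITS_PER_SM.items()}

# TU102: 6 GPCs x 6 TPCs x 2 SMs = 72 SMs
tu102 = chip_totals(6 * 6 * 2)
print(tu102["fp32_alus"], tu102["texture_units"],
      tu102["tensor_cores"], tu102["rt_cores"])
# 4608 288 576 72
```

These totals match the TU102 row of the graphics processor table (4608 ALUs, 288 texture units, 576 tensor cores, 72 RT cores).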

In the Pascal architecture, a streaming multiprocessor still consisted of 128 FP32 units, each of which could process either 32-bit floating-point numbers or 32-bit integers. Turing abandons this design: the number of FP32 units per SM was halved to 64, and 64 dedicated INT32 units were added, allowing both types of operation to be carried out in parallel. In addition, the FP32 units can process 16-bit floating-point numbers (half precision) at a 2:1 ratio (Pascal: 1:1).
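The theoretical shader throughput figures in the model table follow from this layout: each FP32 unit performs one fused multiply-add (two operations) per clock, and half precision runs at twice the FP32 rate. A sketch, checked against the RTX 2080 Ti's figures (4352 ALUs, 1545 MHz boost clock):

```python
def shader_tflops(alus, boost_mhz, precision=32):
    """Theoretical shader throughput: one FMA (2 ops) per ALU per clock."""
    ops_per_second = alus * 2 * boost_mhz * 1e6
    if precision == 16:          # half precision runs at a 2:1 ratio on Turing
        ops_per_second *= 2
    return ops_per_second / 1e12

# GeForce RTX 2080 Ti: 4352 ALUs, 1545 MHz boost clock
print(round(shader_tflops(4352, 1545), 2))      # 13.45 (FP32 TFlops)
print(round(shader_tflops(4352, 1545, 16), 2))  # 26.9  (FP16 TFlops)
```

Both values match the 26.90 / 13.45 entries in the desktop model table.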

The tensor cores for AI calculations were adopted from the Volta architecture. According to Nvidia, they are FP16 units for matrix calculations. They can also execute FP32, INT8 and INT4 instructions, but this matters only for professional applications; according to Nvidia, only the FP16 operations of the tensor cores are relevant for 3D applications.
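The tensor throughput figures in the model table are consistent with each Turing tensor core performing 64 FP16 fused multiply-adds (128 operations) per clock, with INT8 and INT4 running at two and four times that rate. A sketch under that assumption, checked against the RTX 2080 Ti row:

```python
def tensor_tops(sm_count, boost_mhz, bits=16):
    """Theoretical tensor throughput in TOPS, assuming 8 tensor cores per SM
    and 64 FMAs (128 ops) per core per clock at FP16; INT8 and INT4 run at
    2x and 4x the FP16 rate."""
    fp16_ops_per_clock = sm_count * 8 * 64 * 2
    scale = {16: 1, 8: 2, 4: 4}[bits]
    return fp16_ops_per_clock * scale * boost_mhz * 1e6 / 1e12

# GeForce RTX 2080 Ti: 68 SMs, 1545 MHz boost clock
print(round(tensor_tops(68, 1545), 1))     # 107.6 (FP16 TOPS)
print(round(tensor_tops(68, 1545, 4), 1))  # 430.3 (INT4 TOPS)
```

This reproduces the 430 / 215 / 108 TOPS column for the RTX 2080 Ti.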

The innovation of the Turing architecture most highlighted by Nvidia's marketing is hardware-based ray tracing support. For this purpose, each streaming multiprocessor contains a ray tracing unit, sometimes also referred to as an RT core. Since Nvidia provides no information on how these units work, it is currently impossible to reconstruct what exactly an RT unit does; Nvidia specifies only the number of RT units per model and a throughput figure in gigarays per second.

Data overview

Graphics processors

| Graphics chip | Production process | Transistors | Die area | ROPs | GPCs | SMs | ALUs | Texture units | Tensor cores | RT cores | L2 cache | NVLink |
|---------------|--------------------|-------------|----------|------|------|-----|------|---------------|--------------|----------|----------|--------|
| TU102 | 12 nm | 18.6 billion | 754 mm² | 96 | 6 | 72 | 4608 | 288 | 576 | 72 | 6 MB | 2 × |
| TU104 | 12 nm | 13.6 billion | 545 mm² | 64 | 6 | 48 | 3072 | 192 | 384 | 48 | 4 MB | 1 × |
| TU106 | 12 nm | 10.6 billion | 445 mm² | 64 | 3 | 36 | 2304 | 144 | 288 | 36 | 4 MB | — |

All three chips support DirectX 12.1, OpenGL 4.6, OpenCL 1.2, CUDA compute capability 7.5 and Vulkan 1.1.78, and use a PCIe 3.0 bus interface; the video processor is not specified (n/a).

Desktop model data

| Model | Official launch | Graphics processor (GPU) | ROPs | SMs | ALUs | Texture units | Tensor cores | RT cores | Base clock | Boost clock | Memory size and type | Memory clock | Memory interface | Bandwidth (GB/s) | ALUs: FP16 / FP32 / FP64 (TFlops) | Tensor: INT4 / INT8 / FP16 (TOPS) | Rays (GR/s) | Pixels (GP/s) | Texels (GT/s) | MGCP |
|-------|-----------------|--------------------------|------|-----|------|---------------|--------------|----------|------------|-------------|----------------------|--------------|------------------|------------------|-----------------------------------|-----------------------------------|-------------|---------------|---------------|------|
| GeForce RTX 2060 | Jan 7, 2019 | TU106-200-A1 | 48 | 30 | 1920 | 120 | 240 | 30 | 1365 MHz | 1680 MHz | 6 GB GDDR6 | 7000 MHz | 192 bit | 336 | 12.90 / 6.45 / 0.202 | 206 / 103 / 52 | ≈5 | 80.6 | 201.6 | 160 W |
| GeForce RTX 2060 Super | Jul 9, 2019 | TU106-410-A1 | 64 | 34 | 2176 | 136 | 272 | 34 | 1470 MHz | 1650 MHz | 8 GB GDDR6 | 7000 MHz | 256 bit | 448 | 14.36 / 7.18 / 0.224 | 230 / 115 / 57 | ≈6 | 105.6 | 224.4 | 175 W |
| GeForce RTX 2070 | Oct 17, 2018 | TU106-400-A1 / TU106-400A-A1 | 64 | 36 | 2304 | 144 | 288 | 36 | 1410 MHz | 1620 MHz | 8 GB GDDR6 | 7000 MHz | 256 bit | 448 | 14.93 / 7.46 / 0.233 | 239 / 119 / 60 | ≈6 | 103.7 | 233.3 | 175 W / 185 W |
| GeForce RTX 2070 Super | Jul 9, 2019 | TU104-410-A1 | 64 | 40 | 2560 | 160 | 320 | 40 | 1605 MHz | 1770 MHz | 8 GB GDDR6 | 7000 MHz | 256 bit | 448 | 18.12 / 9.06 / 0.283 | 290 / 145 / 72 | ≈7 | 113.3 | 283.2 | 215 W |
| GeForce RTX 2080 | Sep 20, 2018 | TU104-400-A1 / TU104-400A-A1 | 64 | 46 | 2944 | 184 | 368 | 46 | 1515 MHz | 1710 MHz | 8 GB GDDR6 | 7000 MHz | 256 bit | 448 | 20.14 / 10.07 / 0.315 | 322 / 161 / 81 | ≈8 | 109.4 | 314.6 | 215 W / 225 W |
| GeForce RTX 2080 Super | Jul 23, 2019 | TU104-450-A1 | 64 | 48 | 3072 | 192 | 384 | 48 | 1650 MHz | 1815 MHz | 8 GB GDDR6 | 7750 MHz | 256 bit | 496 | 22.30 / 11.15 / 0.348 | 357 / 180 / 89 | ≈8 | 116.2 | 348.5 | 250 W |
| GeForce RTX 2080 Ti | Sep 27, 2018 | TU102-300-K1-A1 / TU102-300A-K1-A1 | 88 | 68 | 4352 | 272 | 544 | 68 | 1350 MHz | 1545 MHz | 11 GB GDDR6 | 7000 MHz | 352 bit | 616 | 26.90 / 13.45 / 0.420 | 430 / 215 / 108 | ≈10 | 136.0 | 420.2 | 250 W / 260 W |
| Nvidia Titan RTX | Dec 18, 2018 | TU102-400-A1 | 96 | 72 | 4608 | 288 | 576 | 72 | 1350 MHz | 1770 MHz | 24 GB GDDR6 | 7000 MHz | 384 bit | 672 | 32.63 / 16.31 / 0.510 | 522 / 261 / 131 | ≈11 | 129.6 | 388.8 | 280 W |
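The fill-rate columns can be reproduced from the unit counts and the clock rate: pixel fill rate is ROPs × clock, texel fill rate is texture units × clock. A sketch using the RTX 2080 Ti row:

```python
def fill_rates(rops, texture_units, clock_mhz):
    """Pixel fill rate (GPixel/s) and texel fill rate (GTexel/s)."""
    pixels = rops * clock_mhz / 1000
    texels = texture_units * clock_mhz / 1000
    return pixels, texels

# GeForce RTX 2080 Ti: 88 ROPs, 272 texture units, 1545 MHz boost clock
px, tx = fill_rates(88, 272, 1545)
print(round(px, 1), round(tx, 1))  # 136.0 420.2
```

These values match the 136 GP/s and 420.2 GT/s entries in the table.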

Remarks

  1. The date of the public announcement is given, not the date the models became available.
  2. All performance values given are theoretical maximums at the GPU's boost clock and are not directly comparable with other architectures. In practice, performance is limited by how well the workload can be parallelized and by whichever resource becomes the bottleneck first (shaders, texture units, memory bandwidth, chip temperature, power consumption).
  3. The clock rates listed are the reference values recommended or specified by Nvidia; for the memory clock, the effective clock rate is given. The exact clock rate can deviate by a few megahertz due to different clock generators, and the final choice of clock rates lies with the respective graphics card manufacturer. Graphics card models with different clock rates are therefore entirely possible.
  4. The MGCP value given by Nvidia corresponds to the maximum power consumption under standard conditions defined by Nvidia. It cannot be directly compared with values from other manufacturers. With overclocking and good cooling, it can also be exceeded significantly.
  5. The power consumption stated is for graphics cards with Nvidia's reference design. The values are averages of measurements from various websites using specialized equipment; they fluctuate somewhat due to differences in measurement loads and methods and to unit-to-unit variation.

Web links

Commons: Nvidia GeForce 20 series - collection of images, videos and audio files
