Nvidia GeForce 8 series

EVGA GeForce 8800 GTX

The GeForce 8 series is a series of desktop graphics chips from the company Nvidia and the successor to the GeForce 7 series. With the GeForce 8 series, Nvidia introduced Shader Model 4.0 (SM 4.0) for the first time, with pixel, geometry and vertex shaders as defined by DirectX 10. With around 681 million transistors, the 8800 GTX, the fastest GeForce card at its launch, was the first commercially available graphics chip with over 500 million transistors.

History

NVIDIA G80 GPU
NVIDIA NVIO-1-A3 RAMDAC

Unlike the GeForce 7 series, the GeForce 8 series was not largely derived from its predecessor but was in many respects a completely new design. Nvidia commented on support for a unified shader architecture in a way that led most observers to assume the technology would not be supported: Nvidia's chief architect said in an interview that Nvidia would build a chip with unified shaders "as soon as it makes sense". That this had already been implemented in the GeForce 8 series only became known with the appearance of the first G80 graphics chip.

The first two graphics cards based on the new graphics processors were launched on November 8, 2006. These were the GeForce 8800 GTX with 768 MB of graphics memory and the GeForce 8800 GTS with 640 MB. In terms of performance, the cards set themselves apart surprisingly clearly from the previous series and from the ATI competition. The GeForce 8800 GTS, with a single GPU, delivered roughly the same performance as the dual-GPU GeForce 7950 GX2, until then the fastest card on the market. Another version of the 8800 GTS followed in February 2007 with 320 MB of graphics memory and otherwise identical specifications.

Nvidia GeForce 8600 GT with G84 GPU

On April 17, 2007, Nvidia finally presented the first GeForce 8 cards for the mainstream and low-end segments, in the form of the GeForce 8600 GTS, 8600 GT and 8500 GT. The 8600 series could not meet the sometimes very high expectations, which had been raised primarily by its very successful predecessors, the 6600 and 7600 series, as well as by the high performance of the 8800 series: in the most important parts of the GPU it offers only about a quarter of the resources of the high-end G80 chip, whereas the two predecessors G73 (7600) and NV43 (6600) had each been halved versions of their respective high-end chips.

With the presentation of the GeForce 8800 Ultra on May 2, 2007, Nvidia raised clock rates once again. Compared to the 8800 GTX, this model ships with higher clock rates. Since a new stepping is also used, this is not simply a matter of increasing the clock rate; rather, the processor is generally designed for higher clock rates. In contrast to the previous graphics processors of the GeForce 8 series, the 8800 Ultra was not immediately available when it was introduced; instead a so-called "paper launch" was carried out, which allowed it to be presented even before AMD's competing series. Nvidia announced official retail availability for May 15, although in isolated cases the card was available shortly after the presentation.

One week later, on May 9, 2007, the first mobile graphics solutions based on the GeForce 8, the GeForce 8M series, were presented. Initially, however, only low-end and midrange models appeared, because no specially adapted graphics processors are used any more but "normal" GPUs, and high-end models would consume too much power in a notebook. More powerful mobile graphics solutions followed later.

On June 19, 2007, the GeForce 8400 GS, based on the G86 chip, was positioned against the competition in the form of the Radeon HD 2400 Pro in the entry-level market. This graphics card, too, was presented before the competitor's model was widely available, as had already happened with other graphics chips in the series. Nvidia relied on in-house comparisons to document the product's performance.

Almost a year after the introduction of the G80 and six months after the introduction of the G84 and G86, the refresh generation was initiated with the presentation of the GeForce 8800 GT on October 29, 2007. The graphics card is based on the newly developed G92 graphics chip, now manufactured in 65 nm, which has a similar number of execution units to the G80 but whose structure is more similar to the G84 and G86. As with the G84 and G86, the G92 has as many texture addressing units (TAUs) as texture mapping units (TMUs), while on the G80 the TMUs and TAUs stand in a 2:1 ratio. Like the G84 and G86, the G92 also supports the second generation of PureVideo HD technology. The PCIe 2.0 interface was newly introduced with the G92, but it is backwards compatible with earlier PCIe versions. Since Nvidia ships the GeForce 8800 GT with a very high texel fill rate and arithmetic performance, it is faster than the older GeForce 8800 GTS, provided the smaller memory bandwidth and smaller memory size (compared with the 640 MB version) do not become the limiting factor.

On December 11, 2007, Nvidia also presented a GeForce 8800 GTS with the new G92 graphics processor which, thanks to higher clock rates and more active units, can outperform the 8800 GT and, in some applications at low resolutions, even the 8800 GTX and 8800 Ultra.

In the same month, the G98 core was also brought onto the market without an announcement; it can be found on graphics cards designated "GeForce 8400 GS". Since other graphics cards had already been sold under this name, the models are very difficult to tell apart.

The graphics chips compete with AMD's Radeon HD 2000 series and Radeon HD 3000 series, the latter of which in turn supports DirectX 10.1.

Technology

Unified shaders

The GeForce 8 series is the first to use unified shaders. As a result of this architectural change, quads in the traditional sense can no longer really be distinguished, since there are no longer any fixed rendering pipelines. Grouped units still exist, however, namely the shader clusters and the ROP partitions (Raster Operation Processor partitions), which allow the GeForce 8 series to scale well. The tasks of the pixel pipelines and of the vertex and pixel shaders of the old rendering pipelines are now handled by so-called stream processors (SPs); 16 stream processors are grouped into one shader cluster.

A stream processor can execute at most one addition (ADD) and two multiplications (MUL) per clock cycle. With driver versions older than ForceWare 158.19, the second MUL operation was not used for graphics shader calculations at all. With newer drivers the second MUL can also be used for shader calculations to some extent, but special functions must also be executed via this MUL unit. Unlike many GPGPU workloads, graphics shaders therefore never reach the theoretical maximum computing power of the stream processors; in practice the achieved rate is up to a third below this maximum.
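For illustration, the theoretical peak figure quoted in the performance table below follows directly from the number of stream processors, the shader clock and these three floating-point operations (one ADD plus two MUL) per clock; a minimal sketch in Python, using figures from the tables further down:

```python
def peak_gflops(stream_processors: int, shader_clock_mhz: float, flops_per_clock: int = 3) -> float:
    """Theoretical shader throughput in GFLOPS: SPs x shader clock x FLOPs per clock."""
    return stream_processors * shader_clock_mhz * flops_per_clock / 1000.0

# GeForce 8800 GTX: 8 shader clusters x 16 SPs = 128 SPs at 1350 MHz
print(peak_gflops(128, 1350))     # 518.4 GFLOPS with ADD + both MULs
print(peak_gflops(128, 1350, 2))  # 345.6 GFLOPS if only ADD + one MUL can be used
```

The gap between the two results corresponds to the "up to a third" mentioned above.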

Power consumption

The GeForce 8800 GTX was the first PCI Express graphics card whose graphics processor required two 6-pin power connectors. At its launch it set the negative record for the highest power consumption and held it until the launch of the GeForce 8800 Ultra. The GeForce 8800 Ultra in turn held this negative record only for a short time, until the ATI Radeon HD 2900 XT was introduced. When the GeForce 8800 GTX was presented, it was widely criticized for its high power consumption, but also praised for its high graphics performance and its relatively quiet cooling solution, which, however, occupies two slots. It was also criticized that the G80 has no power-saving functions, so that the clock frequency is not even lowered in 2D mode, as had been usual with its predecessors.

Because of the high power consumption, Nvidia recommended high-performance power supplies for the GeForce 8800 series. For the GeForce 8800 GTX and GeForce 8800 Ultra, for example, a power supply unit with at least 30 amperes on the 12-volt rail was recommended. For the GeForce 8800 GTS, Nvidia recommended a power supply with 26 amperes at 12 volts and a PCIe power connector.
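For orientation, these current ratings translate into the following reserves on the 12-volt rail (a simple worked calculation, not an additional Nvidia specification):

$$P = U \cdot I: \qquad 12\,\mathrm{V} \times 30\,\mathrm{A} = 360\,\mathrm{W}, \qquad 12\,\mathrm{V} \times 26\,\mathrm{A} = 312\,\mathrm{W}$$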

Memory management error

The graphics cards and mobile graphics chips of the GeForce 8 series were delivered with drivers that contained an error in memory management. This bug, referred to by Nvidia as the VRAM bug, causes more and more data to be loaded into the graphics memory until it eventually overflows. Textures and similar data then have to be swapped out, which greatly reduces performance. Normally, data that is no longer required would be removed from the graphics memory; on the GeForce 8 series this does not happen.
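Conceptually, the behaviour resembles a resource cache whose eviction step is missing. The following sketch is purely illustrative and does not reflect Nvidia's actual driver code; names and sizes are invented for the example:

```python
class VramCache:
    """Toy model of texture residency in VRAM; purely illustrative."""

    def __init__(self, capacity_mb: int, evict_unused: bool):
        self.capacity_mb = capacity_mb
        self.evict_unused = evict_unused
        self.resident = {}  # texture name -> size in MB (insertion order = age)

    def used_mb(self) -> int:
        return sum(self.resident.values())

    def load(self, name: str, size_mb: int) -> None:
        # A well-behaved driver frees textures that are no longer required
        # before the memory overflows; with the VRAM bug this step is missing.
        while (self.evict_unused and self.resident
               and self.used_mb() + size_mb > self.capacity_mb):
            self.resident.pop(next(iter(self.resident)))  # drop the oldest entry
        self.resident[name] = size_mb

# A 320 MB card loading 20 textures of 32 MB each
buggy, correct = VramCache(320, evict_unused=False), VramCache(320, evict_unused=True)
for i in range(20):
    buggy.load(f"tex{i}", 32)
    correct.load(f"tex{i}", 32)
print(buggy.used_mb(), correct.used_mb())  # 640 vs. 320: without eviction the memory overflows
```

Once the resident data exceeds the physical VRAM, the surplus has to be swapped out to system memory, which is the performance drop described above.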

Due to the large memory sizes of the first graphics cards released (640 MB for the GeForce 8800 GTS and 768 MB for the GeForce 8800 GTX), this error went unnoticed in the first few months. Discrepancies only arose with the release of the GeForce 8800 GTS with 320 MB, which is also the graphics card most affected by the error. In various tests this card lagged behind older Nvidia graphics cards and AMD models with slower graphics processors and only 256 MB of memory, although it was technologically superior. Further tests then clearly established that performance depends strongly on memory usage. Even greater differences were found with the 256 MB models of the GeForce 8600 GT and GTS, and theoretical tests showed that every graphics card in the GeForce 8 series is affected by the error.

Nvidia responded and promised a driver that would solve the problem. However, this was delayed on the grounds that it was a "problem of a complex nature". At the end of August 2007, Nvidia released the beta driver ForceWare 163.44 without further comment, and many Internet magazines assumed that it solved the VRAM problem. This was mostly based on their own tests, in which the GeForce 8800 GTS with 320 MB was significantly faster in the games tested, while the GeForce 8800 GTS with 640 MB showed no differences. However, as Nvidia stated when asked, the bug was not fixed in this driver; the performance gains stem from changes that merely mitigated the VRAM bug. The memory compression was revised so that less performance is lost when the memory overflows, and textures are managed differently in Stalker, which explains the gains there. According to Nvidia's Technical Marketing Analyst James Wang, the next official driver was to fix the memory management error. It remains unclear whether this actually happened.

The G92 graphics processor, used on the GeForce 8800 GT and on the GeForce 8800 GTS presented in December 2007, is a further development of the G8x chip series. Since it incorporates various changes compared to its predecessors, it is possible that the VRAM bug has also been fixed, but evidence of this is still pending.

Further functions

All graphics cards of the GeForce 8 series support CUDA technology. The series also supports a video technology called PureVideo HD, which can completely or partially relieve the main processor of video decoding. Graphics processors without an integrated video processor support only the first version; all others support at least the second.

Graphics processors

Graphics chip | Production process | Transistors | Die area | ROP partitions | ROPs | Shader clusters | Stream processors | TAUs | TMUs | L2 cache | DirectX | OpenGL | OpenCL | Video processor | Bus interface
G80 | 90 nm | 681 million | 484 mm² | 6 | 24 | 8 | 128 | 32 | 64 | n/a | 10.0 | 3.3 | 1.1 | VP1 | PCIe
G84 | 80 nm | 289 million | 169 mm² | 2 | 8 | 2 | 32 | 16 | 16 | n/a | 10.0 | 3.3 | 1.1 | VP2 | PCIe
G86 | 80 nm | 210 million | 115 mm² | 2 | 8 | 1 | 16 | 8 | 8 | n/a | 10.0 | 3.3 | 1.1 | VP2 | PCIe
G92 | 65 nm | 754 million | 324 mm² | 4 | 16 | 8 | 128 | 64 | 64 | n/a | 10.0 | 3.3 | 1.1 | VP2 | PCIe 2.0
G98 | 65 nm | 210 million | 86 mm² | 1 | 4 | 1 | 8 | 8 | 8 | n/a | 10.0 | 3.3 | 1.1 | VP3 | PCI, PCIe 2.0
GT218 | 40 nm | 260 million | 57 mm² | 1 | 4 | 1 | 16 | 8 | 8 | n/a | 10.1 | 3.3 | 1.1 | VP4 | PCI, PCIe 2.0

Naming

The GeForce 8 series uses the same naming scheme as the GeForce 6 series. All graphics chips are identified by a four-digit number that generally begins with an "8" (for GeForce 8). The second digit divides the family into different market segments, while the third and fourth digits serve for further differentiation. In addition to the discrete graphics chips, Nvidia also integrates graphics cores based on the GeForce 8 series into the chipsets of the nForce 700 series, which are marketed under the names GeForce 8200 and GeForce 8300.

NVIDIA GeForce 8800 Ultra
Division
Letter abbreviations
  • no suffix - IGP variant or budget version
  • GS - lower-performance budget version
  • GT - "normal" version (price-performance oriented)
  • GTS - in the mainstream segment, together with the GT, the most powerful version
  • GTX - powerful model (only in the high-end segment)
  • Ultra - most powerful model (only in the high-end segment)

This scheme can only be applied to a limited extent, however, since Nvidia markets several different graphics cards under the names "GeForce 8400 GS" and "GeForce 8800 GTS". The name of the 8800 GT was chosen in relation to the newer GeForce 8800 GTS, not to the older G80-based 8800 GTS variants. Because of their smaller memory interface, these new 8800 GTS graphics cards are distinguished from the previous cards by the memory size stated in the name.
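As an illustration of the scheme, a model designation can be decomposed as follows; the helper is hypothetical and only mirrors the convention described above, it knows nothing about the actual product line-up:

```python
def parse_geforce8_name(name: str) -> dict:
    """Split a GeForce 8 model name, e.g. 'GeForce 8600 GTS', into its parts."""
    parts = name.split()
    number, suffix = parts[1], " ".join(parts[2:]) or "no suffix"
    return {
        "series": number[0],    # '8' -> GeForce 8
        "segment": number[1],   # second digit: market segment
        "variant": number[2:],  # third and fourth digit: further differentiation
        "suffix": suffix,       # GS, GT, GTS, GTX, Ultra or none
    }

print(parse_geforce8_name("GeForce 8600 GTS"))
# {'series': '8', 'segment': '6', 'variant': '00', 'suffix': 'GTS'}
```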

Model data

Model | Official launch | GPU type | ROPs | Shader clusters | Stream processors | TMUs | TAUs | Chip clock (MHz) | Shader clock (MHz) | Memory size (MB) | Memory clock (MHz) | Memory type | Memory interface
GeForce 8100 | May 2008 | MCP78 | 4 | 1 | 8 | 8 | 8 | 500 | 1200 | shared memory | IGP | DDR2 | IGP
GeForce 8200 | May 2008 | MCP78 | 4 | 1 | 8 | 8 | 8 | 500 | 1200 | shared memory | IGP | DDR2 | IGP
GeForce 8300 | May 2008 | MCP78 | 4 | 1 | 8 | 8 | 8 | 500 | 1500 | shared memory | IGP | DDR2 | IGP
GeForce 8300 GS | Apr 17, 2007 | G86 | 4 | 1 | 8 | 8 | 8 | 459 | 918 | 128 | 400 | DDR2 | 64 bit
GeForce 8400 GS | Apr 17, 2007 | G86 | 4 | 2 | 16 | 8 | 8 | 459 | 918 | 256 | 400 | DDR2 | 64 bit
GeForce 8400 GS Rev. 2 | Dec 4, 2007 | G98 | 4 | 1 | 8 | 8 | 8 | 520 | 1230 | 512 | 400 | DDR2 | 64 bit
GeForce 8400 GS Rev. 3 | Jul 12, 2010 | GT218 | 4 | 2 | 16 | 8 | 8 | 520 | 1230 | 512 | 400 | DDR2 | 64 bit
GeForce 8400 | Dec 4, 2007 | G98 | 4 | 1 | 8 | 8 | 8 | 540 | 1300 | 256 | 500 | DDR2 | 64 bit
GeForce 8500 GT | Apr 17, 2007 | G86 | 4 | 2 | 16 | 8 | 8 | 459 | 918 | 256 / 512 / 1024 | 400 | GDDR3 | 128 bit
GeForce 8600 GS | Apr 17, 2007 | G84 | 8 | 1 | 16 | 8 | 8 | 540 | 1190 | 512 | 400 | DDR2 | 128 bit
GeForce 8600 GT | Apr 17, 2007 | G84 | 8 | 2 | 32 | 16 | 16 | 540 | 1190 | 256 / 512 / 1024 | 700 | GDDR3 | 128 bit
GeForce 8600 GTS | Apr 17, 2007 | G84 | 8 | 2 | 32 | 16 | 16 | 675 | 1450 | 256 / 512 | 1000 | GDDR3 | 128 bit
GeForce 8800 GS | Jan 31, 2008 | G92 | 12 | 6 | 96 | 48 | 48 | 550 | 1375 | 384 / 768 | 800 | GDDR3 | 192 bit
GeForce 8800 GT | Oct 29, 2007 | G92 | 16 | 7 | 112 | 56 | 56 | 600 | 1500 | 512 | 900 | GDDR3 | 256 bit
GeForce 8800 GTS | Feb 12, 2007 | G80 | 20 | 6 | 96 | 48 | 24 | 513 | 1188 | 320 | 792 | GDDR3 | 320 bit
GeForce 8800 GTS | Nov 8, 2006 | G80 | 20 | 6 | 96 | 48 | 24 | 513 | 1188 | 640 | 792 | GDDR3 | 320 bit
GeForce 8800 GTS (112 SPs) | Nov 19, 2007 | G80 | 20 | 7 | 112 | 56 | 28 | 513 | 1188 | 640 | 792 | GDDR3 | 320 bit
GeForce 8800 GTS 512 | Dec 11, 2007 | G92 | 16 | 8 | 128 | 64 | 64 | 650 | 1625 | 512 | 970 | GDDR3 | 256 bit
GeForce 8800 GTX | Nov 8, 2006 | G80 | 24 | 8 | 128 | 64 | 32 | 575 | 1350 | 768 | 900 | GDDR3 | 384 bit
GeForce 8800 Ultra | May 2, 2007 | G80 | 24 | 8 | 128 | 64 | 32 | 612 | 1500 | 768 | 1080 | GDDR3 | 384 bit
Notes
  • The specified clock rates are those recommended or specified by Nvidia. However, the final specification of the clock rates is in the hands of the respective graphics card manufacturer. It is therefore entirely possible that there are or will be graphics card models that have different clock rates.
  • The date indicated is the date of the public presentation, not the date of availability of the models.
  • The memory clock frequency is often also quoted at twice the value given here; the reason for this is the double data rate (DDR).

Performance data

The respective models yield the following theoretical performance figures (their derivation is sketched after the notes below):

Model | Computing power of all stream processors (GFLOPS) | Pixel fill rate (GPixel/s) | Texel fill rate (GTexel/s) | Data transfer rate to graphics memory (GB/s)
GeForce 8100 | 28.8 | 2.0 | 4.0 | -
GeForce 8200 | 28.8 | 2.0 | 4.0 | -
GeForce 8300 | 28.8 | 2.0 | 4.0 | -
GeForce 8300 GS (G86) | 22.0 | 1.8 | 3.6 | 6.4
GeForce 8400 GS (G86) | 44.1 | 1.8 | 3.7 | 6.4
GeForce 8400 GS (G98) | 29.5 | 2.3 | 4.5 | 6.4
GeForce 8400 GS (GT218) | 59.0 | 2.1 | 4.2 | 6.4
GeForce 8400 | 31.2 | 2.2 | 4.3 | 8.0
GeForce 8500 GT | 44.1 | 1.8 | 3.7 | 12.8
GeForce 8600 GS | 57.1 | 4.3 | 4.3 | 12.8
GeForce 8600 GT | 114.2 | 4.3 | 8.6 | 22.4
GeForce 8600 GTS | 139.2 | 5.4 | 10.8 | 32.0
GeForce 8800 GS | 396.0 | 6.6 | 26.4 | 38.4
GeForce 8800 GT | 504.0 | 9.6 | 33.6 | 57.6
GeForce 8800 GTS | 342.1 | 10.3 | 24.6 | 63.4
GeForce 8800 GTS (112 SPs) | 399.2 | 10.3 | 28.7 | 63.4
GeForce 8800 GTS 512 | 624.0 | 10.4 | 41.6 | 62.1
GeForce 8800 GTX | 518.4 | 13.8 | 36.8 | 86.4
GeForce 8800 Ultra | 576.0 | 14.7 | 39.2 | 103.7
Notes
  • The performance values specified for the computing power of the stream processors, the pixel fill rate, the texel fill rate and the memory bandwidth are theoretical maximum values. The overall performance of a graphics card also depends on how well the available resources can actually be used; further factors not listed here likewise affect performance.
  • The computing power specified for the stream processors assumes the use of both MUL operations, which is not achieved in graphics shader calculations, since further calculations have to be carried out via the second MUL unit. For such calculations the effective computing power is therefore lower; details can be found in the Unified shaders section.
  • The computing power of the stream processors is not directly comparable with that of the ATI Radeon HD 2000 and 3000 series, which are based on a different architecture that scales differently.
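The theoretical figures above can be reproduced from the unit counts and clock rates in the model table; the following sketch shows the underlying calculations, using the GeForce 8800 GTX as an example (the GFLOPS formula is given in the Unified shaders section):

```python
def pixel_fill_rate(rops: int, chip_clock_mhz: float) -> float:
    """Pixel fill rate in GPixel/s."""
    return rops * chip_clock_mhz / 1000.0

def texel_fill_rate(tmus: int, chip_clock_mhz: float) -> float:
    """Texel fill rate in GTexel/s."""
    return tmus * chip_clock_mhz / 1000.0

def memory_bandwidth(mem_clock_mhz: float, bus_width_bits: int) -> float:
    """Bandwidth in GB/s; the factor 2 accounts for the double data rate (DDR)."""
    return mem_clock_mhz * 2 * bus_width_bits / 8 / 1000.0

# GeForce 8800 GTX: 24 ROPs, 64 TMUs, 575 MHz chip clock, 900 MHz GDDR3 on a 384-bit bus
print(pixel_fill_rate(24, 575))    # 13.8 GPixel/s
print(texel_fill_rate(64, 575))    # 36.8 GTexel/s
print(memory_bandwidth(900, 384))  # 86.4 GB/s
```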

Power consumption data

The measured values listed in the table refer to the power consumption of the graphics card alone, for cards that correspond to the Nvidia reference design. A special measuring setup is required to obtain these values; depending on the measurement technology used and the given measurement conditions, including the program used to generate the 3D load, the values can vary between different cards.

Model | GPU type | MGCP (watts) | Measured idle (watts) | Measured 3D load (watts) | Additional power connector
GeForce 8100 | MCP78 | n/a | n/a | n/a | none
GeForce 8200 | MCP78 | n/a | n/a | n/a | none
GeForce 8300 | MCP78 | n/a | n/a | n/a | none
GeForce 8300 GS | G86 | 40 | n/a | n/a | none
GeForce 8400 GS | G86 | 38 | n/a | n/a | none
GeForce 8400 GS Rev. 2 | G98 | 25 | n/a | n/a | none
GeForce 8400 GS Rev. 3 | GT218 | 25 | n/a | n/a | none
GeForce 8400 | G98 | 25 | n/a | n/a | none
GeForce 8500 GT | G86 | 30 | 21 | 36 | none
GeForce 8600 GS | G84 | 47 | n/a | n/a | none
GeForce 8600 GT | G84 | 47 | 25 | 56 | none
GeForce 8600 GTS | G84 | 60 | 29 | 68 | 1 × 6-pin
GeForce 8800 GS | G92 | 105 | n/a | n/a | 1 × 6-pin
GeForce 8800 GT | G92 | 125 | 51 | 135 | 1 × 6-pin
GeForce 8800 GTS (320 MB) | G80 | 146 | 63 | 136 | 1 × 6-pin
GeForce 8800 GTS | G80 | 146 | 74 | 152 | 1 × 6-pin
GeForce 8800 GTS (112 SPs) | G80 | n/a | n/a | n/a | 1 × 6-pin
GeForce 8800 GTS 512 | G92 | 143 | 62 | 165 | 1 × 6-pin
GeForce 8800 GTX | G80 | 155 | 82 | 192 | 2 × 6-pin
GeForce 8800 Ultra | G80 | 175 | 90 | 209 | 2 × 6-pin

Much more common than measuring the consumption of the graphics card alone is determining the power consumption of an entire system. For this purpose, a reference system is assembled into which the various graphics cards are installed; the measurement then takes place directly at the wall socket with an energy cost meter or a comparable device. The meaningfulness of such values is limited, however: it is not clear how much of the consumption comes from the graphics card and how much is attributable to the rest of the PC system. With this measurement method, the difference in consumption between idle and 3D load does not depend only on the program used to generate the load; the utilization and efficiency of the rest of the system, including the power supply unit, mainboard and processor, also influence the measured difference. Since the tested systems usually differ from one's own PC, the values given cannot simply be transferred to one's own system. Only measurement data from otherwise identical systems are (to a limited extent) suitable for direct comparison. Because of this dependency, total-system measurements are not listed in the table here. However, since they can give a better picture of the practical power consumption of a specific system with a specific graphics card, websites that have made such measurements are listed under the web links.
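As a rough worked example of why wall-socket readings overstate the actual draw (the 80 % efficiency used here is an assumed value, not a measurement): a system drawing 300 W at the socket through a power supply with 80 % efficiency delivers only about

$$P_{\mathrm{DC}} = P_{\mathrm{socket}} \cdot \eta_{\mathrm{PSU}} = 300\,\mathrm{W} \times 0.80 = 240\,\mathrm{W}$$

to the components, and the idle-to-load difference measured at the socket likewise includes the power supply's conversion losses.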

Web links

Commons: Nvidia GeForce 8 series - collection of images, videos and audio files

Measurement of the power consumption of an entire system

References

  1. ^ Nvidia Chief Architect: Unified Pixel and Vertex Pipelines - The Way to Go. X-bit labs, July 11, 2005, archived from the original on February 20, 2009; accessed on February 1, 2010 (English).
  2. Test: nVidia GeForce 8800 GTS (SLI). ComputerBase, December 22, 2006, accessed February 1, 2010 .
  3. ForceWare 163.44 fixes VRAM bug on G8x (update). ComputerBase, August 28, 2007, accessed February 1, 2010 .
  4. http://www.nvidia.de/object/win7-winvista-32bit-257.21-whql-driver-de.html OpenCL 1.0 from Tesla chip G80 with WHQL 257.21
  5. http://www.gpu-tech.org/content.php/162-Nvidia-supports-OpenCL-1.1-with-GeForce-280.19-Beta-performance-suffers OpenCL 1.1 from Tesla chip G80 with Beta 280.19
  6. GeForce 8100 mGPU. Nvidia Corporation, accessed December 4, 2011 .
  7. GeForce 8200 mGPU. Nvidia Corporation, accessed December 4, 2011 .
  8. GeForce 8300 mGPU. Nvidia Corporation, accessed December 4, 2011 .
  9. Model is only intended for the OEM market and is not officially listed by Nvidia.
  10. GeForce 8400 GS. TechPowerUp, accessed May 26, 2015 .
  11. GeForce 8400 GS Rev. 2. TechPowerUp, accessed on May 26, 2015 (English).
  12. GeForce 8400 GS Rev. 3. TechPowerUp, accessed on May 26, 2015 (English).
  13. GeForce 8400. TechPowerUp, accessed on May 26, 2015 (English).
  14. GeForce 8500 GT. TechPowerUp, accessed May 26, 2015 .
  15. GeForce 8600 GS. TechPowerUp, accessed May 26, 2015 .
  16. GeForce 8600 GT. TechPowerUp, accessed May 26, 2015 .
  17. GeForce 8600 GTS. TechPowerUp, accessed May 26, 2015 .
  18. GeForce 8600 GS. TechPowerUp, accessed May 26, 2015 .
  19. GeForce 8600 GT. TechPowerUp, accessed May 26, 2015 .
  20. GeForce 8800. Nvidia Corporation, accessed December 4, 2011.
  21. The MGCP value specified by Nvidia does not necessarily correspond to the maximum power consumption. This value is also not necessarily comparable with the TDP value of the competitor AMD.
  22. The value given under 3D load corresponds to the typical game usage of the card. However, this is different depending on the 3D application. As a rule, a modern 3D application is used to determine the value, which, however, limits the comparability over longer periods of time.
  23. Power consumption graphics cards and electricity costs. Tom's Hardware, December 16, 2008, accessed January 23, 2010.