Nvidia GeForce 900 series
The GeForce 900 series is a series of graphics chips from Nvidia and the successor to the GeForce 700 series. The series was originally to be released under the name "GeForce 800", but since that designation had already been used for the annual refresh of Nvidia's mobile GeForce series, Nvidia skipped it for the desktop line in order to bring the numbering of the two series back into alignment.
According to Nvidia, all graphics cards of the GeForce 900 series are the first to support DirectX 12 at feature level 12.1 under Microsoft's Windows 10 operating system.
Description
GeForce GTX 980 Ti and Titan X
The GM200 graphics processor serves as the high-end chip of the GeForce 900 series and in this role replaced the GK110 GPU of the GeForce 700 series. The GM200 packs 8 billion transistors onto a die area of 601 mm², making it the largest and most complex graphics processor on the market at the time. Technically, the GM200, with its 96 raster operation units, 3072 shader units and 192 texture units, is a 50% enlarged variant of the GM204. It also differs significantly from its predecessors: the GF100, GF110 and GK110 GPUs still offered advanced double-precision (FP64) capabilities and were therefore also used in the professional Quadro and Tesla series. The GK110, for example, carried 64 separate FP64 ALUs in each SMX block, which yielded a DP rate of 1/3. Since these separate ALUs are missing on the GM200 (they were probably dropped for reasons of die space, as producing graphics processors larger than 600 mm² is hardly feasible for technical and economic reasons), it achieves a DP rate of only 1/32. Because double-precision operations are not required for 3D applications, this played no role in the gaming sector, but it made the GM200 unsuitable for the Tesla professional series. Nvidia therefore departed from its previous strategy of developing one high-end/enthusiast chip for all three series and did not use the GM200 for the Tesla line; instead, an improved version of the GK110, the GK210 graphics processor, was designed for the Tesla accelerators. A Quadro M6000 based on the GM200 was released on March 19, 2015; it uses the full configuration of the chip but runs at slightly lower clock rates than its Titan counterpart.
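The difference between the two DP rates mentioned above follows directly from the per-cluster ALU counts. A minimal sketch of the arithmetic; the GK110 figures (192 FP32 plus 64 separate FP64 ALUs per SMX) follow from the text, while the 4 FP64 ALUs per Maxwell SMM are an assumption consistent with the quoted 1/32 rate, since the text only gives the ratio:

```python
# Sketch: FP64:FP32 ("DP") throughput ratio from per-cluster ALU counts.
from fractions import Fraction

def dp_rate(fp32_alus_per_sm: int, fp64_alus_per_sm: int) -> Fraction:
    """Ratio of double- to single-precision throughput per clock."""
    return Fraction(fp64_alus_per_sm, fp32_alus_per_sm)

print(dp_rate(192, 64))   # GK110 (Kepler): 1/3
print(dp_rate(128, 4))    # GM200 (Maxwell, assumed 4 FP64 ALUs/SMM): 1/32
```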
In terms of double-precision performance, the Maxwell-based Quadro represented a step backwards compared to the Kepler-based Quadro K6000. This gap was only closed at GTC 2018 with the Quadro GV100, which is designed specifically for HPC workloads in workstations and, like the Tesla V100 accelerator and the Titan V, is based on the GV100 Volta chip.
On March 17, 2015, Nvidia introduced the GeForce GTX Titan X, the first graphics card to use the GM200 GPU. The Titan X uses the GM200-400-A1 in its full configuration and is equipped with 12 GB of video memory. While earlier models of the Titan series had distinguishing features compared to the remaining GeForce models (e.g. increased DP rates), this was not possible with the Titan X due to the structure of the GM200 GPU. Instead, Nvidia advertised the Titan X as the "first 4K graphics card". This designation was highly misleading, since earlier graphics cards could also drive 4K resolutions; the actual advantage of the Titan X was that, for the first time, it had sufficient computing power to achieve relatively high frame rates at such resolutions. At 4K it was around 40% to 45% faster than the GeForce GTX 980 and thus established itself as the fastest single-chip card on the market, although the lead shrank again at lower resolutions. Despite the high list price of US$999, the ban on board partners' own designs (the same cooler was used as on the previous Titan models, only darker in color) and the lack of extra features, the Titan X was rated relatively positively in the trade press thanks to its high performance and the advantages of the Maxwell architecture (DirectX 12 support, energy efficiency, etc.). The Titan X sold quite successfully and even exceeded the sales figures of the first Titan model two years earlier, although its street price was around 25% higher due to the weak euro.
The second model based on the GM200 GPU followed on June 1, 2015 with the GeForce GTX 980 Ti. Its GM200-310-A1, however, is not fully enabled: two of the 24 shader clusters are deactivated. While the clock rates of the GeForce GTX Titan X were adopted unchanged, the video memory was reduced from 12 to 6 GB. On this basis, the GeForce GTX 980 Ti delivered performance only about 5% below that of the Titan X: in practice the halved 6 GB video memory proved insignificant (a limitation could only be detected in large SLI configurations), and the two deactivated shader clusters were partly compensated by higher real boost clock rates. In view of the only marginally lower performance at a street price €300 to €400 below that of the Titan X, the GeForce GTX 980 Ti was rated positively in the trade press. This was also due to the fact that AMD's competing series, the Radeon R300, was still not on the market at the time, and that Nvidia once again allowed board partners to create their own designs.
GeForce GTX 980 and 970
The GM204 graphics processor was the first GPU of the GeForce 900 series and uses the second-generation Maxwell architecture. As with the first Kepler generation, the GeForce 600 series, Nvidia brought the performance chip (GM204) to market before the high-end chip (GM200). After Nvidia, like AMD, passed on TSMC's 20 nm process, the GM204 continued to be produced at 28 nm, contrary to original plans. It has 5.2 billion transistors on a die area of 398 mm². The basic structure is identical to that of the GM107 GPU of the first Maxwell generation: the shader clusters (SMM) still contain 128 shader and 8 texture units each, but the level 1 cache was increased from 64 KB to 96 KB and the texture cache from 24 KB to 48 KB per cluster. The GM204 consists of a total of 16 shader clusters, four of which hang off each raster engine, giving the GM204 2048 stream processors, 128 texture units, 64 ROPs and a 2 MB level 2 cache. New is the first-time support for HDMI 2.0 output. To compensate for the narrow 256-bit memory interface compared to other GPUs of this class, Nvidia introduced "Third Generation Delta Color Compression", a bandwidth-saving feature intended to reduce memory traffic by around 25%. Together with the GM204, Nvidia introduced numerous other features, such as a new anti-aliasing mode and downsampling, although these were not limited to the GM204 but were also made available to older cards via drivers.
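The unit totals quoted above follow directly from the cluster layout; a short sketch of the arithmetic:

```python
# Sketch: deriving the GM204 totals from its cluster layout.
# Each second-generation Maxwell shader cluster (SMM) carries
# 128 shader (ALU) and 8 texture units; the GM204 has 16 SMMs,
# four of them per raster engine.
SMM_COUNT = 16
ALUS_PER_SMM = 128
TMUS_PER_SMM = 8

alus = SMM_COUNT * ALUS_PER_SMM    # 2048 stream processors
tmus = SMM_COUNT * TMUS_PER_SMM    # 128 texture units
raster_engines = SMM_COUNT // 4    # 4 raster engines

print(alus, tmus, raster_engines)  # → 2048 128 4
```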
The GeForce GTX 980 was presented by Nvidia on September 19, 2014, using the GM204-400-A1 in full configuration. Compared to its predecessor, the GeForce GTX 780, the card delivers 24% to 30% higher performance; compared to the GeForce GTX 780 Ti, 6% to 9%; and compared to the AMD Radeon R9 290X, 21% to 24%. The GeForce GTX 980 thus established itself as the fastest single-GPU card on the market. Accordingly, the card was rated positively in the trade press despite the relatively high list price of US$549, among other things because the GeForce GTX 980 had clear advantages in energy consumption: compared to the slower Radeon R9 290X it consumes 80 to 120 watts less under load, and around 90 watts less than the GTX 780 Ti. The low energy consumption was rated all the more positively because such improvements are normally achieved through a new manufacturing process, which this time was not available (all cards are manufactured in TSMC's 28 nm process).
Like the GeForce GTX 980, the GeForce GTX 970 was presented on September 19, 2014; it uses the GM204-200-A1. Here, three of the 16 shader clusters of the GM204 GPU are deactivated, leaving 1664 shader and 104 texture units active; the number of raster operation units is also reduced from 64 to 56 and the level 2 cache from 2048 kB to 1792 kB (the last two points were not yet known at launch). The card thus achieves around 30% higher performance than the GeForce GTX 770. Compared to the AMD Radeon R9 290, which cost about the same at launch, it delivered around 12% higher performance, while it beat the significantly more expensive Radeon R9 290X by 3% to 11%. Since the card, like the GeForce GTX 980, had massive advantages in energy consumption and surpassed the competing models in performance and price-performance ratio, the GeForce GTX 970 was (initially) rated accordingly by the trade press. The reduced list price of US$329 also contributed to the positive reception; the GeForce GTX 770 had launched at a list price of US$399. In January 2015 it became known that the GeForce GTX 970 can use only 3.5 GB of its 4 GB video memory at full speed. Since Nvidia had not communicated this at launch, it led to severe criticism and a class-action lawsuit in the USA (see the controversy over incorrect memory specifications for the GeForce GTX 970). Nevertheless, the GeForce GTX 970 was extremely successful on the market; in the summer of 2015, for example, it was the most commonly used discrete graphics card on the game distribution platform Steam.
GeForce GTX 960 and 950
The GM206 graphics processor is the performance model of the second Maxwell generation. It has 2 raster engines, 1024 stream processors, 64 texture and 32 raster operation units with a 128-bit memory interface. In practical terms, the GM206 is a halved GM204, requiring 2.94 billion transistors on a die area of 227 mm². The specifications came as something of a surprise, as a higher number of shader units and a 192-bit memory interface had been expected beforehand. Nvidia, however, contradicted the corresponding speculation and stated that the GM206 on the GeForce GTX 960 was fully enabled and that no further units were physically present. The only new feature compared to the GM204 is the GM206's VP7 video processor, which makes it the first GPU able to both decode and encode the H.265 codec (HEVC), whereas previous GPUs could only encode it. In addition, the HDMI 2.0 output now supports HDCP 2.2 copy protection, allowing future Blu-rays in Ultra HD resolution (4K) to be played back.
On January 22, 2015, Nvidia presented the GeForce GTX 960, the first card based on the GM206 graphics processor. The GM206-300-A1 is used in full configuration; at Full HD the GeForce GTX 960 achieves on average 9% higher performance than its predecessor, the GeForce GTX 760, making it roughly as fast as AMD's rival Radeon R9 280 and 285. The low performance gain over the predecessor was viewed critically in the trade press, especially since the GeForce GTX 970 and 980 had achieved significantly better results here. Due to the combination of only 2 GB of video memory and the narrow 128-bit memory interface, the card's "future-proofness" was questioned, as memory limitations were expected to appear quickly with rising requirements. On the other hand, the low power consumption of around 109 watts in 3D applications was seen positively: the equally fast AMD models need almost twice as much, at 189 watts (R9 280) and 183 watts (R9 285), and even the slower predecessor is significantly higher at 155 watts. Since Nvidia again did not provide a reference design, no general statement could initially be made about the card's noise levels, but the low power dissipation allows board partners to install very quiet cooling solutions.
On August 20, 2015, Nvidia presented the GeForce GTX 950, the second variant with a GM206 graphics processor. On the GM206-250-A1, however, two of the eight shader clusters are deactivated, leaving only 768 stream processors and 48 texture units available. The GeForce GTX 950 thus achieved a performance roughly between the GeForce GTX 750 Ti and the GeForce GTX 960. Compared to the similarly priced AMD Radeon R9 270X, the GeForce GTX 950 was around 3% slower (although the Radeon R9 270X was already in end-of-life status at that point, i.e. remaining stock was being sold off). Against the actual AMD competitor, the Radeon R7 370, the GeForce GTX 950 showed around 10% better performance. Nevertheless, the list price of US$159 was generally considered too high in the trade press and criticized accordingly.
Controversy over incorrect memory specifications for the GeForce GTX 970
At the beginning of January 2015, users of the GeForce GTX 970 reported in Internet forums that their graphics card rarely used more than 3.5 GB of video memory, or that speed dropped sharply as soon as memory usage exceeded the 3.5 GB mark. Nvidia subsequently confirmed on January 25, 2015 that the GeForce GTX 970 can access only 3.5 of its 4 GB of video memory at full speed. The reason is that the GM204 GPU on the GeForce GTX 970 is trimmed down more than Nvidia originally stated: only 56 of the 64 ROPs and only seven eighths of the level 2 cache are enabled, reducing the level 2 cache from 2048 kB to 1792 kB. While the lower number of ROPs hardly matters in practice, the missing cache slice means that not all eight memory controllers of the GM204 GPU, each of which is assigned a 512 MB memory module, can be used simultaneously: two memory controllers must share one cache slice, and thus also the connection to the graphics processor, through alternating access. As a result, the memory has to be divided into two areas that cannot be accessed at the same time. The partitioning on the GTX 970 is arranged so that one of the two affected memory controllers is used together with the six fully cached ones, while the second affected controller lies idle, keeping the shared crossbar connection and cache slice free for the first. The GeForce driver therefore tries to use only the fully cached 3.5 GB of the GTX 970's video memory. Nvidia also denied earlier assumptions that the observed memory behavior of the GeForce GTX 970 was a driver bug.
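The practical effect of the partitioning can be sketched from the figures above (512 MB behind each of the eight memory controllers, 32 bits per controller, 3506 MHz effective GDDR5 clock); the resulting per-segment bandwidths match the 196 + 28 GB/s split given in the data overview:

```python
# Sketch of the GTX 970 memory partitioning: seven of the eight
# 512 MB modules sit behind fully cached controllers, the eighth
# shares a cache/crossbar port and is normally left idle by the driver.
def bandwidth_gbps(bus_bits: int, effective_clock_mhz: float) -> float:
    # GDDR5 transfers two data words per effective clock as quoted here
    return 2 * effective_clock_mhz * (bus_bits // 8) / 1000

fast = bandwidth_gbps(7 * 32, 3506)  # 3.5 GB segment over 224 bits
slow = bandwidth_gbps(1 * 32, 3506)  # 0.5 GB segment over 32 bits
print(round(fast, 1), round(slow, 1))  # → 196.3 28.0
```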
Although scenarios in which this leads to measurable problems are extremely rare, the issue triggered massive criticism in the specialist press and in Internet forums. The criticism was directed not so much at the technical configuration of the GeForce GTX 970 as at the deviations from the previously advertised product specifications and Nvidia's hesitant handling of the questions raised. Some online retailers then independently offered exchange programs for the GeForce GTX 970, although from a legal point of view there was no obligation to do so, as there was no product defect. In the weeks that followed, sales of the GeForce GTX 970 briefly fell by around 30%. In the United States, a class-action lawsuit was filed against Nvidia in late February over what the plaintiffs considered misleading advertising claims. On February 24, 2015, Nvidia's CEO and co-founder Jen-Hsun Huang commented on the memory issue. According to Huang, the memory configuration of the GeForce GTX 970 is a "feature" that makes it possible to equip the graphics card with four instead of three gigabytes of graphics memory. This explanation is highly questionable from a technical standpoint, since partially deactivated GM204 graphics processors were also used in notebook models, yet video memory sizes of up to 8 GB that the GPU could access at full speed were still possible there (see GeForce GTX 980M and 980MX). On July 28, 2016, Nvidia agreed to pay affected customers in the United States US$30 in compensation each; in addition, Nvidia will pay US$1.3 million in attorney fees from the preceding class-action lawsuit.
Data overview
Graphics processors
| Graphics chip | Production process | Transistors | Die area | Raster engines | ROPs | SM | ALUs | Texture units | L2 cache | DirectX | OpenGL | OpenCL | Vulkan | CUDA | Video processor | Bus interface |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GM200 | 28 nm | 8 billion | 601 mm² | 6 | 96 | 24 | 3072 | 192 | 3072 KB | 12.1 | 4.6 | 1.2 | 1.1.126 | 5.2 | VP6 | PCIe 3.0 |
| GM204 | 28 nm | 5.2 billion | 398 mm² | 4 | 64 | 16 | 2048 | 128 | 2048 KB | 12.1 | 4.6 | 1.2 | 1.1.126 | 5.2 | VP6 | PCIe 3.0 |
| GM206 | 28 nm | 2.94 billion | 227 mm² | 2 | 32 | 8 | 1024 | 64 | 1024 KB | 12.1 | 4.6 | 1.2 | 1.1.126 | 5.2 | VP7 | PCIe 3.0 |
Desktop model data
| Model | Official launch | GPU | ROPs | SM | ALUs | Texture units | Base clock | Boost clock | Memory size | Memory clock | Memory interface | FP32 (GFlops) | FP64 (GFlops) | Pixel fill rate (GP/s) | Texel fill rate (GT/s) | Memory bandwidth (GB/s) | MGCP | Idle power | 3D load power |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce GTX 950 (OEM) | March 1, 2016 | GM206-251-A1 | 32 | 6 | 768 | 48 | 1026 MHz | 1190 MHz | 2 GB GDDR5 | 3306 MHz | 128 bit | 1828 | 57.1 | 28.6 | 57.1 | 105.8 | 75 W | n/a | n/a |
| GeForce GTX 950 (OEM) | March 1, 2016 | GM206-300-A1 | 32 | 8 | 1024 | 64 | 937 MHz | 1203 MHz | 4 GB GDDR5 | 2506 MHz | 128 bit | 2464 | 77 | 38.5 | 77 | 80.2 | n/a | n/a | n/a |
| GeForce GTX 950 | Aug 20, 2015 | GM206-250-A1 | 32 | 6 | 768 | 48 | 1024 MHz | 1188 MHz | 2 GB GDDR5 | 3306 MHz | 128 bit | 1825 | 57 | 28.5 | 57 | 105.8 | 90 W | 9 W | 92 W |
| GeForce GTX 960 (OEM) | Nov 26, 2015 | GM206-300-A1 | 32 | 8 | 1024 | 64 | 1176 MHz | 1201 MHz | 4 GB GDDR5 | 3506 MHz | 128 bit | 2460 | 76.9 | 38.4 | 76.9 | 112.2 | n/a | n/a | n/a |
| GeForce GTX 960 (OEM) | Nov 26, 2015 | GM204-150-A1 | 48 | 10 | 1280 | 80 | 924 MHz | — | 3 GB GDDR5 | 2506 MHz | 192 bit | 2365 | 73.9 | 37 | 73.9 | 120.3 | n/a | n/a | n/a |
| GeForce GTX 960 | Jan 22, 2015 | GM206-300-A1 | 32 | 8 | 1024 | 64 | 1127 MHz | 1178 MHz | 2 GB GDDR5 | 3506 MHz | 128 bit | 2413 | 75.4 | 37.7 | 75.4 | 112.2 | 120 W | 10 W | 109 W |
| GeForce GTX 970 | Sep 19, 2014 | GM204-200-A1 | 56 | 13 | 1664 | 104 | 1051 MHz | 1178 MHz | 4 GB GDDR5 (3584 + 512 MB) | 3506 MHz | 256 bit (224 + 32 bit) | 3920 | 122.5 | 61.3 | 122.5 | 224.2 (196 + 28) | 145 W | 12 W | 161 W |
| GeForce GTX 980 | Sep 19, 2014 | GM204-400-A1 | 64 | 16 | 2048 | 128 | 1126 MHz | 1216 MHz | 4 GB GDDR5 | 3506 MHz | 256 bit | 4981 | 155.6 | 77.8 | 155.6 | 224.2 | 165 W | 12 W | 174 W |
| GeForce GTX 980 Ti | Jun 1, 2015 | GM200-310-A1 | 96 | 22 | 2816 | 176 | 1002 MHz | 1076 MHz | 6 GB GDDR5 | 3506 MHz | 384 bit | 6060 | 189.4 | 94.7 | 189.4 | 336.6 | 250 W | 13 W | 236 W |
| GeForce GTX Titan X | March 17, 2015 | GM200-400-A1 | 96 | 24 | 3072 | 192 | 1002 MHz | 1089 MHz | 12 GB GDDR5 | 3506 MHz | 384 bit | 6691 | 209.1 | 104.5 | 209.1 | 336.6 | 250 W | 13 W | 240 W |
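The theoretical figures in the data table above can be reproduced from the unit counts and the boost clock. A sketch using the GeForce GTX 980 as an example; the cap of 4 pixels per shader cluster on the pixel fill rate is an assumption that matches the table's values for the cut-down chips (e.g. GTX 970 and 950):

```python
# Sketch: theoretical Maxwell performance figures from unit counts.
def maxwell_perf(alus, tmus, rops, sms, boost_mhz):
    ghz = boost_mhz / 1000
    fp32 = alus * 2 * ghz             # GFlops (fused multiply-add)
    fp64 = fp32 / 32                  # Maxwell DP rate of 1/32
    texel = tmus * ghz                # GT/s
    pixel = min(rops, 4 * sms) * ghz  # GP/s, assumed 4 pixels per SM cap
    return fp32, fp64, pixel, texel

# GeForce GTX 980: 2048 ALUs, 128 TMUs, 64 ROPs, 16 SMs, 1216 MHz boost
fp32, fp64, pixel, texel = maxwell_perf(2048, 128, 64, 16, 1216)
print(round(fp32), round(fp64, 1), round(pixel, 1), round(texel, 1))
# → 4981 155.6 77.8 155.6
```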
Notebook model data
| Model | Official launch | GPU | ROPs | SM | ALUs | Texture units | Base clock | Boost clock | Memory size | Memory clock | Memory interface | FP32 (GFlops) | FP64 (GFlops) | Pixel fill rate (GP/s) | Texel fill rate (GT/s) | Memory bandwidth (GB/s) | MGCP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Geforce 910M | March 13, 2015 | GK208B | 8 | 2 | 384 | 32 | 641 MHz | — | 2 GB DDR3 | 1001 MHz | 64 bit | 492 | 164.1 | 5.1 | 20.5 | 16 | 33 W |
| Geforce 920M | March 13, 2015 | GK208B | 8 | 2 | 384 | 32 | 954 MHz | — | 2 GB DDR3 | 900 MHz | 64 bit | 732 | 244.2 | 7.6 | 30.5 | 14.4 | 33 W |
| Geforce 920MX | March 25, 2016 | GM108 | 8 | 2 | 256 | 24 | 965 MHz | 993 MHz | 2 GB DDR3 | 900 MHz | 64 bit | 508 | 15.9 | 7.9 | 23.8 | 14.4 | 16 W |
| Geforce 930M | March 13, 2015 | GM108 | 8 | 3 | 384 | 24 | 928 MHz | 941 MHz | 2 GB DDR3 | 900 MHz | 64 bit | 723 | 22.6 | 7.5 | 22.6 | 14.4 | 33 W |
| Geforce 930MX | March 1, 2016 | GM108 | 8 | 3 | 384 | 24 | 952 MHz | 1000 MHz | 2 GB DDR3 | 900 MHz | 64 bit | 783 | 24.5 | 8.2 | 24.5 | 14.4 | 17 W |
| Geforce 940M | March 13, 2015 | GM108 | 8 | 3 | 384 | 24 | 1072 MHz | 1176 MHz | 2 GB DDR3 | 900 MHz | 64 bit | 903 | 28.2 | 9.4 | 28.2 | 14.4 | 33 W |
| Geforce 940M | March 13, 2015 | GM107 | 16 | 4 | 512 | 32 | 1020 MHz | 1098 MHz | 2 GB DDR3 | 900 MHz | 64 bit | 1124 | 35.1 | 17.6 | 35.1 | 14.4 | 75 W |
| Geforce 940MX | Jun 28, 2016 | GM108 | 8 | 3 | 384 | 24 | 1004 MHz | 1242 MHz | 2 GB DDR3 | 1001 MHz | 64 bit | 954 | 29.8 | 9.9 | 29.8 | 16 | 23 W |
| Geforce 940MX | Jun 28, 2016 | GM108 | 8 | 3 | 384 | 24 | 954 MHz | 993 MHz | 1 GB GDDR5 | 2506 MHz | 64 bit | 763 | 23.8 | 7.9 | 23.8 | 40.1 | 23 W |
| Geforce 940MX | Jun 28, 2016 | GM107 | 8 | 4 | 512 | 32 | 795 MHz | 861 MHz | 2 GB GDDR5 | 2506 MHz | 64 bit | 882 | 27.6 | 6.9 | 27.6 | 40.1 | 23 W |
| Geforce 945M | Apr 8, 2016 | GM108 | 8 | 3 | 384 | 24 | 1006 MHz | 1189 MHz | 1 GB DDR3 | 1001 MHz | 64 bit | 913 | 28.5 | 9.5 | 28.5 | 16 | 23 W |
| Geforce 945M | Oct 27, 2015 | GM107 | 16 | 5 | 640 | 40 | 928 MHz | 1020 MHz | 2 GB DDR3 | 900 MHz | 128 bit | 1306 | 40.8 | 16.3 | 40.8 | 28.8 | 75 W |
| Geforce GTX 950M | March 13, 2015 | GM107 | 16 | 5 | 640 | 40 | 993 MHz | 1124 MHz | 4 GB DDR3 | 900 MHz | 128 bit | 1439 | 45 | 18 | 45 | 28.8 | 75 W |
| Geforce GTX 960M | March 13, 2015 | GM107 | 16 | 5 | 640 | 40 | 1097 MHz | 1176 MHz | 4 GB GDDR5 | 2506 MHz | 128 bit | 1505 | 47 | 18.8 | 47 | 80.2 | 75 W |
| Geforce GTX 965M | Jun 12, 2016 | GM206 | 32 | 8 | 1024 | 64 | 935 MHz | 1150 MHz | 2 GB GDDR5 | 2506 MHz | 128 bit | 2355 | 73.6 | 36.8 | 73.6 | 80.2 | n/a |
| Geforce GTX 965M | Jan 9, 2015 | GM204 | 32 | 8 | 1024 | 64 | 540 MHz | — | 2 GB GDDR5 | 2506 MHz | 128 bit | 1106 | 34.6 | 17.3 | 34.6 | 80.2 | n/a |
| Geforce GTX 965M | Jan 9, 2015 | GM204 | 32 | 8 | 1024 | 64 | 924 MHz | 950 MHz | 2 GB GDDR5 | 2506 MHz | 128 bit | 1946 | 60.8 | 30.4 | 60.8 | 80.2 | n/a |
| Geforce GTX 965M | Jan 9, 2015 | GM204 | 32 | 8 | 1024 | 64 | 935 MHz | — | 4 GB GDDR5 | 2506 MHz | 128 bit | 1915 | 59.8 | 29.9 | 59.8 | 80.2 | n/a |
| Geforce GTX 970M | Oct 7, 2014 | GM204 | 48 | 10 | 1280 | 80 | 924 MHz | 1038 MHz | 3 GB GDDR5 | 2506 MHz | 192 bit | 2657 | 83 | 41.5 | 83 | 120.3 | ~100 W |
| Geforce GTX 970M | Oct 7, 2014 | GM204 | 48 | 10 | 1280 | 80 | 924 MHz | 1038 MHz | 6 GB GDDR5 | 2506 MHz | 192 bit | 2657 | 83 | 41.5 | 83 | 120.3 | ~100 W |
| Geforce GTX 980M | Oct 7, 2014 | GM204 | 64 | 12 | 1536 | 96 | 540 MHz | — | 8 GB GDDR5 | 2506 MHz | 256 bit | 1659 | 51.8 | 25.9 | 51.8 | 160.4 | n/a |
| Geforce GTX 980M | Oct 7, 2014 | GM204 | 64 | 12 | 1536 | 96 | 1038 MHz | 1127 MHz | 8 GB GDDR5 | 2506 MHz | 256 bit | 3462 | 108.2 | 54.1 | 108.2 | 160.4 | ~122 W |
| GeForce GTX 980MX | Jun 1, 2016 | GM204 | 64 | 13 | 1664 | 104 | 1050 MHz | 1178 MHz | 8 GB GDDR5 | 3000 MHz | 256 bit | 3920 | 122.5 | 61.3 | 122.5 | 192 | 148 W |
| GeForce GTX 980 (notebook) | Nov 22, 2015 | GM204 | 64 | 16 | 2048 | 128 | 1064 MHz | 1190 MHz | 8 GB GDDR5 | 3506 MHz | 256 bit | 4874 | 152.3 | 76.2 | 152.3 | 224.4 | ~200 W |
Remarks
- ↑ a b The date indicated is the date of the public presentation, not the date of availability of the models.
- ↑ a b The specified performance values for the computing power via the stream processors, the pixel and texel fill rate, and the memory bandwidth are theoretical maximum values (with boost clock) that are not directly comparable with the performance values of other architectures. The overall performance of a graphics card depends, among other things, on how well the available resources can be used or fully utilized. There are also other factors that are not listed here that affect performance.
- ↑ a b c d The clock rates specified are the reference data recommended or prescribed by Nvidia; for the memory clock, the effective clock is given. The exact clock rate may deviate by a few megahertz owing to different clock generators, and the final definition of the clock rates is in the hands of the respective graphics card manufacturer. It is therefore entirely possible that graphics card models exist, or will exist, with different clock rates.
- ↑ a b The MGCP value given by Nvidia does not necessarily correspond to the maximum power consumption. This value is also not necessarily comparable with the TDP value of the competitor AMD.
- ↑ The measured values listed in the table relate to the pure power consumption of graphics cards that correspond to the Nvidia reference design. A special measuring device is required to record these values; depending on the measurement technology used and the given measurement conditions, including the program used to generate the 3D load, the values can fluctuate between different setups. Therefore, measured-value ranges are given here, each representing the lowest, typical and highest measured values from different sources.
Web links
Individual evidence
- ↑ NVIDIA breaks down DirectX 12 support for its own graphics cards. hardwareLUXX , June 1, 2015, accessed June 7, 2015 .
- ↑ a b Launch analysis: nVidia GeForce GTX Titan X. 3DCenter.org, March 18, 2015, accessed on June 10, 2015 .
- ↑ Tesla K80 - dual Kepler with up to 8.7 TFLOPS for supercomputers. ComputerBase, November 17, 2014, accessed August 6, 2015 .
- ↑ heise online: Quadro M6000: 6000 Euro high-end workstation card with 12 GB of RAM. Retrieved January 25, 2020 .
- ↑ Instead of Geforce 2018: Nvidia Quadro GV100 brings 32 GiByte HBM2 and Volta-GPU. March 28, 2018, accessed January 25, 2020 .
- ↑ GeForce GTX Titan X in the test: Nvidia's 4K graphics card with 12 GB memory (page 3). ComputerBase, March 17, 2015, accessed June 10, 2015 .
- ↑ GeForce GTX Titan X in the test: Nvidia's 4K graphics card with 12 GB memory (page 4). ComputerBase, March 17, 2015, accessed June 10, 2015 .
- ↑ GeForce GTX Titan X: Nvidia's flagship sells better than the original. ComputerBase, March 26, 2015, accessed June 10, 2015 .
- ↑ a b Launch analysis: nVidia GeForce GTX 980 Ti. 3DCenter.org, June 2, 2015, accessed on June 10, 2015 .
- ↑ Launch analysis: nVidia GeForce GTX 970 & 980. 3DCenter, September 19, 2014, accessed on February 3, 2015 .
- ↑ a b c d Nvidia GeForce GTX 980 and GTX 970 in the (SLI) test - efficient, high performance (page 6). Computerbase, September 19, 2014, accessed February 3, 2015 .
- ↑ Steam Hardware Survey: 1080p, two cores and Nvidia GPU in the front. Computerbase, August 12, 2015, accessed September 24, 2015 .
- ↑ Launch analysis: nVidia GeForce GTX 960. 3DCenter, January 23, 2015, accessed on February 3, 2015 .
- ↑ a b c Nvidia GeForce GTX 960 in the test - Maxwell for 200 euros with 128 bits. Computerbase, January 22, 2015, accessed February 3, 2015 .
- ↑ a b c d e f g h i j k l m Power consumption of current and past graphics cards. 3DCenter.org, February 23, 2014, accessed June 7, 2015 .
- ↑ Launch analysis nVidia GeForce GTX 950 (page 2). 3DCenter.org, August 23, 2015, accessed September 24, 2015 .
- ↑ GeForce GTX 970: Nvidia confirms memory limitations. Computerbase, January 25, 2015, accessed March 5, 2015 .
- ↑ a b c nVidia admits the "3.5 GB problem" of the GeForce GTX 970. 3DCenter, January 27, 2015, accessed February 3, 2015 .
- ↑ GeForce GTX 970: Correcting The Specs & Exploring Memory Allocation. January 26, 2015, accessed November 4, 2015 .
- ↑ The GeForce GTX 970 and the limited memory - An analysis. Computerbase, January 29, 2015, accessed February 3, 2015 .
- ↑ GeForce GTX 970: Nvidia leaves dealers and customers out in the rain. Heise online , accessed on January 29, 2015 .
- ↑ GTX 970: Sales are said to have plummeted in February. PC Games Hardware, June 9, 2015, accessed June 10, 2015 .
- ↑ GeForce GTX 970: Class action lawsuit against Nvidia. Heise online, February 24, 2015, accessed November 4, 2015 .
- ↑ Nvidia CEO makes GTX 970 limitation a new function. Computerbase, February 24, 2015, accessed May 3, 2015 .
- ↑ Michael Günsch: Wrong specifications: Nvidia compensates US buyers of the GeForce GTX 970. Accessed July 28, 2016 .
- ↑ NVIDIA GeForce GTX 950 (OEM). Nvidia Corporation. Retrieved November 23, 2017 .
- ↑ NVIDIA GeForce GTX 950. Nvidia Corporation, accessed August 20, 2015 .
- ↑ NVIDIA GeForce GTX 960 (OEM). Nvidia Corporation. Retrieved November 23, 2017 .
- ↑ NVIDIA GeForce GTX 960. Nvidia Corporation, accessed January 22, 2015 .
- ↑ NVIDIA GeForce GTX 970. Nvidia Corporation, accessed September 19, 2014 .
- ↑ a b c Nvidia corrects the specifications of the GeForce GTX 970. golem.de, January 25, 2015, accessed on November 12, 2015 .
- ↑ NVIDIA GeForce GTX 980. Nvidia Corporation, accessed September 19, 2014 .
- ↑ NVIDIA GeForce GTX 980 Ti. Nvidia Corporation, accessed June 1, 2015 .
- ↑ NVIDIA GeForce GTX Titan X. Nvidia Corporation, accessed March 17, 2015 .
- ↑ NVIDIA GeForce 910M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 920M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 920MX. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 930M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 930MX. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 940M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 940MX. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce 945M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 950M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 960M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 965M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 970M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 980M. Nvidia Corporation, accessed January 15, 2016 .
- ↑ NVIDIA GeForce GTX 980MX. Nvidia Corporation, accessed June 1, 2016 .
- ↑ NVIDIA GeForce GTX 980 (notebook). Nvidia Corporation, accessed January 15, 2016 .