Pitch Black (film) and Graphics processing unit: Difference between pages

{{redirect|GPU|other uses|GPU (disambiguation)}}
{{Unreferenced|date=August 2008}}
{{otheruses|Pitch Black (disambiguation)}}
{{Infobox Film
| name = Pitch Black
| image = Pitch Black poster.JPG
| caption = Theatrical Poster
| director = [[David Twohy]]
| producer = Tom Engelman
| writer = '''Story:'''<br>Jim Wheat<br>[[Ken Wheat]]<br>'''Screenplay:'''<br>Jim Wheat<br>Ken Wheat<br>David Twohy
| starring = [[Vin Diesel]]<br>[[Radha Mitchell]]<br>[[Cole Hauser]]<br>[[Keith David]]<br>[[Lewis Fitz-Gerald]]
| music = [[Graeme Revell]]
| cinematography = [[David Eggby]]
| editing = [[Rick Shaine]]
| distributor = [[Focus Features|USA Films]]<br>[[Universal Studios]]
| released = [[February 18]], [[2000]]
| runtime = 109 minutes
| language = [[English language|English]]<br>[[Arabic language|Arabic]]
| budget = $23,000,000
|gross = $53,187,659
| preceded_by = ''[[The Chronicles of Riddick (series)#Slam City|PITCHBLACK: Slam City]]''
| followed_by = ''[[The Chronicles of Riddick]]''
| amg_id = 1:181890
| imdb_id = 0134847
}}
'''''Pitch Black''''' (also known as '''''The Chronicles of Riddick: Pitch Black''''') is a [[2000 in film|2000]] [[science fiction]] action [[film|movie]] [[film director|directed]] by [[David Twohy]].


[[Image:6600GT GPU.jpg|thumb|[[GeForce 6 Series#GeForce 6600 Series|GeForce 6600GT (NV43)]] GPU]]
In the film, a dangerous criminal ([[Riddick|Richard B. Riddick]]) is being transported to prison in a cargo spacecraft. When the spaceship is damaged in a [[meteor shower]] and makes an emergency crash landing on an empty [[desert planet]], Riddick escapes. However, when predatory flying alien creatures called "Bioraptors" (dubbed "Big Boys" by Riddick) begin attacking the survivors, Riddick joins forces with the crew to develop a plan to escape the planet.
A '''graphics processing unit''' or '''GPU''' (also occasionally called '''visual processing unit''' or '''VPU''') is a dedicated graphics rendering device for a [[personal computer]], [[workstation]], or [[game console]]. Modern GPUs are very efficient at manipulating and displaying [[computer graphics]], and their highly parallel structure makes them more effective than general-purpose [[Central processing unit|CPUs]] for a range of complex [[algorithm]]s. A GPU can sit on top of a [[video card]], or it can be integrated directly into the [[motherboard]]. More than 90% of new desktop and notebook computers have integrated GPUs, which are usually far less powerful than those on a video card.<ref>{{ cite web | url = http://computershopper.com/feature/200704_the_right_gpu_for_you | title = Computer Shopper: The Right GPU for You | author = Denny Atkin | accessdate = 2007-05-15 }}</ref>


==Plot summary==
== History ==
=== Graphics accelerators ===
As the film opens, a cargo spacecraft on auto-pilot accidentally passes through a meteor shower. The ship crash-lands on a strange, brightly lit desert planet, and ten of those aboard survive. Among the survivors are Carolyn Fry, the ship's docking pilot; dangerous criminal Richard B. Riddick, headed for a new prison; his captor, William J. Johns; a [[Muslim]] [[Imam]] with his three sons Hassan, Ali and Suleiman; a young stowaway named Jack; an antique dealer named Paris; and two [[Australia]]n settlers, Zeke and Shazza.
*A GPU (Graphics Processing Unit) is a processor attached to a graphics card dedicated to performing the [[floating point]] calculations required for graphics rendering.
*A graphics accelerator incorporates custom microchips that implement in hardware the mathematical operations commonly used in graphics rendering; the efficiency of these chips therefore determines the effectiveness of the accelerator. They are mainly used for 3D games and high-end 3D rendering.
*A GPU implements a number of graphics [[Primitive (geometry)|primitive]] operations in a way that makes running them much faster than drawing directly to the screen with the host CPU. The most common operations for early [[2D computer graphics]] include the [[BitBLT]] operation (combines several [[bitmap]] patterns using a [[RasterOp]]), usually in special hardware called a ''"[[blitter]]"'', and operations for drawing [[rectangle]]s, [[triangle]]s, [[circle]]s, and [[Arc (geometry)|arc]]s. Modern GPUs also have support for [[3D computer graphics]], and typically include [[digital video]]–related functions.
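The raster operation at the heart of a BitBLT can be sketched in a few lines of code. The following CUDA kernel is purely illustrative (the function and parameter names are hypothetical, and historical blitters were fixed-function hardware rather than programmable processors); it only shows the classic masked copy, in which each destination pixel becomes <code>(src AND mask) OR (dst AND NOT mask)</code>, applied to every pixel in parallel.

<source lang="cpp">
// Illustrative sketch only: the per-pixel logic of a BitBLT-style masked copy,
// expressed as a CUDA kernel. Real 1980s blitters were fixed-function hardware
// driven by registers; only the raster operation itself is the point here.
__global__ void bitblt_masked_copy(const unsigned int* src,
                                   const unsigned int* mask,
                                   unsigned int* dst,
                                   int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int i = y * width + x;
    // Three-operand RasterOp: copy source pixels where the mask is set,
    // keep destination pixels where it is not.
    dst[i] = (src[i] & mask[i]) | (dst[i] & ~mask[i]);
}
// Would be launched over the whole rectangle, e.g.
// bitblt_masked_copy<<<dim3((w+15)/16, (h+15)/16), dim3(16,16)>>>(src, mask, dst, w, h);
</source>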


====1980s====
Riddick manages to escape, and Johns leaves to track him. When Zeke goes to bury the bodies of the ship's navigator and another dead colonist, he is attacked by unseen creatures and brutally killed. Riddick is caught shortly afterward by Johns, and Shazza accuses him of killing Zeke. At Riddick's urging, Fry investigates a cave and discovers that vicious creatures live underground.
[[Image:Pitch Black screenshot 3.jpg|left|thumb|350px|Visual effect screenshot from ''Pitch Black'']]


The [[Commodore Amiga]] was the first mass-market computer to include a [[blitter]] in its video hardware, and IBM's [[8514 (display standard)|8514]] graphics system was one of the first PC video cards to implement 2D primitives in hardware.
An abandoned [[mining]] settlement with a supply of water and a small escape ship is discovered several miles from the crash site. While exploring the deserted base, Hassan enters the large coring room, awakens a swarm of smaller creatures, and is found flayed of his skin. A month-long total eclipse of the planet is imminent, and the survivors must escape before it begins and the creatures emerge; to power the escape vehicle, they must retrieve power cells from the crashed ship.


The Amiga was unique for the time in that it featured what would now be recognized as a full graphics accelerator, offloading practically all video generation functions to hardware, including line drawing, area fill, block image transfer, and a graphics coprocessor with its own (though primitive) instruction set. Prior to this (and for quite some time afterward on most systems), a general-purpose CPU had to handle every aspect of drawing the display.
While the cells are being retrieved, the eclipse begins and the creatures attack. The group attempts to take shelter, but Shazza is violently torn apart by the smaller creatures during a dash for the wrecked ship. The survivors hide in a section of the cargo ship until the bioraptors find a way inside and a pair kills and eats Ali. Johns shoots one of the bioraptors, and on inspecting the corpse the group discovers that the creatures are extremely photosensitive, to the point where their skin burns when light is directed at it.

Electing to make a dash for the escape ship with the power cells, the seven arm themselves with lights and set out, with Riddick using his surgically altered eyes to lead them through the darkness. Their main source of light, brightly lit fiber-optic cable wrapped around the group, fails when Paris panics and attempts to flee, knocking over the power source and extinguishing the light; without its protection, Paris is quickly eaten. The remaining six reach a large canyon which leads to the settlement and salvation. Riddick warns that the entire canyon is filled with the "Big Boys" and, turning to Jack, adds that "the girl" is bleeding. Jack admits that she is a girl, having believed people would respect her more if they thought she was male. Johns quietly suggests to Riddick that, since Jack's blood will attract the creatures, they should kill her and drag the body behind them as a distraction. The suggestion provokes Riddick into attacking Johns, and the two fight inside a small circle of light created by flares. When the flares go out, Riddick slips into the shadows and Johns, left unprotected, is eaten by a bioraptor.

As they enter the canyon it begins to rain. The lights go out, and Suleiman is grabbed and injured by a creature before Fry wards it off with a flashlight; as they press on he is taken again, too quickly for anyone to help. With most of their light sources gone, the remaining three shelter in a cave while Riddick goes on to the ship alone. Inside the cave they discover glowworm-like slugs and fill bottles with them, and armed with this light Fry goes after Riddick. Impressed by Fry's instinct and skill in finding her way to the ship, Riddick callously suggests that she leave Imam and Jack behind. Fry initially accepts, but then overcomes her fear, pins Riddick to the ground and orders him to help rescue the remaining survivors. Riddick easily overpowers her and holds a knife to her throat, asking whether she is willing to die for them; when she says she would, Riddick, impressed once again, agrees to return for Imam and Jack, and the four make their way back toward the ship.

On the way, Riddick trails behind and is surrounded by two creatures. Fry, who has reached the skiff, finds him with a severe leg injury and tries to help him back, but she is struck and carried off by a bioraptor.


====1990s====
Riddick makes it back to the skiff to find Imam and Jack waiting. In a final stroke of revenge, he delays departure until the last second before engaging the engines at full throttle to incinerate the greatest possible number of advancing creatures.
[[Image:Dstealth32.jpg|thumb|[[Tseng Labs]] [[Tseng Labs ET4000|ET4000/W32p]]]]
[[Image:DIAMONDSTEALTH3D2000-top.JPG|thumb|[[S3 Graphics]] [[S3 ViRGE|ViRGE]]]]
[[Image:Voodoo3-2000AGP.jpg|thumb|[[3dfx]] [[Voodoo3]]]]
By the early 1990s, the rise of [[Microsoft Windows]] sparked a surge of interest in high-speed, high-resolution 2D [[Raster graphics|bitmapped graphics]] (which had previously been the domain of Unix workstations and the [[Apple Macintosh]]). For the PC market, the dominance of Windows meant PC graphics vendors could now focus development effort on a single programming interface, [[Graphics Device Interface]] (GDI).


In 1991, [[S3 Graphics]] introduced the first single-chip 2D accelerator, the ''[[S3 Graphics|S3 86C911]]'' (which its designers named after the [[Porsche 911]] as an indication of the speed increase it promised). The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function '''Windows accelerators''' had surpassed expensive general-purpose graphics coprocessors in Windows performance, and these coprocessors faded away from the PC market.
As the three are leaving the planet, Jack asks what they should say if they run into bounty hunters or other law enforcers asking where to find Riddick. He responds by saying, "Tell 'em Riddick's dead. He died somewhere on that planet."


Throughout the 1990s, 2D GUI acceleration continued to evolve. As manufacturing capabilities improved, so did the level of integration of graphics chips. Additional [[application programming interface]]s (APIs) arrived for a variety of tasks, such as Microsoft's [[WinG]] graphics library for [[Windows 3.1x|Windows 3.x]], and their later [[DirectDraw]] interface for hardware acceleration of 2D games within [[Windows 95]] and later.
==Cast==
* [[Vin Diesel]] as [[Riddick|Richard B. Riddick]]
* [[Radha Mitchell]] as Carolyn Fry
* [[Cole Hauser]] as William J. Johns
* [[Keith David]] as Abu "Imam" al-Walid
* [[Lewis Fitz-Gerald]] as Paris P. Ogilvie
* [[Claudia Black]] as Sharon "Shazza" Montgomery
* [[Rhiana Griffith]] as Jack / Jackie
* John Moore as John "Zeke" Ezekiel
* Simon Burke as Greg Owens
* Les Chantery as Suleiman
* Sam Sari as Hassan
* [[Firass Dirani]] as Ali
* Vic Wilson as Captain Tom Mitchell
* Angela Moore as Dead Crewmember


In the early and mid-1990s, [[CPU]]-assisted real-time 3D graphics were becoming increasingly common in computer and console games, which led to increasing public demand for [[3D acceleration|hardware-accelerated 3D graphics]]. Early examples of mass-marketed 3D graphics hardware can be found in [[History of video game consoles (fifth generation)|fifth generation video game consoles]] such as the [[PlayStation]] and [[Nintendo 64]]. In the PC world, notable failed first attempts at low-cost 3D graphics chips were the [[S3 Graphics|S3]] ''[[ViRGE]]'', [[ATI Technologies|ATI]] ''Rage'', and [[Matrox]] ''Mystique''. These chips were essentially previous-generation 2D accelerators with 3D features bolted on; many were even [[Pin-compatibility|pin-compatible]] with the earlier-generation chips for ease of implementation and minimal cost. Initially, high-performance 3D graphics were possible only with discrete boards dedicated to accelerating 3D functions (and lacking 2D GUI acceleration entirely), such as the [[3dfx]] ''Voodoo''. However, as manufacturing technology again progressed, video, 2D GUI acceleration, and 3D functionality were all integrated into one chip. [[Rendition (company)|Rendition's]] ''Verite'' chipsets were the first to do this well enough to be worthy of note.
==Related works==
{{main|The Chronicles of Riddick (series)}}


[[OpenGL]] appeared in the early 1990s as a professional graphics API, but became a dominant force on the PC and a driving force for hardware development. Software implementations of OpenGL were common during this time, although its influence eventually led to widespread hardware support. Over time, parity emerged between the features offered in hardware and those offered in OpenGL. [[DirectX]] became popular among [[Microsoft Windows|Windows]] game developers during the late 1990s. Unlike OpenGL, Microsoft insisted on strict one-to-one support of hardware. This approach initially made DirectX less popular as a standalone graphics API, since many GPUs provided their own specific features, which existing OpenGL applications were already able to benefit from, leaving DirectX often one generation behind. (See [[Comparison of OpenGL and Direct3D]].)
The film's [[sequel]], ''[[The Chronicles of Riddick]]'', was released in [[2004 in film|2004]] and was also directed by David Twohy. A short animated film, ''[[The Chronicles of Riddick: Dark Fury]]'', directed by Peter Chung, was released the same year; ''Dark Fury'' bridges the gap between ''Pitch Black'' and ''The Chronicles of Riddick''. In [[2000]] a prequel to ''Pitch Black'', ''[[Into Pitch Black]]'', was released, presented in the style of a [[Documentary film|documentary]]. ''[[The Chronicles of Riddick: Escape from Butcher Bay]]'', a game for the [[Xbox]] and the [[PC clone|PC]], was also released in 2004.

Over time, Microsoft began to work more closely with hardware developers and started to time the releases of [[DirectX]] to coincide with those of the supporting graphics hardware. [[Direct3D]] 5.0 was the first version of the [[wikt:burgeoning|burgeoning]] API to gain widespread adoption in the gaming market, and it competed directly with many hardware-specific, often proprietary, graphics libraries, while [[OpenGL]] maintained a strong following. [[Direct3D]] 7.0 introduced support for hardware-accelerated [[transform and lighting]] (T&L): 3D accelerators moved beyond being simple rasterizers and took over another significant stage of the 3D rendering pipeline. The [[NVIDIA]] ''[[GeForce 256]]'' (also known as NV10) was the first card on the market with this capability. Hardware transform and lighting, both already existing features of OpenGL, came to consumer hardware in the late 1990s and set the precedent for the later pixel shader and vertex shader units, which were far more flexible and programmable.

====2000 to present====
With the advent of the [[OpenGL]] API and similar functionality in [[DirectX]], GPUs added programmable [[Pixel shader|shading]] to their capabilities. Each pixel could now be processed by a short program that could include additional image textures as inputs, and each geometric vertex could likewise be processed by a short program before it was projected onto the screen. NVIDIA was first to produce a chip capable of programmable shading, the ''[[GeForce 3]]'' (code-named NV20). By October 2002, with the introduction of the [[ATI Technologies|ATI]] ''[[Radeon 9700 core|Radeon 9700]]'' (also known as R300), the world's first Direct3D 9.0 accelerator, pixel and vertex shaders could implement [[Control flow#Loops|looping]] and lengthy [[floating point]] math, and in general were quickly becoming as flexible as CPUs and orders of magnitude faster for image-array operations. Pixel shading is often used for effects such as [[bump mapping]], which adds texture to make an object look shiny, dull, rough, or even round or extruded.<ref>{{ cite web | url = http://www.blacksmith-studios.dk/projects/downloads/bumpmapping_using_cg.php | title = Bump Mapping Using CG (3rd Edition) | author = Søren Dreijer | accessdate = 2007-05-30 }}</ref>
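As a rough illustration of the per-pixel programs described above, the sketch below computes simple diffuse (Lambertian) lighting from a per-pixel normal map, which is the arithmetic core of basic bump mapping. Real pixel shaders are written in shading languages such as Cg, HLSL or GLSL and run inside the graphics pipeline; this stand-alone CUDA kernel, with hypothetical names, only shows the kind of computation involved.

<source lang="cpp">
// Illustration only: the arithmetic of simple normal-map ("bump") lighting,
// written as a CUDA kernel rather than a real pixel shader.
__global__ void diffuse_from_normal_map(const float3* normals,  // per-pixel surface normals (unit length)
                                        const float3* albedo,   // per-pixel base color
                                        float3 light_dir,       // unit vector towards the light
                                        float3* out_color,
                                        int n_pixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pixels) return;

    float3 n = normals[i];
    // Lambertian term: clamp(N . L, 0, 1)
    float ndotl = n.x * light_dir.x + n.y * light_dir.y + n.z * light_dir.z;
    ndotl = fmaxf(0.0f, fminf(1.0f, ndotl));

    out_color[i] = make_float3(albedo[i].x * ndotl,
                               albedo[i].y * ndotl,
                               albedo[i].z * ndotl);
}
</source>

Because the per-pixel normal varies across the surface, flat geometry lit this way appears bumpy, rough or extruded without any extra polygons.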

As the processing power of GPUs has increased, so has their demand for electrical power. High-performance GPUs often consume more energy than current CPUs.<ref>{{ cite web | url = http://www.xbitlabs.com/articles/video/display/power-noise.html | title = X-bit labs: Faster, Quieter, Lower: Power Consumption and Noise Level of Contemporary Graphics Cards }}</ref> See also [[performance per watt]] and [[quiet PC]].

Today, [[Parallel computing|parallel]] GPUs have begun making computational inroads against the CPU, and a subfield of research, dubbed [[GPGPU]] for ''General Purpose Computing on GPU'', has found its way into fields as diverse as [[oil exploration]], scientific [[image processing]], and even [[stock options]] pricing. GPGPU users are putting increasing pressure on GPU manufacturers to improve hardware design, usually by adding more flexibility to the programming model.{{Fact|date=February 2007}}

===GPU companies===
Many companies have produced GPUs over the years, under numerous brand names. The market is currently dominated by [[AMD]] (manufacturer of the [[ATI Radeon]] and ATI FireGL graphics chip lines) and [[NVIDIA]] (manufacturer of the [[NVIDIA Geforce|GeForce]] and [[NVIDIA Quadro|Quadro]] graphics chip lines). [[Intel]] is steadily moving into the graphics market as it evolves its integrated graphics products to better compete with the add-on GPUs offered by ATI and NVIDIA.

==Computational functions==

Modern GPUs use most of their [[transistor]]s to perform calculations related to [[3D computer graphics]]. They were initially used to accelerate the memory-intensive work of [[texture mapping]] and [[rendering]] polygons, later adding units to accelerate [[geometry|geometric]] calculations such as the [[rotation]] and [[Translation (geometry)|translation]] of [[vertex (geometry)|vertices]] into different [[coordinate system]]s. Recent developments in GPUs include support for [[programmable shader]]s which can manipulate vertices and textures with many of the same operations supported by [[Central processing unit|CPUs]], [[oversampling]] and [[interpolation]] techniques to reduce [[aliasing]], and very high-precision [[color space]]s. Because most of these computations involve [[Matrix (mathematics)|matrix]] and [[Vector calculus|vector]] operations, engineers and scientists have increasingly studied the use of GPUs for non-graphical calculations.
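To make the matrix and vector character of this work concrete, the hypothetical sketch below applies one 4×4 matrix, which can encode both a rotation and a translation in homogeneous coordinates, to an array of vertices with one GPU thread per vertex. Fixed-function T&L units and, later, vertex shaders perform essentially this operation; the code is only an illustration of the arithmetic.

<source lang="cpp">
// Sketch: transforming vertices into another coordinate system with a 4x4
// matrix in homogeneous coordinates, one CUDA thread per vertex.
struct Mat4 { float m[16]; };   // hypothetical helper type, row-major storage

__global__ void transform_vertices(Mat4 t, const float4* in, float4* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 v = in[i];
    out[i] = make_float4(
        t.m[0]  * v.x + t.m[1]  * v.y + t.m[2]  * v.z + t.m[3]  * v.w,
        t.m[4]  * v.x + t.m[5]  * v.y + t.m[6]  * v.z + t.m[7]  * v.w,
        t.m[8]  * v.x + t.m[9]  * v.y + t.m[10] * v.z + t.m[11] * v.w,
        t.m[12] * v.x + t.m[13] * v.y + t.m[14] * v.z + t.m[15] * v.w);
}
</source>

With the rotation held in the upper-left 3×3 block and the translation in the last column, a single launch of this kind moves an entire mesh from one coordinate system to another.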

In addition to the 3D hardware, today's GPUs include basic 2D acceleration and framebuffer capabilities (usually with a VGA compatibility mode). In addition, most GPUs made since 1995 support the [[YUV]] [[color space]] and [[hardware overlay]]s (important for [[digital video]] playback), and many GPUs made since 2000 support [[MPEG]] primitives such as [[motion compensation]] and [[inverse discrete cosine transform|iDCT]]. Recent graphics cards even decode [[high-definition video]] on the card, taking some load off the central processing unit.
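As a small, hypothetical illustration of these video functions, the kernel below converts YCbCr pixels (the form in which most MPEG-era video is stored, loosely called YUV) to RGB using the common ITU-R BT.601 approximation. On a real GPU this conversion is normally performed by fixed-function video hardware or an overlay engine rather than by user code.

<source lang="cpp">
// Sketch: BT.601-style YCbCr -> RGB conversion, one CUDA thread per pixel.
__device__ unsigned char clamp_u8(float v)
{
    return (unsigned char)fminf(255.0f, fmaxf(0.0f, v));
}

__global__ void ycbcr_to_rgb(const unsigned char* y, const unsigned char* cb,
                             const unsigned char* cr, unsigned char* rgb,
                             int n_pixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_pixels) return;

    float Y  = (float)y[i];
    float Cb = (float)cb[i] - 128.0f;
    float Cr = (float)cr[i] - 128.0f;

    rgb[3 * i + 0] = clamp_u8(Y + 1.402f * Cr);                 // R
    rgb[3 * i + 1] = clamp_u8(Y - 0.344f * Cb - 0.714f * Cr);   // G
    rgb[3 * i + 2] = clamp_u8(Y + 1.772f * Cb);                 // B
}
</source>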

==GPU forms==
===Dedicated graphics cards===

{{main | Video card}}

The most powerful class of GPUs typically interface with the [[motherboard]] by means of an [[expansion slot]] such as [[PCI Express]] (PCIe) or [[Accelerated Graphics Port]] (AGP) and can usually be replaced or upgraded with relative ease, assuming the motherboard is capable of supporting the upgrade. A few [[graphics cards]] still use [[Peripheral Component Interconnect]] (PCI) slots, but their bandwidth is so limited that they are generally used only when a PCIe or AGP slot is unavailable.

A dedicated GPU is not necessarily removable, nor does it necessarily interface with the motherboard in a standard fashion. The term "dedicated" refers to the fact that dedicated graphics cards have [[RAM]] that is dedicated to the card's use, not to the fact that '''most''' dedicated GPUs are removable. Dedicated GPUs for portable computers are most commonly interfaced through a non-standard and often proprietary slot due to size and weight constraints. Such ports may still be considered PCIe or AGP in terms of their logical host interface, even if they are not physically interchangeable with their counterparts.

Multiple cards can work together to draw a single image, so that the number of pixels rendered can be doubled and [[antialiasing]] can be set to a higher quality. If the screen is split into left and right halves, each card can cache the textures and geometry for its own half (see [[Scalable Link Interface]] (SLI) and [[ATI CrossFire]]).

===Integrated graphics solutions===
[[Image:Harumphy.dg965.heatsink.jpg|thumb|[[Intel GMA]] X3000 IGP (under heatsink)]]

'''Integrated graphics solutions''', or '''shared graphics solutions''', are graphics processors that use a portion of a computer's system RAM rather than dedicated graphics memory. Computers with integrated graphics account for 90% of all PC shipments.<ref>[http://www.anandtech.com/mb/showdoc.aspx?i=3111&p=23 AnandTech: µATX Part 2: Intel G33 Performance Review]</ref> These solutions are cheaper to implement than dedicated graphics solutions, but are less capable. Historically, integrated solutions were often considered unfit for 3D games or graphically intensive programs such as Adobe Flash{{Fact|date=October 2007}} (examples of such IGPs would be offerings from SiS and VIA circa 2004).<ref>{{ cite web | url = http://www.xbitlabs.com/articles/chipsets/display/int-chipsets-roundup.html | title = Xbit Labs: Roundup of 7 Contemporary Integrated Graphics Chipsets for Socket 478 and Socket A Platforms | author = Tim Tscheblockov | accessdate = 2007-06-03 }}</ref> However, today's integrated solutions, such as Intel's [[Intel GMA|GMA X3000]] ([[List_of_Intel_chipsets#Core_2_Chipsets|Intel G965 chipset]]), AMD's Radeon HD 3200 ([[AMD 780G]] chipset) and NVIDIA's GeForce 8200 ([[nForce 710|NVIDIA nForce 730a]]), are more than capable of handling 2D graphics from Adobe Flash or low-stress 3D graphics,<ref>[http://www.extremetech.com/article2/0,2845,2121192,00.asp Intel G965 with GMA X3000 Integrated Graphics - Media Encoding and Game Benchmarks - CPUs, Boards & Components by ExtremeTech]</ref> although they still struggle with high-end video games. Modern desktop motherboards often include an integrated graphics solution and have expansion slots available to add a dedicated graphics card later.

Because a GPU is extremely memory intensive, an integrated solution, having no dedicated video memory, must compete with the CPU for the comparatively slow system RAM. System RAM typically offers 2 GB/s to 12.8 GB/s of bandwidth, while dedicated GPUs enjoy between 10 GB/s and over 100 GB/s, depending on the model.
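As a rough rule of thumb (an illustration, not a figure taken from the sources above), peak theoretical memory bandwidth is the product of the memory bus width and the effective transfer rate:

<math>\text{peak bandwidth} \approx \frac{\text{bus width in bits}}{8} \times \text{effective transfers per second}</math>

For example, a dedicated card with a 256-bit memory bus running at an effective 2 gigatransfers per second reaches roughly 32 bytes × 2×10<sup>9</sup>/s = 64 GB/s, whereas dual-channel DDR2-800 system RAM (an effective 128-bit bus at 800 megatransfers per second) peaks at about 12.8 GB/s, and that bandwidth must also serve the CPU.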

Older integrated graphics chipsets lacked hardware transform and lighting, but newer ones include it.<ref>{{ cite web | url = http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/Integrated_Graphics_Solutions_white_paper_rev61.pdf | title = Integrated Graphics Solutions for Graphics-Intensive Applications | author = Bradley Sanford | accessdate = 2007-09-02 }}</ref>

===Hybrid solutions===

This newer class of GPUs competes with integrated graphics in the low-end desktop and notebook markets. The most common implementations are ATI's [[HyperMemory]] and NVIDIA's [[TurboCache]]. Hybrid graphics cards are somewhat more expensive than integrated graphics, but much less expensive than dedicated graphics cards. Like integrated solutions they share memory with the system, but they also carry a small amount of dedicated memory of their own to make up for the high [[Memory latency|latency]] of system RAM, relying on the bandwidth of PCI Express to reach system memory. While these solutions are sometimes advertised as having as much as 768MB of RAM, this figure refers to how much memory can be shared with the system.

=== Stream Processing and General Purpose GPUs (GPGPU) ===
{{main|GPGPU|Stream processing}}
A newer concept is to use a modified form of a [[stream processing|stream processor]] to create a [[GPGPU|general-purpose graphics processing unit]]. This turns the massive [[floating-point]] computational power of a modern graphics accelerator's shader pipeline into general-purpose computing power, instead of leaving it hard-wired solely for graphical operations. In certain applications requiring massive vector operations, this can yield several orders of magnitude higher performance than a conventional CPU. The two largest discrete GPU designers (see "Dedicated graphics cards" above), [[ATI Technologies|ATI]] and [[NVIDIA]], are beginning to pursue this new market with an array of applications. ATI has teamed with [[Stanford University]] to create a GPU-based client for Stanford's [[Folding@Home]] distributed computing project (for protein folding calculations) that in certain circumstances yields results forty times faster than the conventional CPUs traditionally used in such applications.<ref>{{ cite web | url = http://www.engadget.com/2006/09/29/stanford-university-tailors-folding-home-to-gpus/ | title = Stanford University tailors Folding@home to GPUs | author = Darren Murph | accessdate = 2007-10-04 }}</ref><ref>{{ cite web | url = http://graphics.stanford.edu/~mhouston/ | title = Folding@Home - GPGPU | author = Mike Houston | accessdate = 2007-10-04 }}</ref>

NVIDIA has recently begun releasing cards supporting an API extension to the [[C (programming language)|C]] programming language called [[CUDA]] ("Compute Unified Device Architecture"), which allows specified functions from a normal C program to run on the GPU's stream processors. This makes C programs capable of taking advantage of a GPU's ability to operate on large matrices in parallel, while still making use of the CPU where appropriate. CUDA is also the first API to allow CPU-based applications to directly access the resources of a GPU for general-purpose computing without the limitations of using a graphics API.
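A minimal sketch of this programming model (written for illustration, not taken from NVIDIA's documentation): an ordinary C-style function is marked to run on the GPU, and the host launches it across many data elements at once, one thread per element.

<source lang="cpp">
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Minimal CUDA sketch: y = a*x + y over a large array, one GPU thread per element.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float* hx = (float*)malloc(bytes);
    float* hy = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

    printf("y[0] = %f\n", hy[0]);   // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
</source>

Written for the CPU, the same loop would process one element at a time; here the body of <code>saxpy</code> is executed by roughly a million lightweight GPU threads, which is the property GPGPU applications exploit.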

Since 2005 there has been interest in using the speed offered by GPUs for [[evolutionary computation]] in general, and for accelerating the [[Fitness (genetic algorithm)|fitness]] evaluation in [[genetic programming]] in particular (a short introduction appears on pages 90–92 of ''A Field Guide to Genetic Programming''). Most approaches compile [[linear genetic programming|linear]] or [[genetic programming|tree]] programs on the host PC and transfer the executable to the GPU to run. Typically the speed advantage is obtained only by running the single active program simultaneously on many example problems in parallel, using the GPU's [[SIMD]] architecture.<ref>{{ cite web | url = http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/eurogp07_harding.html | title = Fast genetic programming on GPUs | author = S Harding and W Banzhaf | accessdate = 2008-05-01 }}</ref> However, substantial speedup can also be obtained by not compiling the programs at all, but instead transferring them to the GPU and interpreting them there.<ref>{{ cite web | url = http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/langdon_2008_eurogp.html | title = A SIMD interpreter for Genetic Programming on GPU Graphics Cards | author = W Langdon and W Banzhaf | accessdate = 2008-05-01 }}</ref> Further speedup then comes from interpreting multiple programs simultaneously, running multiple example problems simultaneously, or both. A modern GPU (''e.g.'' the [[GeForce 8 Series|8800 GTX]]) can readily interpret hundreds of thousands of very small programs at once.
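The "transfer and interpret" approach can be sketched as follows. The opcode set, structure layout and function names below are hypothetical (they are not taken from the cited papers); the point is only that every thread runs the same interpreter loop over the same evolved program but on its own fitness case, which matches the GPU's SIMD execution model.

<source lang="cpp">
// Sketch: a tiny stack-based interpreter for one evolved program, evaluated
// over many fitness cases in parallel (one CUDA thread per case).
enum Op { OP_CONST, OP_VAR_X, OP_ADD, OP_SUB, OP_MUL };

struct Instr { int op; float value; };   // 'value' is used only by OP_CONST

__global__ void evaluate_cases(const Instr* prog, int prog_len,
                               const float* case_inputs, float* case_outputs,
                               int n_cases)
{
    int c = blockIdx.x * blockDim.x + threadIdx.x;
    if (c >= n_cases) return;

    float stack[32];
    int sp = 0;
    float x = case_inputs[c];   // this thread's fitness case

    for (int p = 0; p < prog_len; ++p) {
        switch (prog[p].op) {
            case OP_CONST: stack[sp++] = prog[p].value; break;
            case OP_VAR_X: stack[sp++] = x;             break;
            case OP_ADD: { float b = stack[--sp]; stack[sp - 1] += b; break; }
            case OP_SUB: { float b = stack[--sp]; stack[sp - 1] -= b; break; }
            case OP_MUL: { float b = stack[--sp]; stack[sp - 1] *= b; break; }
        }
    }
    case_outputs[c] = stack[sp - 1];   // program output for this case
}
</source>

Fitness is then computed by comparing <code>case_outputs</code> with the desired values, either on the GPU or back on the host.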


==See also==
*[[Processing unit]]
* [[Solar eclipses in fiction]]
* [[Grue]]
*[[Video card]]
*[[Computer graphics]]
*[[Computer hardware]]
*[[Video game console|Game console]]
*[[Ray tracing hardware]]
*[[Monitors]]
*[[Physics Processing Unit]]
*[[GPU cluster]]
*[[Comparison of ATI Graphics Processing Units]]
*[[Comparison of Nvidia Graphics Processing Units]]
*[[Intel GMA]]
*Intel's upcoming GPU, [[Larrabee (GPU)|Larrabee]].


==References==
{{reflist}}


==External links==
*[http://www.youtube.com/watch?v=3NBGzZ_InTM Video] of [[Jamie Hyneman]] and [[Adam Savage]] demonstrating the essence of a GPU with a massive paintball gun.
{{wikiquote|Pitch Black}}
* [http://www.pitchblack.com/ Official movie website]
*[http://www.nvidia.com/content/nsist/module/what_gpu.asp NVIDIA - What is a GPU?]
*The [http://developer.nvidia.com/object/gpu_gems_2_home.html ''GPU Gems'' book series]
* {{imdb title|id=0134847|title=Pitch Black}}
*[http://www.tomshardware.com/2006/08/08/graphics_beginners_3/ Toms Hardware GPU beginners' Guide]
* {{rotten-tomatoes|id=pitch_black|title=Pitch Black}}
*[http://www.gpgpu.org/ General-Purpose Computation Using Graphics Hardware]
* {{mojo title|id=pitchblack|title=Pitch Black}}
*[http://www.computer.org/portal/site/computer/menuitem.5d61c1d591162e4b0ef1bd108bcd45f3/index.jsp?&pName=computer_level1_article&TheCat=1055&path=computer/homepage/Feb07&file=howthings.xml&xsl=article.xsl&;jsessionid=G10s8pkpkP1K0Lk07bXx5dR0mXLSj8hXdnLDN5Kjj5GZTJtTTLZ0!1592783441 How GPUs work]

*{{HSW|39-how-to-install-a-graphics-card-video|How to Install a Graphics Card}}
{{The Chronicles of Riddick}}
*[http://www.agilemolecule.com/Ascalaph/Ascalaph-Liquid.html Ascalaph Liquid GPU] [[molecular dynamics]].
{{Processing units}}


[[Category:2000 films]]
[[Category:Graphics hardware]]
[[Category:American films]]
[[Category:Graphics cards]]
[[Category:English-language films]]
[[Category:Virtual reality]]
[[Category:Films shot in Super 35]]
[[Category:Independent films]]
[[Category:Monster movies]]
[[Category:Science fiction action films]]
[[Category:Science fiction horror films]]
[[Category:Space adventure films]]
[[Category:The Chronicles of Riddick films]]


[[bs:Grafički procesor]]
[[de:Pitch Black – Planet der Finsternis]]
[[ca:Unitat de Procés Gràfic]]
[[es:Pitch Black]]
[[fr:Pitch Black]]
[[cs:GPU]]
[[it:Pitch Black]]
[[de:Grafikprozessor]]
[[es:Graphics Processing Unit]]
[[he:פיץ' בלאק]]
[[fa:واحد پردازش گرافیکی]]
[[lt:Visiška tamsa]]
[[fr:Processeur graphique]]
[[hu:Pitch Black - 22 évente sötétség]]
[[gl:Unidade de Procesamento Gráfico]]
[[nl:Pitch Black]]
[[ko:그래픽 처리 장치]]
[[pl:Pitch Black]]
[[it:Graphics Processing Unit]]
[[ru:Чёрная дыра (фильм, 2000)]]
[[fi:Pimeän uhka]]
[[he:מעבד גרפי]]
[[kk:Бейне бейімдеуіш]]
[[sv:Pitch Black]]
[[zh:星際傳奇]]
[[lv:GPU]]
[[ms:Unit pemprosesan grafik]]
[[nl:GPU]]
[[ja:Graphics Processing Unit]]
[[no:GPU]]
[[pl:Procesor karty graficznej]]
[[pt:Unidade de processamento gráfico]]
[[ru:Графический процессор]]
[[sk:Grafický procesor]]
[[sl:Grafični procesor]]
[[sv:Grafikprocessor]]
[[tr:GPU]]
[[zh-yue:GPU]]
[[zh:圖形處理器]]
