Partitioned Global Address Space

from Wikipedia, the free encyclopedia

Partitioned Global Address Space (PGAS) is a programming model, especially for multiprocessor systems and massively parallel computers. The global address space of main memory is logically partitioned, and each processor is assigned one of these partitions as its local memory. Nevertheless, every processor can access any memory cell, with accesses to local memory being much faster than accesses to the memory of other processors.
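The partitioning and the cost asymmetry described above can be sketched in a few lines. This is a hypothetical illustration, not real PGAS code: the names `owner` and `access_cost`, the block size, and the latency values are assumptions chosen for the example.

```python
# Sketch of a PGAS memory model (illustration only): a global array of
# NCELLS cells is partitioned block-wise among NPROCS processors.
NPROCS = 4
NCELLS = 16
BLOCK = NCELLS // NPROCS

def owner(i):
    """Processor whose partition holds global cell i (block distribution)."""
    return i // BLOCK

def access_cost(proc, i, local=1, remote=100):
    """Relative cost for processor `proc` to read global cell i:
    cheap if the cell lies in its own partition, expensive otherwise.
    The values 1 and 100 are arbitrary stand-ins for the latency gap."""
    return local if owner(i) == proc else remote

# Processor 0 reads its own cell 3 cheaply, but pays a penalty for
# cell 12, which resides in processor 3's partition.
print(access_cost(0, 3))   # -> 1
print(access_cost(0, 12))  # -> 100
```

Every processor can address every cell; only the cost differs. This is exactly the property that distinguishes PGAS from pure message passing on the one hand and from uniform shared memory on the other.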

An important advantage of PGAS languages is that data sets too large for the memory of a single processor can easily be placed in the distributed memories of many processors without giving up a uniform description of the data.

The programmer is faced with the task of achieving maximum program performance by defining and assigning the program variables appropriately. In particular, the load must be distributed evenly among all participating processors. A program written purely with PGAS constructs is sometimes hard to follow, because the assignment of variables to processors does not appear explicitly in the program but happens implicitly according to fixed rules. PGAS languages therefore offer additional means of synchronizing processes.
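How strongly the implicit distribution rule affects load balance can be seen in a small sketch. The scenario is invented for illustration: work is attached only to the first half of the global indices, so a block distribution leaves half of the processors idle while a cyclic one spreads the load evenly.

```python
# Hypothetical sketch: the choice of distribution rule decides the load
# balance. Only the first half of the indices carries any work here.
NPROCS = 4
N = 16
work = [1 if i < N // 2 else 0 for i in range(N)]

def load(dist):
    """Total work assigned to each processor under distribution `dist`."""
    per_proc = [0] * NPROCS
    for i, w in enumerate(work):
        per_proc[dist(i)] += w
    return per_proc

block = lambda i: i // (N // NPROCS)   # contiguous blocks of indices
cyclic = lambda i: i % NPROCS          # round-robin over processors

print(load(block))   # -> [4, 4, 0, 0]  two processors stay idle
print(load(cyclic))  # -> [2, 2, 2, 2]  evenly balanced
```

Since the mapping happens implicitly, the programmer influences it only through the declared distribution of the variables, which is why choosing it well is central to performance.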

Properties

A PGAS system has the following properties:

  • It consists of a set of processors with local memory. Parts of this local memory can be declared private, so that they are not visible to other processors.
  • It has a mechanism by which at least part of each processor's local memory can be shared with other processors. Such sharing inevitably introduces delays in memory access, which depend on the access method and on where the shared memory cell is located.
  • Each shared memory cell is associated with a thread (affinity): either locally, so that it can be accessed quickly, or remotely, so that access takes longer. This association is visible to the programmer, who can exploit it to write high-performance code.

Programming languages

All PGAS programming languages share the properties described above, even though they implement them differently. Each language makes it possible to distinguish between private and shared memory and to determine the affinity of shared memory cells. Some PGAS languages offer distributed loops or special syntax for accessing remote data.

The processors execute ordinary threads. Synchronization is implicit when variables are accessed; explicit synchronization by the programmer is also possible.

  • Unified Parallel C (UPC) is derived from C. It extends C with shared arrays and shared pointers that address the global address space. UPC also has a forall loop (upc_forall) that distributes iterations according to affinity.
  • Coarray Fortran (CAF) is based on Fortran. It extends Fortran arrays so that other processes can access them as well. A variant of Coarray Fortran is included in Fortran 2008, making it the only PGAS language that is part of an ISO standard.
  • Titanium is based on Java. It offers SPMD parallelism, pointers to shared data, and an advanced model for distributed arrays.
  • Fortress supports parallel programming both for multi-core processors and for supercomputers. Fortress has a mathematics-like syntax, exploits parallelism wherever possible, and serves as a framework for creating domain-specific languages.
  • Chapel is developed by Cray. It pursues four goals: multithreading, locality awareness, object orientation, and generic programming.
  • X10 exists in two implementations, one based on Java and one based on C++. X10 offers task and data parallelism; its essential feature is asynchronous task management.
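
The affinity-driven loops mentioned above (such as UPC's forall) follow an "owner computes" rule: every thread walks the full index range but executes only the iterations whose affinity matches its own rank. The following is a sequential Python sketch of that pattern, not real UPC; the cyclic affinity function and all names are assumptions made for the example.

```python
# Sketch of the "owner computes" rule behind affinity-driven loops:
# each processor executes only the iterations it has affinity to,
# so all its accesses in the loop body are local.
NPROCS = 4
N = 12
src = list(range(N))
dst = [0] * N

def owner(i):
    """Cyclic affinity of iteration i, chosen for illustration."""
    return i % NPROCS

def forall(rank):
    """Iterations executed by processor `rank` under the affinity test."""
    for i in range(N):
        if owner(i) == rank:        # affinity check, one owner per index
            dst[i] = 2 * src[i]     # purely local work, no remote access

for rank in range(NPROCS):          # sequential stand-in for parallel threads
    forall(rank)

print(dst)  # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22]
```

Because each index has exactly one owner, every element is written exactly once and no two processors touch the same cell, which is what makes the implicit synchronization of such loops safe.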
