North German Association for High- and High-Performance Computing

from Wikipedia, the free encyclopedia
Logo of the North German Association for High- and High-Performance Computing (HLRN)
Federal states involved in the HLRN: Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Western Pomerania, Lower Saxony and Schleswig-Holstein; operator centers in Berlin and Göttingen

The North German Association for the Promotion of High- and High-Performance Computing (HLRN) was founded in 2001 through an administrative agreement between the six northern German states of Berlin, Bremen, Hamburg, Mecklenburg-Western Pomerania, Lower Saxony and Schleswig-Holstein, with the aim of jointly procuring and operating a computer system for tackling Grand Challenges. At the end of 2012, Brandenburg also joined the HLRN network.

The HLRN jointly operates a distributed supercomputer system at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (Zuse Institute Berlin, ZIB) and at the Georg-August-Universität Göttingen (since 2018; previously at Leibniz Universität IT Services, Hanover).

Organization and structure

The highest body of the HLRN is the Administrative Council, which is composed of one representative from each of the responsible ministries or senate authorities of the member states. The Administrative Council decides on all joint matters of fundamental importance.

The standing commissions of the HLRN are the Scientific Committee and the Technical Commission. The Scientific Committee decides on the approval of projects and the allocation of computing resources (contingents). The Technical Commission advises the Administrative Council and the operating centers on planning and on all technical issues. The HLRN consultants support users and their projects in a supraregional competence network.

HLRN-IV

Signing of the contract for the HLRN-IV at the Zuse Institute Berlin; from left to right: Martin Matzke (Senior VP Atos Germany), Christof Schütte (President of the Zuse Institute Berlin), Steffen Krach (State Secretary for Science and Research, Berlin) and Alexander Reinefeld (Head of Parallel and Distributed Systems at the Zuse Institute Berlin)

In spring 2018, the purchase contracts for the new high-performance computer systems were signed at the two locations in Berlin and Göttingen. The new HLRN-IV systems from Atos/Bull, worth 30 million euros, will be installed in two phases (phase 1 in 2018, phase 2 in 2019) and are funded in equal halves by the seven HLRN states (Berlin, Brandenburg, Bremen, Hamburg, Mecklenburg-Western Pomerania, Lower Saxony and Schleswig-Holstein) and the federal government. The HLRN-IV replaces the previous HLRN-III systems and, with almost a quarter of a million computing cores and a computing power of around 16 PetaFlop/s, increases the available computing power roughly six-fold. It thus offers researchers the opportunity for even more precise model calculations, e.g. in environmental research, in the life, materials and engineering sciences, and in basic research in physics, chemistry and mathematics.

Signing of the contract for the HLRN-IV at the Georg-August-Universität Göttingen; Thomas Theissen (authorized signatory, Atos) on the left, Sabine Johannsen (State Secretary for Science and Culture, Lower Saxony) in the middle, Ulrike Beisiegel (President of the University of Göttingen) on the right

The HLRN association continues the concept, tried and tested since the installation of the HLRN-I system in 2002, of two operating centers in the states of Berlin and Lower Saxony. These are the Zuse Institute Berlin and the Georg-August-Universität Göttingen, which, thanks to the many years of experience of the Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen in operating and further developing high-performance computers, ensure transparent, competent system-level support for users.

Usage and applications

The HLRN system can be used at the request of scientists from universities and research institutions in the seven northern German states. The system is available to other user groups on request and for a fee.

The computing resources are primarily used for numerical model calculations in the main application areas of environmental research, climate and marine sciences, engineering applications such as aerodynamics and fluid dynamics (e.g. aircraft and shipbuilding), and basic sciences such as physics, chemistry and the life sciences. Areas of application include engine development, hurricane forecasting, the optimization of wind turbines, and climate and marine research.

From the users' point of view, the system distributed across the two locations in Berlin and Hanover appears as one large complex (single-system property). For example, a common batch system distributes computing jobs between the two HLRN-III complexes. User administration, accounting and system administration are centralized.

History

HLRN-I

From 2002 to 2008 the HLRN network operated an IBM pSeries 690 (Regatta) parallel computer, distributed symmetrically across the two locations in Berlin and Hanover, with a total of 1024 IBM Power4 processors. The two locations were connected to one another via a dedicated fiber optic link (HLRN link). In the TOP500 list of November 2002, the two locations took 44th and 45th place.

HLRN-II

HLRN-II computer at the Hanover location

From 2008 to the beginning of 2014, the HLRN network operated a computer system from SGI, likewise distributed symmetrically over the two locations, which were connected to one another via a dedicated 10GigE fiber optic link.

In a first expansion stage, 44 SGI Altix XE 250 servers and 320 SGI Altix ICE 8200+ blades were installed at both locations in mid-2008. Each compute node was equipped with two Intel Xeon E5472 quad-core (Harpertown) processors clocked at 3.0 GHz. The main memory was 64 GB (Altix XE) or 16 GB (Altix ICE 8200+) per node. All compute nodes were connected via a 4x DDR InfiniBand network for communication and data exchange with one another via MPI, as well as via a second 4x DDR InfiniBand network to a 405 TB global background storage system based on Lustre, and were operated with the free Linux operating system. The total main memory of the first stage was 8 TB, the maximum computing power (peak performance) 35.3 TFlop/s.

In a second expansion stage in mid-2009, four additional SGI Altix XE 250 servers and 960 SGI Altix ICE 8200+ blades were installed at each location; the new blades were each equipped with two Intel Xeon X5570 quad-core (Nehalem-EP, Gainestown) processors clocked at 2.93 GHz and 48 GB of main memory, and connected via 4x DDR dual-rail InfiniBand. The global background storage system was expanded to 810 TB per location. The total main memory of the second stage was 45 TB, the maximum computing power (peak performance) 90 TFlop/s.
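
The quoted peak performance can be reproduced from the listed hardware. The sketch below reads the stage-2 figures as per-location values and assumes 4 double-precision FLOPs per cycle and core for this processor generation; that FLOPs-per-cycle value is an assumption, not a figure from the text:

```python
# Back-of-the-envelope check of the quoted 90 TFlop/s peak per location.
# Assumption: 4 double-precision FLOPs per cycle and core (SSE, Nehalem-EP).
blades = 960           # SGI Altix ICE 8200+ blades per location
sockets_per_blade = 2  # Xeon X5570 processors per blade
cores_per_socket = 4   # quad-core
clock_ghz = 2.93
flops_per_cycle = 4

cores = blades * sockets_per_blade * cores_per_socket
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000

print(cores, round(peak_tflops, 1))  # 7680 90.0
```

The stated 45 TB of main memory is likewise roughly consistent with 960 blades at 48 GB each (about 46 TB per location).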

In the TOP500 list of November 2009, the two locations took 39th and 40th place.

In a third expansion stage, five SGI Altix UV 1000 systems were installed at each of the two locations in mid-2010. Four of these systems per location each comprised 32 blades with two Intel Xeon X7560 octo-core (Nehalem-EX, Beckton) processors clocked at 2.26 GHz and 64 GB of main memory. The 32 blades of each system were connected to one another via SGI NUMAlink 5 and thus, as a NUMA system, provided a global address space of 2 TB. The fifth system comprised 24 blades with a total of 1.5 TB of global main memory.

In addition, the overall system comprised around 20 TB of storage for home directories, replicated synchronously between the two locations, as well as two archive storage systems based on Oracle's StorageTek SL8500 tape library with a total of 6.5 PB of storage capacity.

HLRN-III

Supercomputer Gottfried of the HLRN-III system at the beginning of 2014 at Leibniz University IT Services in Hanover

The third generation of the HLRN went into operation in autumn 2013. In mid-January 2014, phase 1 of the 2.6 PetaFlop/s system from Cray was officially inaugurated. The HLRN-III complexes are located at Leibniz University IT Services (LUIS, formerly RRZN) in Hanover and at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB) and are connected to one another via a dedicated 10GigE fiber optic link. The complex in Hanover is named Gottfried after the polymath and mathematician Gottfried Wilhelm Leibniz; the complex in Berlin bears the name Konrad after the computer pioneer Konrad Zuse.

In the first phase, the two Cray XC30 computers each comprised 744 dual-socket compute nodes, i.e. 1488 Intel Xeon E5-2695v2 processors per system, with a total of 93 TB of main memory; the nodes are connected via a fast Cray Aries network with dragonfly topology. Compared to the previous HLRN-II system, they offered twice the computing power.

HLRN-III supercomputer Konrad at the Zuse Institute Berlin after the expansion in autumn 2014

In a second expansion phase in autumn 2014, the HLRN-III was extended to a total of 3552 compute nodes with 85248 compute cores (1872 nodes in Berlin and 1680 in Hanover), a total peak performance of 2.7 PetaFlop/s and 222 TB of main memory. This increased the computing power roughly tenfold compared to the previous HLRN-II system.
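
These totals can be cross-checked from the node counts: 744 nodes per site came from the first phase, and the remainder were added in autumn 2014. The sketch below assumes the expansion nodes were likewise dual-socket 12-core processors (which the quoted core count implies) of the Haswell generation; the clock rates and FLOPs-per-cycle values are assumptions based on typical specifications for these processor generations, not figures from the text:

```python
# Cross-check of the HLRN-III totals quoted above.
# Assumptions (not stated in the text): the 2014 expansion nodes carry
# dual-socket 12-core Haswell CPUs at 2.5 GHz with 16 FLOPs/cycle (AVX2+FMA);
# the phase-1 Ivy Bridge E5-2695v2 runs at 2.4 GHz with 8 FLOPs/cycle (AVX).
ivy_nodes = 2 * 744                            # phase-1 nodes, both sites
expansion_nodes = (1872 - 744) + (1680 - 744)  # nodes added in autumn 2014
cores_per_node = 2 * 12                        # dual-socket, 12 cores each

total_nodes = ivy_nodes + expansion_nodes
total_cores = total_nodes * cores_per_node

# Theoretical peak: cores x clock (GHz) x FLOPs per cycle, converted to PFlop/s
peak_pflops = (ivy_nodes * cores_per_node * 2.4 * 8
               + expansion_nodes * cores_per_node * 2.5 * 16) / 1e6

print(total_nodes, total_cores, round(peak_pflops, 2))  # 3552 85248 2.67
```

Under these assumptions the arithmetic reproduces the quoted figures: 3552 nodes, 85248 cores and roughly 2.7 PetaFlop/s peak performance.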

The massively parallel supercomputers are complemented by service nodes for login, data management, and memory-intensive pre- and post-processing. At the Hanover location, 64 SMP nodes with four Intel Xeon processors and 256 or 512 GB of main memory each are available for less scalable, memory-intensive applications. From the supercomputers, the SMP components and the service nodes, users can access a parallel file system (Lustre) with a total capacity of 2.7 PB.

In the TOP500 list of November 2014, the two supercomputers occupied positions 51 and 61 (positions 118 and 150 in November 2017).

The investment costs of 30 million euros are borne in equal halves by the seven federal states involved in the HLRN and the federal government. At the end of 2014, more than 850 scientists were using the computing power of the HLRN-III in over 150 major projects.

HLRN-IV

In autumn 2019, the fourth generation of the HLRN, with the two components "Lise" and "Emmy", went into operation at the Berlin and Göttingen locations. The system contains around a quarter of a million computing cores and has a peak performance of 16 quadrillion computing operations per second (16 PetaFlop/s). The two system components are named after the physicist Lise Meitner and the mathematician Emmy Noether.
