OpenHPC
The OpenHPC initiative is a collaborative project under the umbrella of the Linux Foundation. Its aim is to develop and distribute an easily installable and usable software environment (framework) for high-performance computing. The components needed for the basic setup of an HPC compute cluster are available as open source; additional, optional components (the Intel C/Fortran compilers, various performance-analysis tools) require licenses from their respective vendors.
Founding
The project was founded on November 12, 2015, during SC '15, by the companies and research institutions listed under Founding members below.
Aims
The project aims for OpenHPC to
- enable greatly simplified use of HPC technologies, as a robust and diverse system made up of mutually integrated components
- provide a stable software environment for testing and validating a wide range of HPC applications
- reduce, as an integrated open-source framework, the overall cost of commissioning one's own HPC environment
- cover many HPC applications through flexible configuration management and mutually compatible software packages, while still allowing integration with other HPC components
Setting up a compute cluster requires considerable administrative work. Specialized system administrators plan
- which cluster management software to use to install and manage the cluster,
- which tools to use to monitor it, and
- which resource manager (job scheduler) to use to feed it with users' jobs (subtasks).
They then install and configure the selected software on the master node(s), which is used to enroll the compute nodes, provision them with an operating system, and put them into operation.
OpenHPC aims to simplify these processes by providing, from a single repository, one software stack consisting of largely standardized and mutually tested components, which is immediately executable and allows the compute nodes to be put into operation right away. The OpenHPC project has already done most of the integration work for the various components, so that only a small configuration effort remains for the operator.
First release
On February 5, 2016, "OpenHPC 1.0.1" was published as the first release of the framework.
Supported Operating Systems
At the moment, OpenHPC is developed and tested only against CentOS 7.1. However, it can be assumed that it also runs on compatible Linux distributions (Red Hat Enterprise Linux or Scientific Linux).
Use
Since HPC generally follows the "scale-out" principle (distributing complex computations across several compute nodes), a master node is required that supplies the compute nodes with the OpenHPC software and later with subtasks.
To use OpenHPC, it suffices to add the OpenHPC repository to the supported base operating system of the master node and then download and install the integrated software stack.
The components are installed under /opt/ohpc/, which is intended to be a directory shared (via NFS) between the master node and all compute nodes.
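The procedure described above can be sketched as follows for a CentOS 7 master node. This is a hedged sketch, not an official recipe: the meta-package names follow the pattern used in the OpenHPC 1.x install guides, and the release-RPM URL is deliberately left as a placeholder to be taken from the project's documentation.

```shell
# Sketch of an OpenHPC master-node setup on CentOS 7 (run as root).
# Package names follow the OpenHPC 1.x install recipes; verify names,
# versions, and the release-RPM URL against the current install guide.

# 1. Register the OpenHPC repository (URL is a placeholder here).
yum install -y <url-of-ohpc-release-rpm>

# 2. Install the integrated base stack plus provisioning (Warewulf) and
#    scheduler (SLURM server) meta-packages from the single repository.
yum install -y ohpc-base ohpc-warewulf ohpc-slurm-server

# 3. Export /opt/ohpc via NFS so that the compute nodes see the same
#    software tree as the master node.
echo "/opt/ohpc *(ro,no_subtree_check)" >> /etc/exports
exportfs -a
```

After this, the compute nodes can be enrolled and provisioned with Warewulf, as described in the component list below.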
The components used in the OpenHPC project are:
- Warewulf as an image-based compute node management system
- SLURM as a resource manager / batch scheduler
- MPI libraries (Intel-MPI, MVAPICH2, OpenMPI)
- various scientific libraries such as FFTW, HDF5 (with serial and parallel I/O support) and the GNU Scientific Library (GSL)
- InfiniBand drivers and tools (OFED, PSM, RDMA, IPoIB)
- Lustre client for the distributed file system
- Nagios client for availability monitoring
- Ganglia for performance and utilization monitoring of the cluster
- Intel Cluster Checker for diagnosing hardware and software problems (license required)
- Intel Parallel Studio XE C/C++ and Fortran compilers (may require a license)
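With SLURM as the resource manager, users later hand subtasks to the cluster as batch jobs. A minimal job script might look as follows; the job name, resource requests, and the `./hello` binary are illustrative assumptions, and the exact partitions and limits depend on the site configuration:

```shell
#!/bin/bash
# Minimal SLURM batch-job sketch; all values are examples.
#SBATCH --job-name=hello       # job name shown in the queue
#SBATCH --nodes=2              # request two compute nodes
#SBATCH --ntasks-per-node=4    # four MPI ranks per node
#SBATCH --time=00:05:00        # wall-clock limit

# Launch a (hypothetical) MPI program across the allocated nodes;
# srun is SLURM's parallel job launcher.
srun ./hello
```

The script would be submitted with `sbatch job.sh`, and its progress monitored with `squeue`.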
Founding members
(in alphabetic order)
- Allinea Software
- Altair Engineering
- ANSYS
- Argonne National Laboratory
- Atos
- Barcelona Supercomputing Center
- The Center for Research in Extreme Scale Technologies at Indiana University
- Cray
- Dassault Systemes SIMULIA
- Dell
- Fujitsu Systems Europe
- Hewlett Packard Enterprise
- Intel Corporation
- Jülich Supercomputing Center
- Lawrence Berkeley National Laboratory
- Lawrence Livermore National Laboratory
- Leibniz Supercomputing Center
- Lenovo
- Los Alamos National Laboratory
- MSC Software
- NEC
- Oak Ridge National Laboratory
- Pacific Northwest National Laboratory
- ParTec
- Penguin Computing
- Pittsburgh Supercomputing Center
- Sandia National Laboratories
- SENAI CIMATEC
- SUSE
- Texas Advanced Computing Center
Web links
- OpenHPC project website
- GitHub page of the project
- HPCwire: Altair plans to open source its PBSPro HPC workload manager
- Announcement of incorporation
- Heise Developer on the openHPC project
- The Inquirer on the openHPC project
- ZDNet on the openHPC project
- HPCwire on the openHPC project
- Scientific Computing on the openHPC project
- The UK's The Register on openHPC
- Intel Blogs: OpenHPC as "grab-and-go meals with recipes" for quickly commissioning modern compute clusters