Basic Linear Algebra Subprograms

BLAS
Basic data

Publishing year: 1979
Current version: 3.8.0 (November 12, 2017)
Operating system: cross-platform
Programming language: Fortran
Category: linear algebra
Website: www.netlib.org/blas

Basic Linear Algebra Subprograms (BLAS) refers to a program library that provides elementary operations of linear algebra such as vector and matrix multiplications. There are a variety of implementations in different languages and for different architectures and platforms. The advantage of BLAS lies not only in reusability and reduced effort for the programmer, but also in the possibility for hardware manufacturers to provide highly optimized implementations, some in assembly language, of these frequently used subroutines. For this reason BLAS is widely used, especially in supercomputing.

The definition of the BLAS interface is a de facto standard.

Functionality

The following data types are supported: single precision real (32-bit), double precision real (64-bit), single precision complex (2 × 32-bit) and double precision complex (2 × 64-bit). The data type of a routine is encoded in a one-letter prefix of its name: S, D, C and Z, respectively.
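
A minimal Fortran sketch illustrating the type prefixes, assuming a BLAS library (for example the Netlib reference implementation) is linked in; saxpy and daxpy are the single and double precision variants of the same Level-1 routine:

    program prefix_demo
      implicit none
      real             :: xs(2), ys(2)
      double precision :: xd(2), yd(2)
      xs = (/ 1.0, 2.0 /)
      ys = 0.0
      xd = (/ 1.0d0, 2.0d0 /)
      yd = 0.0d0
      ! The leading letter of the routine name selects the data type:
      ! S = single, D = double, C = complex, Z = double complex.
      call saxpy(2, 2.0,   xs, 1, ys, 1)   ! single precision:  ys := 2*xs + ys
      call daxpy(2, 2.0d0, xd, 1, yd, 1)   ! double precision:  yd := 2*xd + yd
      print *, ys   ! 2.0 4.0
      print *, yd   ! 2.0 4.0
    end program prefix_demo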

The functionality of BLAS is divided into three levels:

Level 1

Level 1 contains vector operations of the form

    y ← α·x + y

as well as the scalar product and vector norms.
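
A minimal sketch of typical Level-1 calls, again assuming a linked BLAS library; daxpy, ddot and dnrm2 are standard routines of this level:

    program level1_demo
      implicit none
      double precision :: x(3), y(3), alpha, dot, nrm
      double precision, external :: ddot, dnrm2
      x = (/ 1.0d0, 2.0d0, 3.0d0 /)
      y = (/ 4.0d0, 5.0d0, 6.0d0 /)
      alpha = 2.0d0
      call daxpy(3, alpha, x, 1, y, 1)   ! y := alpha*x + y  -> (6, 9, 12)
      dot = ddot(3, x, 1, y, 1)          ! scalar product of x and y
      nrm = dnrm2(3, x, 1)               ! Euclidean norm of x
      print *, y, dot, nrm
    end program level1_demo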

Level 2

Level 2 contains matrix-vector operations of the form

    y ← α·A·x + β·y

as well as the solution of linear systems of equations with a triangular matrix.
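
A minimal sketch of Level-2 calls, assuming a linked BLAS library: dgemv computes a general matrix-vector product and dtrsv solves a triangular system, as described above:

    program level2_demo
      implicit none
      double precision :: a(2,2), x(2), y(2)
      ! A stored column by column: A = [ 1 2 ]
      !                                [ 0 3 ]
      a = reshape((/ 1.0d0, 0.0d0, 2.0d0, 3.0d0 /), (/ 2, 2 /))
      x = (/ 1.0d0, 1.0d0 /)
      y = 0.0d0
      ! y := 1*A*x + 0*y  (general matrix-vector product, DGEMV)
      call dgemv('N', 2, 2, 1.0d0, a, 2, x, 1, 0.0d0, y, 1)
      print *, y   ! expected: 3.0  3.0
      ! Solve A*z = y for the upper triangular A (DTRSV); y is overwritten by z
      call dtrsv('U', 'N', 'N', 2, a, 2, y, 1)
      print *, y   ! expected: 1.0  1.0
    end program level2_demo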

Level 3

Level 3 contains matrix-matrix operations of the form

    C ← α·A·B + β·C

as well as, among other things, the solution of triangular systems of equations with a matrix of right-hand sides. This level contains the frequently required general matrix multiplication.
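
A minimal sketch of a Level-3 call, assuming a linked BLAS library; dgemm is the standard general matrix-matrix multiplication routine of this level:

    program level3_demo
      implicit none
      double precision :: a(2,2), b(2,2), c(2,2)
      a = reshape((/ 1.0d0, 3.0d0, 2.0d0, 4.0d0 /), (/ 2, 2 /))   ! A = [1 2; 3 4] (column-major)
      b = reshape((/ 5.0d0, 7.0d0, 6.0d0, 8.0d0 /), (/ 2, 2 /))   ! B = [5 6; 7 8]
      c = 0.0d0
      ! C := 1*A*B + 0*C  (general matrix-matrix multiplication, DGEMM)
      call dgemm('N', 'N', 2, 2, 2, 1.0d0, a, 2, b, 2, 0.0d0, c, 2)
      print *, c(1,:)   ! expected: 19.0  22.0
      print *, c(2,:)   ! expected: 43.0  50.0
    end program level3_demo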

Implementations

refblas
The official reference implementation from Netlib. C and Fortran 77 versions are available.
ACML
AMD Core Math Library, for AMD Athlon and Opteron CPUs under Linux and Windows.
ATLAS
Automatically Tuned Linear Algebra Software, an open-source implementation of the BLAS APIs for C and Fortran 77.
ESSL
IBM's Engineering and Scientific Subroutine Library, for the PowerPC architecture under AIX and Linux.
Goto BLAS
Kazushige Goto's BLAS implementation.
OpenBLAS
Further development of Goto BLAS for modern processors.
HP MLIB
HP's math library, for the Itanium, PA-RISC, x86 and Opteron architectures under HP-UX and Linux.
IMKL
Intel Math Kernel Library (BLAS, LAPACK, ScaLAPACK, Sparse Solvers, FFT, Vector Math), for Intel processors under Linux, macOS and Windows.
LAMA
The Library for Accelerated Math Applications (LAMA) is a C++ template library for writing numerical solvers for different hardware (e.g. GPUs via CUDA or OpenCL) in systems with distributed memory; it hides the hardware-specific programming from the application developer.
MTL4
The Matrix Template Library version 4 is a generic C++ template library with partial and full BLAS functionality. MTL4 offers an intuitive interface (similar to MATLAB) and, thanks to generic programming, broad applicability.
uBLAS
A generic C++ class library that provides BLAS functionality; part of the Boost libraries.
CUBLAS
Nvidia's implementation of BLAS for its CUDA technology.
clBLAS
An implementation of BLAS for OpenCL .
AmdBLAS
AMD's implementation of BLAS for the ATI Stream SDK.
Armadillo
Armadillo is a C++ template library that wraps BLAS routines.

