Steinbuch Center for Computing

[Image: Data center on the south campus]

The Steinbuch Center for Computing (SCC for short) is an institute and the central computing center of the Karlsruhe Institute of Technology (KIT).

Naming

The SCC was named after Karl Steinbuch, a pioneer in the field of adaptive machines. From 1958 until his retirement in 1980 he was professor and director of the Institute for Information Processing and Transmission at what was then the University of Karlsruhe.

History

(Selection)

  • The then Technical University (TH) of Karlsruhe received its first computer, a Zuse Z22, in 1958. The system was operated by the Institute for Applied Mathematics (Prof. Johannes Weissinger), which can be regarded as the nucleus of the SCC.
  • In 1966 the computing center of the TH Karlsruhe was founded.
  • From January 1994 the then University of Karlsruhe took over the administration and registration of the .de domains for three years; technical operation remained with the university until 1999. For a short time it also administered the domains of the People's Republic of China (see Internet in the People's Republic of China).
  • In February 2008 the previously independent data centers of the University of Karlsruhe and the Forschungszentrum Karlsruhe (Karlsruhe Research Center) merged to form the Steinbuch Center for Computing.
  • In May 2010 the SCC began upgrading the KIT WLAN to IEEE 802.11n. Since then, devices can connect to the KIT WLAN wirelessly according to the IEEE 802.11b/g standards in the 2.4 GHz band and the IEEE 802.11a/n standards in the 5 GHz band.

Tasks

As a central facility, the SCC operates the information-processing infrastructure of the Karlsruhe Institute of Technology. This includes the connections to the local student dormitories, to the German research network (DFN) and to the state research network BelWü, as well as the operation of the campus WLAN. The data center also runs ten computer pool rooms, one of which is open Monday through Saturday, and a print and media output service for students. The SCC furthermore operates several HPC systems. In addition, individual faculties and institutes (e.g. computer science and physics) run their own pool rooms and clusters.

HPC computer systems

The SCC currently operates three parallel computer systems for different user groups.

Current

  • since January 2014: bwUniCluster with 512 compute nodes with 16 cores and 332.8 GFLOPS each plus 352 compute nodes with 28 cores and 770 GFLOPS each; total theoretical peak performance 44 TFLOPS and a total of 86 TB of main memory
  • since September 2014: research high-performance computer ForHLR I with 512 compute nodes, each with 20 cores and 400 GFLOPS; total theoretical peak performance 216 TFLOPS and a total of 41.1 TB of main memory
  • since April 2016: research high-performance computer ForHLR II with 1152 compute nodes, each with 20 cores and 832 GFLOPS; total theoretical peak performance 1 PFLOPS and a total of 95 TB of main memory. At the time of procurement it ranked 126th in the TOP500 (a sample peak-performance calculation follows this list).
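
The total theoretical peak figures quoted above follow, at least approximately, from multiplying the node count by the per-node peak performance; deviations can occur where a system includes additional service or fat nodes not listed here. A minimal sketch of this calculation in Python, using only the ForHLR II figures from the list (1152 nodes at 832 GFLOPS each):

    # Theoretical peak performance as node count x per-node peak.
    # Figures are taken from the ForHLR II entry above; other systems may
    # include extra service or fat nodes, so their totals need not match exactly.
    nodes = 1152
    gflops_per_node = 832.0            # theoretical peak per compute node in GFLOPS

    total_gflops = nodes * gflops_per_node
    total_pflops = total_gflops / 1e6  # 1 PFLOPS = 10^6 GFLOPS

    print(f"{total_gflops:,.0f} GFLOPS = {total_pflops:.2f} PFLOPS")
    # -> 958,464 GFLOPS = 0.96 PFLOPS, rounded to about 1 PFLOPS in the list above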

Former

  • 2010 to January 2017: KIT high-performance computer HP XC3000 (342 compute nodes [incl. 10 service nodes] with 81 GFLOPS each; total theoretical peak performance 27.04 TFLOPS, 10.3 TB main memory)
  • 2007 to 2013: state high-performance computer HP XC4000 (772 compute nodes with four cores each and 20.8 or 41.6 GFLOPS; total theoretical peak performance 15.77 TFLOPS, approx. 12 TB main memory)
  • 2008 to 2013: InstitutsCluster 1, jointly procured by several institutes (206 compute nodes with eight cores and 85.3 GFLOPS each; total theoretical peak performance 17.57 TFLOPS, 3.3 TB main memory)
  • 2007 to 2013: vector parallel computer NEC SX-8R (8 vector processors with 36 GFLOPS each; total theoretical peak performance 288 GFLOPS, 256 GB main memory)
  • 2008 to 2013: vector parallel computer NEC SX-9 (16 vector processors with 100 GFLOPS each; total theoretical peak performance 1.6 TFLOPS, 1 TB main memory)
  • 2012 to 2017: InstitutsCluster 2, jointly procured by several institutes (480 compute nodes, each with 16 cores and 332.8 GFLOPS; total theoretical peak performance 135.5 TFLOPS, 28.3 TB main memory)

Coordinates: 49° 0′ 43.5″ N, 8° 24′ 28.9″ E