Data center

Data center at CERN
Servers in the CERN data center
Hetzner Online GmbH data center park

A data center (German: Rechenzentrum, abbreviated RZ) refers both to the building or premises housing the central computing equipment (e.g. the computers themselves, but also the infrastructure required to operate them, such as the power supply) of one or more companies or organizations, and to the organization that takes care of these computers. It is therefore of central importance for the use of information technology in companies, public administrations and other institutions. Requirements for the technical and organizational measures as well as for the construction and operation of data centers are described in DIN EN 50600.

The common German abbreviation is RZ; depending on the organization, ZER (central facility for computer systems) can also denote a data center.

In the companies and state institutions of the GDR, such facilities were often attached to the organization departments and were referred to as organization and computing centers, or ORZ for short.

Tasks

Numerous servers of the Wikimedia Foundation in several racks

Data centers have traditionally been attached to an administrative body, for example a tax administration, a research institution, a university, or a commercial operation such as a bank or an insurance company. These bodies need to process large amounts of data, such as the tax returns of all citizens of a federal state. This required extensive machine equipment, which could only be maintained economically when concentrated in a data center. Nowadays, outsourcing the data center is a common and successfully practiced option.

In the pre-PC era, the state created so-called regional computing centers, whose task was to provide computing capacity for state institutions. Most of these regional computing centers were subordinate to the state offices for statistics and data processing; occasionally they were attached to universities.

Modern data centers provide a highly redundant infrastructure in which servers can operate with minimal planned downtime. All systems required for operation are provided multiple times. For example, air conditioning units deliver the cooling that the high-performance computers urgently need, but more units are installed than would be required for the heat emitted in normal operation; individual units can thus be serviced regularly without affecting overall operation. Meeting the same requirement for the power supply is considerably more complex. Modern high-quality servers usually have two power supply units, each of which can power the entire server on its own. These power supply units are cross-cabled to different power feeds, so that one side of the power supply can be serviced without disturbing the servers. Each feed has its own UPS and its own emergency power system, so that their maintenance likewise causes no downtime. Devices with only one power supply unit would be a single point of failure unless they are connected to the live feed via an automatic transfer switch. The installation is completed by a double (redundant) connection of the power supply to different transformers and separate grid areas of the local energy utility. Although a failure only becomes possible after three to five consecutive faults, maintenance work on this infrastructure remains a critical intervention that must be carefully planned and coordinated, since mistakes by the infrastructure administration cannot yet be intercepted automatically here.
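
The effect of this layered redundancy can be illustrated with a short, simplified calculation. The following sketch assumes an illustrative availability for a single power path and treats the paths as statistically independent, which is an idealization:

```python
# A minimal sketch (availability figure assumed, not from the article) of why
# redundant power paths help: all n independent paths have to fail at the
# same time before a dual-corded server loses power.

def parallel_availability(path_availability: float, paths: int) -> float:
    """Combined availability of n redundant, independent supply paths."""
    return 1.0 - (1.0 - path_availability) ** paths

if __name__ == "__main__":
    a = 0.999  # assumed availability of one path (feed + UPS + generator)
    for n in (1, 2, 3):
        print(f"{n} path(s): {parallel_availability(a, n):.7%}")
```

With two paths, an assumed single-path availability of 99.9% already becomes 99.9999%, which is why one side of the power supply can be taken down for maintenance without downtime.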

Economic significance

Data centers are a basic requirement for digitization and have a major influence on economic development. They are regarded as indispensable for meeting the growing demand for digital applications and secure more than 200,000 jobs in Germany. Specifically, German data centers employed 130,000 full-time staff in 2017, and a further 85,000 jobs depended directly on data centers.

Every year, over 8 billion euros are invested in Germany in the construction, modernization and IT equipment of data centers. Almost 7 billion euros of this goes into IT hardware, and a little over 1 billion euros into the construction and modernization of data center buildings, building services and security.

The industry has recorded double-digit growth rates for years. Despite increasing investment, however, market shares are shrinking both globally, especially relative to Asia and North America, and within Europe, where growth in Germany cannot keep pace with Scandinavia and the Netherlands. One of the main reasons is high electricity costs: data centers in Germany bear the highest burden in Europe, particularly for ancillary electricity costs in the form of taxes, levies and network charges. The largest of these is the EEG surcharge, because data centers cannot be even partially exempted under the special compensation rules, even when they purchase large amounts of electricity or are classified as critical infrastructure, since they are not listed as an electricity-cost- or trade-intensive sector in the Renewable Energy Sources Act. Because this situation has persisted for many years, data center capacity is increasingly being relocated abroad. Between 2010 and 2020, the global market share of German data centers is forecast to fall from 5% to 4%. The industry also faces a shortage of skilled workers, which can, however, be observed in other European countries as well.

Organizational structure

A typical data center divides responsibility for the equipment among three organizational areas.

System technology

Technician with a laptop at a rack

System technology is responsible for the hardware. System technicians repair defective devices, carry out technical installations on the equipment, take care of the cabling, and so on. Employees in system technology usually come from the electrical engineering professions.

System administration

System administration is responsible for managing the machines; one therefore speaks of system administration or simply administration. Its employees are responsible for the software configuration of the machine park. If, for example, the system technology department physically connects a new hard disk drive, system administration must ensure that the drive is also recognized and usable by the computers on the software side.

System administration is responsible for keeping the machines running, restoring crashed machines, installing software and monitoring the systems. System administrators are also responsible for data security; for example, they draw up backup plans and ensure that they are implemented. The software side of data protection is likewise the responsibility of system administration.

The administration of application software (databases, communication systems, etc.) can also fall to system administration if no separate department has been designated for such tasks.

System administrators usually have IT training.

Operating

Operating tends to take on auxiliary tasks, ranging from changing printer paper, cutting and distributing printouts, and loading magnetic tapes, to setting priorities in process flows. Operator was still a highly skilled profession in the 1970s; at that time, operators ensured that the mainframe was used to full capacity. To do this, they analyzed the pending jobs according to their resource requirements and, for example by manually starting various processes, ensured optimal system utilization and avoided resource-related disruptions, in particular deadlocks. Nowadays this problem is usually alleviated by cheaper and more powerful hardware as well as increasingly intelligent operating systems.

With increasing automation, many tasks previously performed by operators have become superfluous. Their main tasks today include restarting computers after a crash, taking over components newly installed by system technicians or administrators into normal operation, and detecting and reporting anomalies in operation, especially network operation. The costs caused by outages easily justify the personnel costs.

Rooms

Network cables routed in the floor

A state-of-the-art data center has two kinds of rooms: a security room for the so-called fine technology (the IT systems) and a room for the so-called rough technology (air conditioning, power supply, extinguishing agents, etc.). A data center may be equipped with a spacious raised floor through which not only the cabling but also cool air from the air conditioning system is routed to the devices. Network cabinets are arranged in closed rows facing each other front-to-front or back-to-back. Because the devices draw in air at the front and blow it out at the rear, so-called cold aisles and hot aisles form: in front of the cabinets, cool air is blown out of the floor, and behind the cabinets, warm air is extracted at the ceiling. One measure to increase the efficiency of cooling in the data center is to enclose the cold aisles (cold aisle containment) into which the cold air flows from the raised floor.

Server racks with high heat losses (more than 10 kW per rack) can no longer be cooled adequately via a pressurized raised floor. Special rack cooling systems with a water or refrigerant circuit dissipate the heat directly at the rack.
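
Why such racks exceed what a raised floor can deliver follows from a simple heat balance. The sketch below uses standard textbook constants for air and an assumed intake-to-exhaust temperature rise; the figures are illustrative, not from the article:

```python
# A rough heat-balance sketch (constants and temperature rise assumed): the
# heat load P must be carried off by air that warms by delta_t between intake
# and exhaust, so the required volume flow is P / (rho * c_p * delta_t).

RHO_AIR = 1.2    # kg/m^3, density of air at about 20 degC
CP_AIR = 1005.0  # J/(kg*K), specific heat capacity of air

def required_airflow_m3h(heat_load_w: float, delta_t_k: float) -> float:
    """Cooling-air volume flow (m^3/h) needed to absorb heat_load_w."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k) * 3600.0

# A 10 kW rack with an assumed 12 K temperature rise:
print(f"{required_airflow_m3h(10_000, 12):.0f} m^3/h")  # about 2500 m^3/h
```

Roughly 2500 m³/h of cold air per rack is difficult to push through perforated floor tiles for an entire row of cabinets, which is why liquid cooling takes over at these power densities.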

The high power density and the associated heat development not only require complex cooling measures; the noise of the fans in the devices can also make hearing protection necessary when working in the fine technology room. Direct rack cooling is more energy-efficient and quieter than indirect air cooling.

The requirements for the availability of data centers are high. They are therefore equipped with redundant air conditioning systems, uninterruptible power supplies , fire alarm systems and an extinguishing system.

Security

Secured entrances to the data center on the BND premises in Pullach

Depending on the administrative environment, data centers are subject to different security requirements. Usually only access is controlled, and the rooms are protected by alarm systems. Some data centers are even housed in bunkers several floors underground that keep out pressure waves, heat and ionizing radiation and are sometimes even EMP-proof. In any case, access is strictly regulated.

Fire protection and fire prevention are given special priority. In addition to fire barriers, there are extinguishing systems designed to minimize damage to the hardware: water, foam or powder extinguishing systems can cause more damage to a mainframe than a burnt cable. For this reason, halons are often used as extinguishing agents in modern data centers. In contrast to extinguishing gases used previously, their effect is based mainly on chemically interrupting the fire (similar to powder acting as a radical inhibitor), whereas gases such as nitrogen or argon merely smother the flames by displacing oxygen when the room is flooded. These gases are not electrically conductive, so short circuits are avoided, and by limiting the gas concentration to a fixed value the rooms remain accessible to healthy people. Other gases such as carbon dioxide are even prohibited in newly built data centers because of their toxic effect. Releasing extinguishing gas, however, creates overpressure, so pressure relief flaps are necessary. Early fire detection is also problematic: conventional point-type fire detectors are poorly suited to data centers, because cooling systems can create warm-air cushions up to 1 m thick under the ceiling that prevent smoke from reaching the detector. In addition, modern server racks operate with high air exchange rates that greatly dilute the smoke. The industry association BITKOM therefore recommends the use of highly sensitive aspirating smoke detection systems.

As an alternative or in addition to fire detection and automatic extinguishing, there are also oxygen reduction systems, which lower the proportion of oxygen in the air (from 21% to between 13.5% and 17%) and replace it with inert gas (usually nitrogen), so that a fire can hardly ignite, or not at all. The remaining O2 partial pressure at, for example, 15% corresponds to that at an altitude of 2700 m, so there are only minimal restrictions for healthy people.
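
The quoted altitude equivalence can be checked with the barometric formula; the following sketch assumes a standard atmospheric scale height of about 8400 m:

```latex
% O2 partial pressure with a reduced 15 % oxygen fraction at sea level:
\[
p_{\mathrm{O_2}} = 0.15 \cdot 1013\,\mathrm{hPa} \approx 152\,\mathrm{hPa}
\]
% O2 partial pressure at 2700 m with the normal 20.9 % fraction:
\[
p_{\mathrm{O_2}} = 0.209 \cdot 1013\,\mathrm{hPa} \cdot e^{-2700/8400}
  \approx 0.209 \cdot 734\,\mathrm{hPa} \approx 153\,\mathrm{hPa}
\]
```

Both values come out at roughly 152 to 153 hPa, consistent with the altitude figure quoted above.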

Important data backups are archived in a separate fire compartment, in some cases with higher protection ratings (maximum temperature in case of fire: 50 °C).

Dust deposits occur in the data center as a result of construction, maintenance or repair work. Because of its coarse grain, construction dust even has an abrasive effect and can damage moving parts such as fans as well as backup components. Dust impedes the necessary heat dissipation and can cause corrosion, overheating and failures. Soot from smoke, e.g. after fire damage, reduces the heat dissipation of data center components and, like zinc whiskers, is electrically conductive; this increases the risk of short circuits on the assemblies and electronics of data center components. Construction work in the data center can generate so much dust that the operation of the IT systems is seriously endangered and IT security drops rapidly. ISO 14644-1 class 8 is the recognized standard for the cleanliness of data centers; leading hardware manufacturers require compliance with this cleanroom class for the proper operation of their hardware. DIN EN ISO 14644 was originally intended for cleanrooms and "critical environments" but is increasingly finding its way into data centers.

Backup data center

To prepare for disasters (e.g. an earthquake, an attack or a fire) or downtime (planned, e.g. for updates, or unplanned in the event of malfunctions), there is the backup data center (sometimes also called geo-redundancy) as a redundancy scenario. A second data center, spatially clearly separated from the original one (depending on requirements and the failure scenarios considered, in a different district, country or even on another continent), duplicates it as completely as possible. The duplication covers the hardware, the software and the current data. If the original data center fails, operation can continue in the backup data center. The limiting factors are the amount of data that can be duplicated per unit of time and the switchover time. For cost reasons, partial redundancy, e.g. of only the business-critical systems and data, is also common.
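
The two limiting factors can be made concrete with a small model of asynchronous replication; all figures in the sketch below are assumed for illustration:

```python
# An illustrative model (all figures assumed) of the limiting factors named
# above: how long an asynchronous replica needs to catch up after a burst of
# changes, which bounds the data at risk if the primary site fails meanwhile.

def catch_up_time_s(change_rate_mb_s: float, link_mb_s: float,
                    backlog_mb: float) -> float:
    """Seconds until the backup site has caught up with the primary."""
    if change_rate_mb_s >= link_mb_s:
        return float("inf")  # the link can never catch up; the backlog grows
    return backlog_mb / (link_mb_s - change_rate_mb_s)

# Assumed example: 40 MB/s of ongoing changes, a 100 MB/s replication link,
# and a 6 GB backlog after a maintenance window:
print(f"catch-up time: {catch_up_time_s(40, 100, 6000):.0f} s")  # 100 s
```

How much lag and switchover time are acceptable must be decided per system, which is one reason why often only the business-critical systems and data are duplicated.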

High-security data centers are housed up to a few dozen or even hundreds of meters underground in tunnels, bunkers and similar environments.

Emergency plans and equipment often provide for the employees' workspaces to be copied 1:1, down to the equipment of the individual workstation, so that work can continue on the premises of the backup data center after only a very short delay.

The reason for this expenditure of time, personnel and money is understandable: the failure of a data center is regarded as a business risk that can threaten the company's existence, up to and including bankruptcy.

So that the duplicated equipment does not stand idle waiting for emergencies, which occur only rarely, this computing capacity is usually put to use as well. A distinction is therefore made between production and test systems. For example, the server used for production can be located in the main data center, while an identical server in the backup data center is used only for development and testing. If the main data center fails, the development and test server is used to keep the production systems running. Development is then not possible during that time, but the more critical production does not fail.

Combining a main and a backup data center, however, also carries a growing risk that necessary expansions and additions (climate control, energy, access, monitoring, energy efficiency) will be made late or not at all, because the presence of a backup data center suggests an additional level of security.

Emergency management

Emergency or disaster management is essential in any data center configuration. In an emergency, everyone involved must know what to do and whom to inform. The basis for this knowledge and action is the emergency manual, which must contain all relevant information about the data center, the systems and infrastructure in use, the "rapid reaction force", and a schedule listing all persons involved and their contact details. Realistic and periodic tests must be carried out to verify the emergency scenarios.

Energy consumption and use of energy

The roughly 52,000 German data centers consumed 9.7 terawatt-hours of electricity in 2011. Only part of this goes to the actual operation of the IT; around 50 percent is caused by cooling, UPS and other components. Large data centers can have a continuous electrical power demand of 100 MW.
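
The overhead described here is commonly expressed as Power Usage Effectiveness (PUE), the ratio of total facility energy to the energy consumed by the IT equipment; a minimal sketch:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT energy.
# If roughly half the electricity goes to cooling, UPS and other components,
# as the figures above suggest, the PUE works out to about 2.0.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE; the theoretical ideal without any overhead would be 1.0."""
    return total_facility_kwh / it_kwh

print(pue(total_facility_kwh=10.0, it_kwh=5.0))  # 2.0
```

By this measure, the consumption split described above corresponds to a PUE of about 2; modern facilities aim for values much closer to 1.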

All measures that contribute to more efficient resource utilization of the IT equipment, a reduction in CO2 emissions, and higher energy efficiency are subsumed under the term Green IT. Green IT has been one of the key topics in the IT industry since 2006 and has largely shed the accusation of being a mere marketing tool. Vendor-independent recommendations for designing a data center environment along the lines of Green IT can be found on the websites of BITKOM, Gartner, the Borderstep Institute and the Uptime Institute.

District heating extraction

Because of their high energy demand, which entails a considerable cooling requirement for managing the waste heat, data centers have great potential for district heating extraction. To increase energy efficiency, the waste heat generated during operation can be exploited with large heat pumps and the recovered heat fed into local and district heating networks. As of 2017, this is already practiced, for example, in data centers in Stockholm and Helsinki. With liquid cooling, temperatures of up to 60 °C can be reached, which is sufficient to feed directly into fourth-generation district heating networks without the use of heat pumps. Alternatively, the heat can be fed directly into cold district heating networks, since these operate at a temperature level low enough to manage without a heat pump on the producer side. The latter is practiced, for example, in cold district heating systems in Zurich, Wallisellen and Heerlen.
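
Why low-temperature waste heat requires a heat pump, while a 60 °C liquid-cooling return does not, can be illustrated with the Carnot bound on the coefficient of performance (COP); the temperatures below are assumed for illustration:

```latex
% Lifting 35 degC (308 K) server waste heat to a 75 degC (348 K)
% district heating feed (temperatures assumed for illustration):
\[
\mathrm{COP}_{\mathrm{Carnot}} = \frac{T_{\mathrm{hot}}}{T_{\mathrm{hot}} - T_{\mathrm{cold}}}
  = \frac{348\,\mathrm{K}}{348\,\mathrm{K} - 308\,\mathrm{K}} \approx 8.7
\]
```

Real heat pumps reach only part of this ideal value, but the bound shows that the modest temperature lift from server waste heat to a district heating feed is thermodynamically cheap.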

Potential for load shifting

Data centers also offer considerable flexibility potential in smart power grids. Since data centers are usually only partially utilized and some computing operations are not time-critical, computing load can be shifted both spatially and temporally as needed. In this way, consumption can be deliberately reduced or increased regionally without affecting the services provided. Further potential arises from the systems usually installed for uninterruptible power supply, such as batteries and emergency generators, which can also be used to provide balancing power or to cover peak loads; in this way, overall system costs could be reduced. It is considered possible that European data centers will have a load-shifting potential of a few GW up to a few dozen GW by 2030.

Tours

Many data centers are monitored with considerable personnel and technical effort and may only be entered by authorized persons. Some data centers show parts of their systems to small guided visitor groups on open days. Commercial data centers and providers of data center space offer tours, especially for potential new customers.

Web links

Commons: Data centers - collection of images, videos and audio files
Wiktionary: Data center - explanations of meanings, word origins, synonyms, translations
