Process control system

from Wikipedia, the free encyclopedia

Control center of the Lemgo model factory (2010)

A process control system (PLS, also distributed control system, DCS, or process control system, PCS) is used to manage a process plant, for example a refinery. It typically consists of so-called process-related components (PNK), operating and monitoring stations (BUB, also called display and operating components, ABK), and engineering components (EK, engineering station ES).

Process control systems are mostly used for larger plants and usually comprise a package of basic mechanisms for measuring, controlling, operating and monitoring the process.

The following additional mechanisms are usually also available:

  • Batch system for recipe operation
  • Route system for the selection of conveyor routes
  • Technical diagnostic options
  • Data interfaces to external systems
  • Computing units for more demanding control technology
  • Data backup systems.

The process-related components are installed in control cabinets located in switch rooms. They perform the actual control and regulation tasks and are connected to sensors (e.g. pressure transmitters) and actuators (e.g. control valves). The operating and monitoring stations are used to visualize the process plant and are located in the control room, which is constantly manned by plant operators. Process-related components and operating and monitoring stations are connected to one another via a bus system.


History

Process control systems are available today in countless different designs. The historical development of PLS technology can be divided into four stages.

Manual operation before 1960

The measured variables were evaluated and displayed on the spot, without the data being collected centrally. For example, a device for measuring pressure was simply built into a pipe; to find out the pressure in the line, one had to walk to the site and read the display. The actuators were not yet automated either: to shut off a line, a valve had to be closed on site. The plant personnel therefore had to constantly read measured values in the plant and operate the corresponding actuators in order to keep the process in the desired range. As a result, only small plants were possible, and a large number of personnel was required.

Parallel systems from around 1960

Individual control tasks were taken over by pneumatic or electrical control devices. Important measured values were transmitted electrically and displayed in the first control rooms, so that key measurements and actuators could be managed from a central location. Because of the high level of responsibility, the control room staff had to be specially trained. Since a separate device was needed for every sensor and actuator, in some cases hundreds of operating and monitoring instruments had to be installed in the control rooms of the first larger plants: a switch for each valve and a display for each measurement. Huge control rooms were built, in which confusion often reigned.

Central systems from around 1970

The introduction of the microcontroller made it possible for the first time to automate processes. A central computing unit could carry out actions independently, for example evaluate certain process states and then initiate the desired measures. Because the operating personnel were partially relieved, more complex and larger plants could be built. The first visualization systems came onto the market and tried to contain the chaos of displays and switches in the control rooms. An attempt was now made to collect as many signals as possible centrally, since this flood of information could be managed with the help of the computer. For the first time, the plant operators had most of the information available in the control room, which further simplified operation. A disadvantage of centralization, however, was that if the central processing unit failed, the entire plant came to a standstill. This low availability was countered with redundancy concepts.

Decentralized systems from around 1985

Decentralized concepts became affordable thanks to falling prices on the semiconductor market. Several computing units now control the process and communicate with one another via a bus system: no longer one large controller that controls everything, but several small ones in constant communication with each other. If one controller fails, the whole plant does not come to a standstill, because the other controllers keep running; this increased system availability. The operating and monitoring stations are connected to the same bus system. This separation of visualization and control makes it possible to use specialized products and to separate them spatially. The controllers are mostly special microcontroller-based systems mounted in the plant's control cabinets, where they are connected to the process peripherals. They are extremely robust, fail-safe, modular and available in various performance classes. The visualization is mostly implemented on PCs or PC-like products. For the first time, several operator stations could be set up with reasonable effort.

This decentralized structure made it possible to implement plants of a size and complexity comparable to today's. Suppose a certain plant needs ten controllers and three operator stations; for a larger plant, the number of controllers is simply increased and additional operator stations are set up for the additional staff. In practice, however, plants could only be built up to a certain size, because the communication load placed such demands on the system buses that even the operation of a simple valve could become too slow. Engineering costs also had a limiting effect: few technical standards were used, so the systems of different manufacturers differed greatly. As a result, engineering staff were usually trained on only one or a few manufacturers' systems, and there was therefore little competition.

Decentralized systems from around 1995

The use of standard PC architecture and software enabled costs to be reduced further. Most system buses now use Ethernet. It is therefore no longer necessary to use special components; well-engineered, high-performance standard products from the IT world can be used instead. Since several manufacturers use the same techniques, it is now comparatively easy for an engineer to familiarize himself with the products of several manufacturers, which stimulates competition. Compared with older systems, users have the advantage of already knowing mouse-and-keyboard operation from the office environment. The use of fieldbuses, which connect devices and I/O peripherals, also brings great cost savings. Not only can the current measured value be evaluated, as was customary in the past, but settings can also be made and parameters read out. Furthermore, the I/O peripherals can be moved further out into the field, which brings cost advantages in cabling.


Outlook

The hardware of process control systems is moving more and more away from specialized systems towards common, inexpensive IT components. In addition, further decentralization can be observed, bringing intelligence into ever smaller, field-level and mobile units. The use of predefined engineering elements and project-planning aids further reduces engineering costs. The first systems based on Linux are beginning to gain a foothold on the market, while Microsoft Windows, in the form of Windows CE, is now also penetrating the smaller, field-level and mobile units. The integration of third-party systems (packages) is also gaining in importance: ERP, MES and CMMS interfaces are becoming ever better integrated.

Architecture of process control systems

Today there are innumerable different architectures of process control systems. However, the most common are the single-bus architecture and the server architecture.

Single bus architecture

Here the PNK and BUB stations are all attached to one bus. Each BUB station can fetch the desired data from each PNK and, in turn, issue commands to it. Prominent representatives of this architecture are Emerson Electric with DeltaV, ABB with Freelance and Yokogawa with Centum VP. This architecture combines high availability with a decentralized distribution of intelligence. Disadvantages are the often complicated data management (image changes, downloading of clients, engineering servers) and the often high bus load in large plants.

Server architecture

Two bus systems are set up here. The system bus connects all PNKs; the terminal bus connects all BUB units with the server. The BUB units and the PNKs have no direct physical connection. The server cyclically collects the required data from all PNKs and makes it available to the BUB units on the terminal bus. Prominent representatives of this architecture are Siemens with PCS 7 and ABB with System 800xA. Its advantage is that the bus load can be easily scaled thanks to the separate bus systems, and that third-party applications (MES, ERP, ...) can easily tap into the server. Disadvantages are the loss of operability if the server fails (which makes redundancy concepts necessary) and the additional hardware required.

Mixed versions of these architectures are often in use today. Special architectures are used for special areas of application (for example particularly high availability, high processing speeds, low costs or special flexibility).
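The cyclic data collection of the server architecture can be sketched in a few lines of Python. This is only an illustration of the data flow, not any vendor's API; the class names, tag names and values are invented.

```python
class PNK:
    """A process-related component holding current process values."""
    def __init__(self, values):
        self.values = values          # e.g. {"P4711": 3.2}

class Server:
    """Polls all PNKs over the system bus and caches their values."""
    def __init__(self, pnks):
        self.pnks = pnks
        self.cache = {}

    def poll_cycle(self):
        # cyclic acquisition: one sweep over all PNKs on the system bus
        for pnk in self.pnks:
            self.cache.update(pnk.values)

    def read(self, tag):
        # BUB stations on the terminal bus query the server's cache,
        # never a PNK directly
        return self.cache.get(tag)

server = Server([PNK({"P4711": 3.2}), PNK({"T0815": 65.0})])
server.poll_cycle()
pressure = server.read("P4711")   # what an operator station would display
```

The sketch also shows the architecture's weak point: if the `Server` instance disappears, the operator stations lose all access to the process, which is why real installations use redundant servers.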

Identification features of a process control system

Originally, a PLS could be recognized by two features:

  • a PLS is time-deterministic
  • a PLS has a single database for all systems involved.

Time-deterministic behavior means that the user program is executed in fixed task cycles. Fixed processing times are assigned to these task cycles and are normally adhered to. For example, a task can run once per second; even if the associated user program is processed in 200 ms, it is still started only once per second. (Running just one task on the main processor would, of course, be a waste of resources.) If an endless loop is executed in this user program, due to a programming error or for other reasons, the processing time grows from 200 ms towards 1000 ms; however, the user program is aborted after one second at the latest, because the task has ended. When the task restarts, the user program is restarted as well.

If the user program of a process plant is broken down into several parts that are processed by different tasks, it is guaranteed that if one part of the program fails because of faulty code, the other parts are still executed by their tasks. If an endless loop runs in one task, it occupies the main processor, but it is aborted at the latest when another task is scheduled for execution. Subsystems can thus be decoupled from one another in terms of programming, and performance can be optimized: temperature measurements, whose values change over minutes, can be placed in a five-second task, and fast-changing pressure measurements in a 200 ms task. With such a system one can state deterministically (i.e. with certainty) that the pressure measurement is evaluated every 200 ms, regardless of whether other parts of the program are faulty. At the same time, process variables with low temporal dynamics (such as a temperature measurement) do not place unnecessary load on the main processor.
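The fixed-cycle behavior described above can be illustrated with a minimal Python sketch. All names are hypothetical and time is simulated rather than taken from a real-time kernel: a dispatcher starts each task only at its period boundary, so a 200 ms pressure task and a 5 s temperature task run deterministically side by side.

```python
class Task:
    """A user-program part bound to a fixed task cycle (period in ms)."""
    def __init__(self, name, period_ms, action):
        self.name = name
        self.period_ms = period_ms
        self.action = action
        self.runs = 0

    def due(self, now_ms):
        # the task is started only at its period boundary
        return now_ms % self.period_ms == 0

def run_cycle(tasks, duration_ms, tick_ms=100):
    """Advance simulated time and dispatch every task that is due."""
    for now in range(0, duration_ms, tick_ms):
        for t in tasks:
            if t.due(now):
                t.action()
                t.runs += 1

pressure_samples = []
temperature_samples = []

tasks = [
    Task("pressure", 200, lambda: pressure_samples.append("p")),      # fast signal
    Task("temperature", 5000, lambda: temperature_samples.append("t")),  # slow signal
]

run_cycle(tasks, duration_ms=10_000)
# over 10 s, the 200 ms task runs 50 times, the 5 s task only twice
```

A real controller would additionally abort a task whose action overruns its cycle, as described above; that watchdog aspect is omitted here for brevity.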

A single database for all systems involved means that process objects (e.g. a pressure measurement) do not have to be created twice, once in the PNK and once in the BUB. For the pressure measurement, a program must run in the automation station that records the measured value from the hardware and carries out limit-value monitoring. If the measured value exceeds a set limit, an alarm is triggered (e.g. "Boiler 42 has overpressure"), which is handled by the alarm system. Furthermore, the measured value is shown by the visualization system so that the plant operator is informed. A tag-logging system (curve archiving) can also record the measured values so that trend curves can be displayed later. So that the pressure measurement is available in all of these systems without being configured repeatedly, they must all obtain their information from a single database.
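The idea of one process object feeding limit monitoring, alarming and curve archiving can be sketched as follows. This is a simplified illustration with invented names and values, not any vendor's tag model.

```python
class Tag:
    """One process object, configured once and shared by all subsystems."""
    def __init__(self, name, high_limit):
        self.name = name
        self.high_limit = high_limit
        self.trend = []       # tag-logging archive (curve archiving)
        self.alarms = []      # queue consumed by the alarm system

    def update(self, value):
        self.trend.append(value)             # record for later trend curves
        if value > self.high_limit:          # limit-value monitoring
            self.alarms.append(f"{self.name} has overpressure ({value} bar)")
        return value                          # current value for visualization

boiler42 = Tag("Boiler 42", high_limit=5.0)
for v in [3.2, 4.8, 5.6]:
    boiler42.update(v)
# the third sample exceeds the limit and raises exactly one alarm
```

Because alarming, visualization and archiving all read the same `Tag` object, the pressure measurement is defined in one place only, which is the point of the single database.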

Programmable logic controllers and process control systems have grown together today.

Products and Manufacturers

A market overview of the process control systems widespread in Germany can be found on the website of the SPS-Magazin .
