Terminal server

The term terminal server refers both to a functional principle of electronic data processing and to the corresponding server software and hardware. With a terminal server, data is stored and programs are executed centrally on a "server" or "host", while input and output take place decentrally on user terminals (terminal devices or client software) connected via a network. In the era of the mainframe, this functional principle was the norm: programs ran on the mainframe, only the screen output was displayed on the client or terminal, and mouse and keyboard input was transmitted to the mainframe.

History

With the advent of computer screens and the interactive (dialog) applications they made possible, the computing world consisted of "hosts" (central computers, now called servers, then typically mainframes) and end devices (terminals). The terminals of that time were text or alphanumeric displays, so the only input device was a keyboard. The user keyed in data that was sent to the host for processing, and the host returned the result of the processing to the user's screen. The network required for such an installation needed very little transmission capacity by today's standards (a few kilobytes per second) and was typically connected to only one host. An example of such a network architecture was the Systems Network Architecture (SNA) from IBM, one of the first mainframe manufacturers. If the end devices were physically close together, inexpensive serial terminals were often used; there was no computer network in the current sense, only a set of serial data lines (such as RS-232 or RS-485 connections) running from the end devices to a central computer. The connection could also be made by modem over a telephone line. Such terminals were widespread on Unix central computers.

With the increasing spread of Unix systems, at that time also called "open systems" (because variants were available from several manufacturers), a graphical user interface was developed for this operating system in 1984: the X Window System. This technology made it possible to output graphics to computer screens. The special feature of the X Window System (X11 for short) was its ability to send output to remote terminals over computer networks; a dedicated network protocol, also called X11, was developed for this purpose and runs over TCP/IP. The receiving terminal required a so-called X server program for the display. In contrast to the old text terminals, this type of terminal (called an "X terminal") required its own local processor and a graphics card for screen control. As early as 1984, the X Window System offered all the properties of modern GUIs, such as mouse operation, multi-window output, and scalable, horizontally and vertically scrollable windows. The term terminal server became common with the X terminals.
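
The division of labour described above can be illustrated with the DISPLAY environment variable, which tells an X11 application on which X server (i.e. on which terminal) its output should appear. The following Python sketch is purely illustrative: the host name terminal.example.org and the use of the classic xclock client are assumptions, and in practice access control (xhost or xauth) would also have to be configured.

    import os
    import subprocess

    # Minimal sketch: start an X11 application so that its window appears on a
    # remote X server (the X terminal) rather than on the machine running it.
    # "terminal.example.org:0" is a placeholder; display number 0 corresponds
    # to TCP port 6000 on the terminal.
    env = dict(os.environ, DISPLAY="terminal.example.org:0")

    # xclock is a standard X client shipped with most X11 installations;
    # any other X11 program could be substituted here.
    subprocess.run(["xclock"], env=env, check=False)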

With the advent of personal computers, graphical user interfaces and decentralized (local) program installations became common and eventually standard. This meant that powerful processors and graphics cards were available at every workstation, the hardware basis for a terminal server infrastructure. In the commercial sector, however, PCs could never do entirely without server or mainframe programs; "terminal server" therefore became the name for software solutions that made it possible to display such centrally executed programs on PCs.

Today's manifestations

The ever-increasing graphics requirements of application programs led to growing bandwidth demands on computer networks when applications were used via terminal servers. The X11 protocol was unfavourable in this respect, since graphic information was not compressed. A new family of terminal server protocols was therefore developed; an early and very powerful one was the Independent Computing Architecture (ICA) from Citrix Systems, which introduced compression, caching and encryption into such protocols. The Virtual Network Computing (VNC) protocol was developed and later published as open source, and thus found widespread use in the Unix world.

Citrix began a cooperation with Microsoft to bring the terminal server principle to the then newly developed Windows NT. At that time (versions 3.1 and 3.51) Windows NT was multi-user capable, but could not run several graphical sessions at the same time ("multisession" capability). Citrix extended the Windows NT kernel accordingly, built the ICA protocol into Windows NT and was initially allowed to distribute this product, called "Citrix Terminal Server", exclusively. Microsoft later bought the rights to this technology (without the ICA protocol) from Citrix and developed its own network protocol, the Remote Desktop Protocol (RDP). In later Windows Server versions the technology was available directly as Windows Terminal Services, which has been called Remote Desktop Services since 2008.

A seamless desktop integrates the graphical user interface of the local computer with that of a remote server and blends the two desktops into one another by displaying only icons and windows. Both the seamless desktop and X11/xdm can display individual, remotely executed applications integrated into the local user interface, so the software does not need to be installed locally. If the local and the remote computer use the same protocol, a connection can be established even though the two computers run different operating systems. Since Windows 2000, the Professional editions of Windows have also made the screen available to one user at a time via RDP, while several users can connect to the same terminal server simultaneously.
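
The protocols mentioned above are ordinary network protocols. As a small illustration, the following Python sketch performs only the very first step of the RFB protocol used by VNC, in which the server announces its protocol version as a twelve-byte string such as "RFB 003.008\n" (RFC 6143). The host name and port are assumptions (5900 is the conventional VNC port for display 0), and a real client would have to continue with the security handshake and framebuffer setup.

    import socket

    # Minimal sketch: open a TCP connection to a VNC server and read the
    # ProtocolVersion banner that starts every RFB session.
    # "vncserver.example.org" is a placeholder host name; port 5900 is the
    # conventional port for display :0.
    HOST, PORT = "vncserver.example.org", 5900

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(12)          # e.g. b"RFB 003.008\n"
        print("Server speaks:", banner.decode("ascii").strip())
        # A complete client would now reply with its own version string and
        # proceed to the security handshake; this sketch stops here.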

In terms of the "terminal server" functional principle, today's Internet and cloud web applications can also be viewed as "terminal" or centralized programs. The virtualization of workstation computers is likewise referred to as terminal server operation or as a virtual desktop infrastructure. For remote access to such terminal services, IP networks, over which VPN tunnels are also possible, are common today. In a terminal server farm, several terminal servers are available that access shared network resources. Load balancing can take place at logon, so that each user connects to the terminal server that is serving the fewest users at that moment (a simple sketch of this selection rule follows below).
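
The logon-time load distribution described above corresponds to a simple "least connections" rule. The following Python sketch is only an illustrative model of that rule, not the mechanism of any particular product; the server names and session counts are invented for the example.

    # Illustrative "least connections" selection for a terminal server farm:
    # at logon, the new session is assigned to the server that currently
    # serves the fewest users. Server names and counts are invented examples.

    def pick_server(sessions_per_server):
        """Return the name of the server with the fewest active sessions."""
        return min(sessions_per_server, key=sessions_per_server.get)

    farm = {"ts01": 23, "ts02": 17, "ts03": 21}

    chosen = pick_server(farm)
    farm[chosen] += 1                       # the new user logs on there
    print("New session goes to", chosen)    # -> ts02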

Economics

For an installation with many workstations (clients) and a powerful central system (host), economic efficiency is assessed with an overall model that takes into account, for example, the following factors (a simplified calculation sketch follows the list):

  • Number of clients for simultaneous use (license costs)
  • Server performance and redundancy (hardware costs)
  • Effort for the clients and local backup (hardware maintenance costs and operating costs)
  • Effort for changes to the application (maintenance costs of the application)
  • Design of the network for the transfers (hardware and line costs)
  • Backup and protection of data stocks (failure and abuse risks)
  • Consolidation of the access mechanisms into a common concept (authenticity, authorization and risk of abuse)
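
Such an overall model ultimately amounts to adding up the cost categories listed above over a planning period. The following Python sketch only illustrates that bookkeeping; every figure in it is an arbitrary placeholder and is not drawn from any real installation.

    # Simplified total-cost sketch for a terminal server installation.
    # All figures used in the example call are arbitrary placeholders.

    def total_cost(clients, licence_per_client, server_hw, client_hw,
                   network, admin_per_year, years):
        """Sum one-off and recurring cost categories over the planning period."""
        one_off = server_hw + clients * client_hw + network
        recurring = years * (clients * licence_per_client + admin_per_year)
        return one_off + recurring

    # Example call with invented figures (100 thin clients over 5 years):
    print(total_cost(clients=100, licence_per_client=50, server_hw=20_000,
                     client_hw=200, network=5_000, admin_per_year=10_000,
                     years=5))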

Advantages

  • Since data management and data processing take place on the remote server, the local workstation (client) can be cheaper, simpler and more robust when thin clients are used instead of full-fledged desktop computers with their own hard drive, etc.
  • The server function can also be virtualized on a powerful server (host) alongside several other server functions and then does not require dedicated hardware of its own.
  • With optimized network protocols, data traffic can be significantly lower than when clients access central data that is then processed locally on a personal computer.
  • In the event of failure, theft or virus infection of the local end device (client), the data stored on the server is neither lost nor does it fall into unauthorized hands.
  • The individual workstation costs (Total Cost of Ownership, TCO) are significantly reduced in large installations.
  • The application software only needs to be installed and maintained once on the terminal server.
  • Only the terminal access software needs to be installed on the local PC systems (clients).
  • A redundant installation in a terminal server farm can bridge failures completely (hot swap).
  • Central administration and an easily controlled security concept are possible, in which the data never leaves the server environment.

Disadvantages

  • Centralization: the overall system is fundamentally dependent on the network, and the user does not have all the freedom he would have on a personal computer.
  • Software compatibility and the licensing model have to fit: not all standard software runs on a terminal server.
