OpenVZ

Maintainer: Virtuozzo
Developer: Community project
Initial release: 2005
Current version: 7.0 (July 26, 2016)
Operating system: Linux
Category: Virtualization
License: GPLv2 (free software)
Website: openvz.org

OpenVZ is supported by libvirt.

OpenVZ (Open VirtualiZation) is software for Linux that virtualizes at the level of the operating system.

Details

OpenVZ creates multiple isolated containers for operating systems. The processes of all these operating systems run on a single shared kernel, yet the operating systems in the containers remain largely independent of one another: for example, they can be shut down independently, and each has its own root account.

Compared to virtual machines from VMware or to paravirtualization solutions such as Xen, OpenVZ offers less choice of operating systems: both the host and the guests must be Linux; only the Linux distributions may differ. In return, OpenVZ offers better performance, scalability, dynamic resource management and easier administration. The virtualization overhead of OpenVZ is negligible.

OpenVZ was released as open source in 2005 by the manufacturer of the commercial Virtuozzo product, from which it is derived.

OpenVZ consists of kernel extensions and user-level tools.

Kernel

The OpenVZ kernel is a modified Linux kernel that introduces the concept of a Virtual Environment (VE). The kernel provides virtualization, isolation, resource management and checkpointing.

Virtualization and isolation

Each VE is a separate entity that, from its owner's standpoint, looks like a physical server. This means that each VE has, among other things, its own:

Files
System libraries, applications, virtualized /proc and /sys, etc.
Users and Groups
Each VE has the user "root" as well as other users and groups.
Process tree
A VE sees only its own processes, descended from its own init. PIDs (process IDs) are virtualized, so that, for example, the PID of init is 1 in every VE.
Network
A virtual network device gives a VE its own IP address, as well as its own set of network filter (iptables) and routing rules.
Devices
If necessary, a VE can be granted access to real devices such as network interfaces, serial ports, hard disk partitions, etc.
IPC objects
Shared memory, semaphores, messages
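
The virtualized process tree can be observed directly from the host. A minimal sketch, assuming a container with the illustrative ID 101:

    # list all processes inside VE 101; init appears with the virtualized PID 1
    vzctl exec 101 ps ax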

Resource management

Because all VEs share the same kernel, resource management plays a crucial role: each VE must stay within its allocated resource limits and must not affect the other VEs. Ensuring this is exactly the job of resource management.

The OpenVZ resource management consists of three subsystems: two-level disk quotas, the CPU scheduler and user beancounters. All of these resources can be changed on the fly; no restart is necessary. For example, if a VE is to be given more RAM, the corresponding parameters can simply be changed at runtime. This is not easily possible with VM-based virtualization solutions.
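
A sketch of such a runtime change, using the kernel-memory parameter described later in this article (the VE ID and the barrier:limit values in bytes are illustrative):

    # raise the kernel memory limits of VE 101 while it keeps running
    vzctl set 101 --kmemsize 12752512:13238272 --save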

Two-level disk quotas

The first level is the per-VE disk quota; the second level is the standard per-user and per-group UNIX disk quota within a VE.

To allocate more disk space to a VE, one only needs to raise the corresponding disk quota.
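
A sketch of such an adjustment, using the --diskspace soft:hard syntax shown in the vzctl section below (values are in 1 KB blocks and purely illustrative):

    # grant VE 101 a 10 GB soft and an 11 GB hard disk quota
    vzctl set 101 --diskspace 10485760:11534336 --save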

CPU scheduler

The CPU scheduler in OpenVZ is a two-level implementation of the fair-share scheduling strategy.

On the first level, the scheduler decides which VE receives the next CPU time slice, based on the per-VE cpuunits value. On the second level, the standard Linux scheduler decides which process within the selected VE gets that time slice, using the usual Linux process priorities.

The OpenVZ administrator can assign different cpuunits values to different VEs. In that case, CPU time is distributed proportionally to the values specified for the VEs.

There is also the option of capping CPU time; for example, a VE can be limited to 10% of the CPU time.
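
A sketch of both mechanisms (VE IDs and values are illustrative; --cpulimit is the vzctl option for the percentage cap mentioned above):

    # give VE 101 twice the scheduling weight of VE 102
    vzctl set 101 --cpuunits 2000 --save
    vzctl set 102 --cpuunits 1000 --save
    # additionally cap VE 102 at 10% of the CPU time
    vzctl set 102 --cpulimit 10 --save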

User beancounters

The user beancounters are a set of per-VE resource counters, limits and guarantees. There are around 20 parameters, carefully chosen to cover all aspects of VE operation. They ensure that each individual VE uses only its allocated resources and cannot affect the host system or other VEs.

The controlled resources are RAM and various in-kernel objects such as IPC shared memory segments, network buffers, etc. Each resource can be inspected in /proc/user_beancounters, which displays five values per parameter: current usage, maximum usage, soft limit, hard limit and fail counter.

The exact meaning of the soft and hard limits differs from parameter to parameter. As a general rule, whenever a resource exceeds a limit, the corresponding fail counter is incremented. The VE owner can therefore analyze the output of /proc/user_beancounters to find possible causes when a problem arises in the VE.
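
A small sketch of such an analysis, assuming the usual /proc/user_beancounters layout in which the fail counter is the last column and the first two lines are a version header:

    # print all beancounter lines whose fail counter is non-zero
    awk 'NR > 2 && $NF > 0' /proc/user_beancounters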

Checkpointing and Live Migration

Live migration and checkpointing are features that OpenVZ released in mid-April 2006. They allow a VE to be migrated from one physical server to another without stopping or restarting it. The process is known as checkpointing: the main idea is to freeze a VE and save its complete state, including all processes, to a file. This file can then be transferred to another machine, where all processes are restored. The entire transfer of the VE takes only a few seconds and therefore causes no downtime, just a slight delay.
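A sketch of the two halves of this process using vzctl's checkpointing subcommands (the VE ID and dump path are illustrative):

    # freeze VE 101 and dump its complete state to a file
    vzctl chkpnt 101 --dumpfile /tmp/ve101.dump
    # ... transfer the dump file to the target machine ...
    # restore all processes from the dump on the target host
    vzctl restore 101 --dumpfile /tmp/ve101.dump
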

Because every part of the VE state, including open network connections, is saved, the whole migration process is completely transparent to the user. For example, a long-running database transaction may be in progress while the VE is moved; the user does not notice that the database is meanwhile running on another server.

This feature enables scenarios such as upgrading a server without a reboot. If a database or another application in a VE needs more RAM or CPU resources, one can simply buy a better machine, migrate the VE to it live, and then raise the corresponding limits. If, for example, additional RAM has to be installed, all VEs can be migrated to another server, the machine upgraded, and all VEs migrated back.
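
In practice, the checkpoint-transfer-restore sequence is wrapped in a single helper tool; a sketch using the vzmigrate utility that ships with OpenVZ (hostname and VE ID are illustrative):

    # live-migrate VE 101 to another OpenVZ host without stopping it
    vzmigrate --online target.example.com 101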

User-level tools

OpenVZ provides both command-line tools for managing VEs (vzctl) and tools for managing applications inside VEs (vzpkg).

vzctl

vzctl is a simple, high-level command-line tool for managing VEs.

vzctl create VEID [--ostemplate <name>] [--config <name>]
This command creates a new VE with the given numeric ID, the given OS template (a Linux distribution) and the resources defined in the given configuration file. The two parameters --ostemplate and --config are optional; the main configuration file contains default values for both.
vzctl start VEID
Starts the specified VE. Starting means creating a virtual environment in the kernel, initializing all resource management parameters and launching the VE's /sbin/init in this environment.
vzctl stop VEID
Stops the specified VE. A VE can also be stopped or restarted with its own /sbin/halt or /sbin/reboot commands.
vzctl exec VEID <command>
Runs the command <command> in the specified VE. For example, to display all processes in VE 102, use vzctl exec 102 ps ax.
vzctl enter VEID
Opens a shell in the VE. This is useful, for example, when sshd is not running and the problem needs to be investigated.
vzctl set VEID --parameter <value> […] [--save]
Sets the specified parameter for the VE. Various parameters can be used here; for example, to add an IP address to a VE, run vzctl set VEID --ipadd x.x.x.x --save. To set the disk quota for the VE, use vzctl set VEID --diskspace soft:hard --save. To (re)define the kernel memory soft and hard limits for a VE, run vzctl set VEID --kmemsize barrier:limit --save.
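
Put together, a typical vzctl session might look like this (the VE ID, template name and IP address are illustrative; the template follows the Fedora Core 5 example used later in this article):

    # create and start a new VE from an OS template
    vzctl create 101 --ostemplate fedora-core-5
    vzctl start 101
    # give it an IP address and verify that it is running
    vzctl set 101 --ipadd 192.168.0.101 --save
    vzctl exec 101 ps ax
    # stop it again
    vzctl stop 101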

vzlist

vzlist shows the list of VEs.

vzlist [-a]
This command shows the status and resource consumption of the VEs, for example:

      VEID  NPROC  STATUS   IP_ADDR       HOSTNAME
       110     21  running  192.168.0.1   dns
       111      0  stopped  192.168.0.21  www

Templates and vzpkg

Templates are ready-made images used to create VEs. A template is a set of packages, and a template cache is a tar archive of a chroot environment with all of these packages installed. During vzctl create, the tar archive is unpacked. This technique allows a VE to be created within a few seconds.

The developers make template caches for the most common Linux distributions available for download on the project website.

vzpkg is a set of tools that makes creating a template cache much easier. It supports rpm- and yum-based repositories. To create a template, for example for Fedora Core 5, one needs a set of (yum) repositories containing the FC5 packages, plus a list of the packages to be installed. If a template needs to be customized, pre- and post-install scripts can be used as well. All of these parameters (repositories, package lists, scripts, GPG keys, etc.) make up the template metadata. From the template metadata, the template cache can be built automatically; one only has to run the vzpkgcache command. All specified packages are then downloaded, installed into a temporary VE, and the corresponding tar archive is generated.

It is also possible to create template caches for non-RPM-based distributions.
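
A sketch of this workflow, assuming template metadata named fedora-core-5 is installed (the name is illustrative):

    # build the template cache from the installed template metadata
    vzpkgcache fedora-core-5
    # the resulting cache can then be used to create new VEs
    vzctl create 101 --ostemplate fedora-core-5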

The most important features of OpenVZ

Scalability

OpenVZ uses the single-kernel model and therefore scales like the Linux 2.6 kernel itself: it supports up to 64 CPUs and 64 GB of RAM. A single VE can scale up to the complete host system, i.e. use all of its CPUs and all of its RAM. The VE's hardware is virtualized: the operating system running in the VE does not access the physical hardware of the host system directly, but goes through the OpenVZ interfaces. This makes it possible to migrate a server at runtime in order to take advantage of more resources or to compensate for hardware failures of the host system.

Density

With OpenVZ, hundreds of virtual environments can run on a single server; their number is limited mainly by the available RAM and CPU performance.

Bulk management

The administrator of the OpenVZ server can access the processes and files of all VEs. This simplifies the mass administration of many servers: security updates in the VEs can be rolled out with a simple script. This is an advantage over virtualization solutions such as VMware or Xen, which require a manual update for each virtual machine.
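
A sketch of such a script, assuming yum-based VEs and using vzlist's options for header-less, field-selected output (the update command is illustrative; it depends on the distributions installed in the VEs):

    # run a security update inside every running VE
    for ve in $(vzlist -H -o veid); do
        vzctl exec "$ve" yum -y update
    done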

Application scenarios

The following usage scenarios are common to all virtualization technologies. What distinguishes virtualization at the operating system level is its very low overhead, and that is exactly what makes these scenarios so attractive.

Security
It is possible to run different network services such as Apache, a mail server, a DNS server, etc. in separate VEs. If an intruder finds a security hole in one of these services and breaks into the system, he can initially damage only that service, because all the others live in separate VEs. This increases the overall security of the system if the network services previously ran together on one physical system, provided the intruder does not manage to find a security hole in OpenVZ itself. Security is reduced, however, if the services previously ran on physically separate systems.
Server consolidation
The majority of servers today are underutilized. With OpenVZ, such machines can be consolidated by migrating them into virtual environments. The savings lie in rack space, electricity and administration effort.
Hosting
Virtualization at the operating system level gives hosting providers the opportunity to offer very cheap VEs. Note that every VE has root access, so the owner of a VE can (re)install applications and also configure things such as iptables (firewall rules).
Development and testing
Developers and testers usually need access to several different Linux systems and must be able to reinstall them completely at any time. With OpenVZ, all of these systems can live on the same server, and operations such as creating a VE take only about a minute. Cloning a VE is also very easy: only the VE area and the configuration file need to be copied (see the sketch after this list).
Educational institutions
Each student can have their own VE. It is possible to work with different Linux distributions, and a new VE can be created in about a minute.
Thin client systems
Several application servers can be set up in VEs and accessed via thin clients. The OpenVZ mechanisms protect the application servers from one another in terms of resources, and different distributions can be provided on them in parallel.
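
A minimal sketch of the cloning step mentioned under "Development and testing", assuming the default OpenVZ layout with VE areas under /vz/private and configuration files under /etc/vz/conf (IDs and the IP address are illustrative):

    # clone the stopped VE 101 as a new VE 102: copy the VE area ...
    cp -a /vz/private/101 /vz/private/102
    # ... and the configuration file, then adjust per-VE settings
    cp /etc/vz/conf/101.conf /etc/vz/conf/102.conf
    vzctl set 102 --ipadd 192.168.0.102 --save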

Similar technologies

Other implementations of operating-system-level virtualization are LXC (Linux Containers) and Linux-VServer, as well as FreeBSD jails and Solaris Containers. The VServer technology, however, can be completely replaced by OpenVZ.
