Proxmox VE

[Screenshot: Proxmox VE 4.4 start page]

Open-source virtualization platform

Developer: Proxmox Server Solutions GmbH
License(s): AGPLv3 (open source)
Current version: 6.2 (May 12, 2020)
Kernel: Linux
Based on: Debian
Architecture(s): x86-64 with AMD-V or Intel VT
Installation medium: ISO image
Website: pve.proxmox.com

Proxmox VE (Proxmox Virtual Environment, PVE for short) is a Debian-based open-source virtualization platform for running virtual machines, with a web interface for setting up and managing x86 virtualization. The environment is based on QEMU with the Kernel-based Virtual Machine (KVM). In addition to classic virtual machines (guest systems), which also allow the use of virtual appliances, PVE offers Linux Containers (LXC).

The web interface simplifies much of the routine work: setting up, starting and stopping guests, creating backups, managing the network infrastructure, and the day-to-day operation of virtual machines and their associated storage on the host system. In addition, several PVE hosts can be joined into clusters based on the Corosync Cluster Engine and managed together. Virtual machines and their virtual disks can be migrated between these cluster nodes, which makes it possible to build high-availability clusters.
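
The cluster state shown in the web interface can also be read from PVE's REST API, which listens on port 8006. The following is a minimal sketch in Python using the requests library; the host name pve.example.com and the credentials are placeholders, and verify=False merely accepts the self-signed certificate of a default installation:

    import requests

    HOST = "https://pve.example.com:8006"   # placeholder host name

    # Log in: a successful call returns a ticket (used as a session cookie)
    # and a CSRF prevention token required for later write requests.
    auth = requests.post(
        f"{HOST}/api2/json/access/ticket",
        data={"username": "root@pam", "password": "secret"},
        verify=False,                        # self-signed certificate
    ).json()["data"]

    # /cluster/status lists the Corosync cluster itself and all of its nodes.
    status = requests.get(
        f"{HOST}/api2/json/cluster/status",
        cookies={"PVEAuthCookie": auth["ticket"]},
        verify=False,
    ).json()["data"]

    for entry in status:
        if entry["type"] == "node":
            print(entry["name"], "online" if entry.get("online") else "offline")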

A special feature of PVE is that any package from the Debian project can be installed on the host system using the Debian package manager (dpkg) and the Advanced Packaging Tool (apt) built on it. Proxmox's own packages are likewise managed and updated via dedicated repositories using these tools. PVE releases usually appear every two to four months, with longer cycles for larger changes. Proxmox VE is developed and supported by Proxmox Server Solutions GmbH in Vienna.
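
As a sketch of this package handling on a PVE 6.x host (which is based on Debian 10 "Buster"): the publicly accessible pve-no-subscription repository is entered as an ordinary APT source and the system is then updated with the standard Debian tools. Run as root; production systems typically use the enterprise repository instead:

    import subprocess
    from pathlib import Path

    # Proxmox's publicly accessible package repository for PVE 6.x,
    # entered as an ordinary APT source.
    repo = "deb http://download.proxmox.com/debian/pve buster pve-no-subscription\n"
    Path("/etc/apt/sources.list.d/pve-no-subscription.list").write_text(repo)

    # Proxmox packages are then updated with the usual Debian tools.
    subprocess.run(["apt-get", "update"], check=True)
    subprocess.run(["apt-get", "dist-upgrade", "-y"], check=True)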

History

Up to version 3.4, OpenVZ was used as the container technology; with version 4.0 it was replaced by LXC. Proxmox VE 5.1 is based on Linux 4.13 and supports both KVM-based guest systems and LXC-based Linux containers in parallel operation.

Functionality

In the simplest case, PVE is installed from the provided ISO image on a compatible x86-64 platform such as a blade server. The installer performs the entire installation process, essentially a Debian base installation, practically automatically and uses the entire first hard drive of the machine. Other installation methods are possible and are explained in the PVE wiki. The differences from a normal Debian installation lie primarily in the pre-selection of supported hardware: for example, a larger selection of the PHY Ethernet network chips common in servers, such as those from Intel, is supported directly, so no drivers need to be installed later. The Proxmox wiki offers various FAQs, how-tos and numerous video tutorials.

Proxmox VE supports any hardware on the host system that is also supported by Debian in the respective version, including unrestricted SMP systems with several processors and processor cores.

Simple administration tasks in PVE are carried out via the web interface. In addition, SSH access to the command line is available at root level, where extensive work and special configurations on the host system can be carried out with appropriate scripts and system programs.
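
For such scripted work, PVE also ships pvesh, a command-line wrapper around the same REST API that the web interface uses. A small sketch, run as root on the host, that lists all nodes:

    import json
    import subprocess

    # Query the node list through pvesh; --output-format json makes the
    # result machine-readable.
    out = subprocess.run(
        ["pvesh", "get", "/nodes", "--output-format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for node in json.loads(out):
        print(node["node"], node["status"])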

Architecture

Virtual machines (KVM)

On the one hand, Proxmox VE contains a Linux kernel that, with KVM, runs as a hypervisor (also known as a virtual machine monitor) directly on the hardware. As of version 2.6.21, the Linux kernel provides the paravirt_ops interface as a framework for virtualization applications.

The advantage of KVM is that almost any x86 operating system can be run under central administration without additional adjustments. This mode of operation is known as full virtualization or as a hardware virtual machine (HVM). For HVM, the host system requires hardware virtualization support in the form of AMD-V or Intel VT.
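
Whether a host offers this support can be read from the CPU flags, as the following short check illustrates (the flag vmx indicates Intel VT-x, svm indicates AMD-V):

    # Check /proc/cpuinfo for hardware virtualization support.
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())

    if "vmx" in flags:
        print("Intel VT-x available")
    elif "svm" in flags:
        print("AMD-V available")
    else:
        print("no hardware virtualization support detected")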

Linux Containers (LXC)

As of version 4.0, Linux Containers (LXC) replace the OpenVZ containers used previously (up to Proxmox VE 3.4). When upgrading a Proxmox cluster, OpenVZ containers can be converted to LXC. In contrast to virtual machines, containers have a lower memory footprint, but the guest is limited to Linux operating systems, because containers share the kernel of the host system.
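
As an illustration, a container can be created through the REST API as sketched below; the node name pve1, the storage and template names and the credentials are placeholders, and write requests additionally require the CSRF token returned at login:

    import requests

    HOST = "https://pve.example.com:8006"       # placeholder

    s = requests.Session()
    s.verify = False                             # self-signed certificate
    auth = s.post(f"{HOST}/api2/json/access/ticket",
                  data={"username": "root@pam", "password": "secret"}).json()["data"]
    s.cookies.set("PVEAuthCookie", auth["ticket"])
    s.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

    # Create container 200 from a previously downloaded OS template
    # (template name is a placeholder).
    r = s.post(f"{HOST}/api2/json/nodes/pve1/lxc", data={
        "vmid": 200,
        "ostemplate": "local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz",
        "storage": "local-lvm",
        "hostname": "ct200",
        "memory": 512,                           # MiB; containers are lightweight
    })
    r.raise_for_status()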

With the switch to LXC, Proxmox can use current Linux kernels. The complete integration of LXC into the Proxmox VE framework allows LXC to be used, from Proxmox VE 4.x onwards, with storage plugins such as Ceph, ZFS (locally or via iSCSI), NFS (Network File System), GlusterFS, iSCSI, Sheepdog or local storage (directories or LVM). Since Proxmox 4.4, DRBD9 is no longer directly supported due to license problems and can only be integrated using external plugins.
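
Which storage back ends are actually configured can be queried from the API as well, sketched here with pvesh (run as root on the host):

    import json
    import subprocess

    # /storage lists all storage definitions of the cluster together
    # with their type (dir, lvmthin, nfs, cephfs, ...).
    out = subprocess.run(
        ["pvesh", "get", "/storage", "--output-format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for store in json.loads(out):
        print(store["storage"], store["type"])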

OpenVZ

Up to version 3.4, Proxmox VE also integrated OpenVZ, a Linux-based server virtualization at the operating-system level. In contrast to KVM, a common kernel is used for all virtual machines (instances), which are isolated from one another in so-called containers. The advantage is the leanness of this technique, i.e. memory consumption and processor usage are lower than with KVM.

With OpenVZ, however, it is only possible to run Linux VMs with a kernel adapted on the host side; guests cannot make their own changes to the kernel or load their own kernel modules, since the kernel is shared with the host system. To keep this technology user-friendly, Proxmox VE/OpenVZ provides templates that automatically make all necessary changes to the guest system. In return, all instances benefit immediately from an update of the central kernel, and the administrative effort can be lower.

Storage systems

PVE supports the attachment and setup of the virtual drives of the guest systems as well as the snapshots and backups required for operation. Both local storage media built into the host system, such as hard disk drives (HDD) or solid-state drives (SSD), and network-based storage systems, such as storage area networks (SAN) or network-attached storage (NAS), can be used. The connection can be file-based, for example via the Network File System (NFS), or block-based via interfaces such as iSCSI.

Local storage media are generally managed with the Logical Volume Manager (LVM); on top of this, the virtual disk images are stored either in separate, thinly provisioned logical volumes (thin LVM) or in a file system, for example in qcow2, the file format used by QEMU as standard. The latter also allows drive images to be exchanged between different virtualization environments at file level. In addition to ext4, the memory-hungry ZFS common in the server area is also offered as a file system on the host system.
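
As a sketch of this image handling, QEMU's qemu-img tool can create a thinly allocated qcow2 image whose file grows with use rather than occupying the full virtual size immediately; the path shown is the typical location of a PVE directory storage, and the VM ID 100 is a placeholder:

    import subprocess

    # Create a 32 GiB qcow2 disk image; qcow2 allocates space lazily,
    # so the new file starts out small.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "/var/lib/vz/images/100/vm-100-disk-0.qcow2", "32G"],
        check=True,
    )

    # Show format, virtual size and actual disk usage of the image.
    subprocess.run(
        ["qemu-img", "info", "/var/lib/vz/images/100/vm-100-disk-0.qcow2"],
        check=True,
    )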

Miscellaneous

PVE offers an iptables-based firewall for the host system as well as individually for each guest system running on it. The network rules can be configured via the web interface.
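
A sketch of adding such a rule for a single guest through the REST API; the endpoint and field names follow the documented PVE firewall API, and the node name, VM ID and credentials are placeholders:

    import requests

    HOST = "https://pve.example.com:8006"       # placeholder

    s = requests.Session()
    s.verify = False
    auth = s.post(f"{HOST}/api2/json/access/ticket",   # log in as in the sketches above
                  data={"username": "root@pam", "password": "secret"}).json()["data"]
    s.cookies.set("PVEAuthCookie", auth["ticket"])
    s.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

    # Allow incoming SSH to guest 100 on node pve1; the same rule format
    # also exists at cluster and host level.
    s.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/firewall/rules", data={
        "type": "in",
        "action": "ACCEPT",
        "proto": "tcp",
        "dport": "22",
        "enable": 1,
    }).raise_for_status()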

The interface to the console of the guest systems, i.e. the output of a graphical user interface and the input of keyboard and mouse in the manner of a virtual KVM switch, is provided either via the SPICE protocol or via an HTML5-based variant of Virtual Network Computing (noVNC). So that not every guest system has to occupy its own port for the console interface on the host system, encrypted access with authentication takes place via a proxy.
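
The following sketch shows how such a proxied console session is prepared: the vncproxy call returns a one-time ticket together with the port on which the host-side proxy expects the connection; names and credentials are again placeholders:

    import requests

    HOST = "https://pve.example.com:8006"       # placeholder

    s = requests.Session()
    s.verify = False
    auth = s.post(f"{HOST}/api2/json/access/ticket",   # log in as in the sketches above
                  data={"username": "root@pam", "password": "secret"}).json()["data"]
    s.cookies.set("PVEAuthCookie", auth["ticket"])
    s.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

    # Request a console proxy for guest 100: the answer contains a
    # one-time ticket and the port of the proxy, not of the guest itself.
    proxy = s.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/vncproxy").json()["data"]
    print("connect noVNC to port", proxy["port"], "with ticket", proxy["ticket"])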

From version 5.3 onwards, individual hardware components of the host system, such as Peripheral Component Interconnect (PCI) plug-in cards or graphics cards (GPU), can be passed through directly to specific guest systems for exclusive access via the web interface.
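
At the configuration level this corresponds to setting the guest's hostpci0 option to the PCI address of the device, as the following sketch does through the REST API; the address 01:00.0, node name, VM ID and credentials are placeholders, and the host needs IOMMU support (Intel VT-d or AMD-Vi) enabled:

    import requests

    HOST = "https://pve.example.com:8006"       # placeholder

    s = requests.Session()
    s.verify = False
    auth = s.post(f"{HOST}/api2/json/access/ticket",   # log in as in the sketches above
                  data={"username": "root@pam", "password": "secret"}).json()["data"]
    s.cookies.set("PVEAuthCookie", auth["ticket"])
    s.headers["CSRFPreventionToken"] = auth["CSRFPreventionToken"]

    # Hand the PCI device at address 01:00.0 (e.g. a GPU) exclusively
    # to guest 100 on node pve1.
    s.put(f"{HOST}/api2/json/nodes/pve1/qemu/100/config",
          data={"hostpci0": "01:00.0"}).raise_for_status()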
