Features Proxmox VE
Powerful and Lightweight
Proxmox VE is open source software, optimized for performance and usability. For maximum flexibility, we implemented two virtualization technologies - KVM and containers.
Proxmox VE uses a Linux kernel and is based on the Debian GNU/Linux distribution. The source code of Proxmox VE is released under the GNU Affero General Public License, version 3 (AGPL v3). This means that you are free to inspect the source code at any time or contribute to the project yourself.
Using open source software guarantees full access to all functionality - as well as a high level of security and reliability. Everybody is encouraged to contribute while Proxmox ensures the product always meets professional quality criteria.
Kernel-based Virtual Machine (KVM)
The open source hypervisor KVM is a full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It is a kernel module included in mainline Linux.
With KVM you can run multiple virtual machines from unmodified Linux or Windows images. It enables users to be agile by providing the flexibility and scalability that fit their specific demands. Proxmox Virtual Environment has used KVM virtualization since the beginning of the project in 2008, starting with version 0.9beta2.
Read more about KVM
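KVM depends on the hardware virtualization extensions mentioned above. As a minimal sketch (the helper name and sample strings are illustrative, not part of any Proxmox tool), a host can check for them by looking for the `vmx` (Intel VT) or `svm` (AMD-V) flags in /proc/cpuinfo:

```python
# Sketch: detect whether a CPU advertises the hardware virtualization
# extensions KVM relies on (Intel VT -> "vmx", AMD-V -> "svm").
# The function name and sample string are illustrative only.

def has_virt_extensions(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists vmx or svm."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

# On a real host you would read /proc/cpuinfo:
#   with open("/proc/cpuinfo") as f:
#       print(has_virt_extensions(f.read()))

sample = "flags\t\t: fpu vme de pse msr vmx sse2"
print(has_virt_extensions(sample))  # True
```

If neither flag is present (or virtualization is disabled in the BIOS), KVM falls back to being unavailable and full virtualization cannot be used on that host.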
Container-based Virtualization (OpenVZ)
OpenVZ is container-based virtualization for Linux. OpenVZ creates multiple secure, isolated Linux containers (also known as VEs or VPSs) on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Proxmox VE has used OpenVZ virtualization since the beginning of the project in 2008.
Read more about OpenVZ
Live Migration
Move your running virtual machines and containers from one physical host to another without any downtime.
Learn more: CT Live Migration Video Tutorial
Open Virtualization Alliance
Proxmox Server Solutions GmbH is an active member of the Open Virtualization Alliance, an industry consortium fostering the adoption of KVM as an enterprise-ready open virtualization solution.
Unique Multi-master Design
The clean Web-GUI gives you an overview of all your KVM guests and Linux containers and even of your whole cluster. There is no need for a separate and complex management server.
Proxmox Cluster File System
Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines by configuring them only once. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data inside a persistent database on disk; nonetheless, a copy of the data resides in RAM, which limits the maximum storage size to 30 MB - more than enough for thousands of VMs.
Proxmox VE is the only virtualization platform using this unique cluster file system.
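To make the design above concrete, here is a deliberately simplified conceptual model of a size-capped, in-memory configuration store. It is not the real pmxcfs code; class and path names are invented for illustration, and the 30 MB default mirrors the cap described above:

```python
# Conceptual model of a size-capped, in-memory configuration store,
# loosely inspired by pmxcfs (all data in RAM, persisted to a database
# on disk, ~30 MB cap). Illustrative only - not the real implementation.

class ConfigStore:
    def __init__(self, max_bytes: int = 30 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.files = {}          # path -> file contents (bytes)

    def total_size(self) -> int:
        return sum(len(v) for v in self.files.values())

    def write(self, path: str, data: bytes) -> None:
        # Refuse writes that would push the store past its cap.
        new_total = self.total_size() - len(self.files.get(path, b"")) + len(data)
        if new_total > self.max_bytes:
            raise OSError("store full: would exceed size cap")
        self.files[path] = data  # in a real cluster this write would be
                                 # replicated to all nodes via corosync

store = ConfigStore()
store.write("/qemu-server/100.conf", b"memory: 2048\n")
print(store.total_size())  # 13
```

The key property this models is that every node holds the full configuration in memory, so reads are fast and the cap keeps the replicated dataset small.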
Rich App Management Tool
RESTful web API
Proxmox VE uses a REST-like API. We chose JSON as the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration for third-party management tools such as custom hosting environments.
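API responses wrap their results in a JSON "data" envelope, which keeps client code simple. The snippet below parses a response shaped like what a node listing might return; the node names and values are invented for illustration, and a real client would first authenticate against the API before issuing requests:

```python
import json

# Hypothetical response body in the shape returned by the Proxmox VE
# REST-like API: results are wrapped in a "data" envelope. The node
# names and values here are made up for illustration.
response_body = """
{
  "data": [
    {"node": "pve1", "status": "online", "uptime": 123456},
    {"node": "pve2", "status": "online", "uptime": 654321}
  ]
}
"""

nodes = json.loads(response_body)["data"]
online = [n["node"] for n in nodes if n["status"] == "online"]
print(online)  # ['pve1', 'pve2']
```

Because the schema is formally defined, third-party tools can validate both requests and responses against it instead of guessing at field names.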
Role-based Administration
You can define granular access to all objects (such as VMs, storages, nodes, etc.) by using the role-based user and permission management. This allows you to define privileges and helps you control access to objects. The concept is also known as access control lists: each permission specifies a subject (a user or group) and a role (a set of privileges) on a specific path.
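The path/subject/role concept can be sketched as follows. This is a deliberately simplified model of the idea, not the actual implementation: it ignores group membership and propagation flags, and the roles, privileges, and user names are examples chosen for illustration:

```python
# Simplified model of path-based access control: a permission binds a
# subject (user or group) and a role (a set of privileges) to a path.
# Roles, privileges, and users below are illustrative examples.

ROLES = {
    "VMAdmin": {"VM.Allocate", "VM.Config", "VM.PowerMgmt"},
    "Auditor": {"VM.Audit", "Datastore.Audit"},
}

# Each ACL entry: (path, subject, role)
ACLS = [
    ("/vms", "alice@pve", "VMAdmin"),
    ("/vms/100", "bob@pve", "Auditor"),
]

def has_privilege(user: str, path: str, privilege: str) -> bool:
    """Check the privilege on the path itself or any of its ancestors."""
    while True:
        for acl_path, subject, role in ACLS:
            if acl_path == path and subject == user and privilege in ROLES[role]:
                return True
        if path in ("/", ""):
            return False
        path = path.rsplit("/", 1)[0] or "/"  # walk up one path level

print(has_privilege("alice@pve", "/vms/100", "VM.PowerMgmt"))  # True
print(has_privilege("bob@pve", "/vms", "VM.Audit"))            # False
```

Note how a permission granted on /vms applies to /vms/100 as well, while bob's auditor role on /vms/100 grants nothing higher up the tree.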
Proxmox VE supports multiple authentication sources like Microsoft Active Directory, LDAP, Linux PAM standard authentication or the built-in Proxmox VE authentication server.
50+ Virtual Appliances for Proxmox VE
Via the Proxmox VE Central Web-based Management you can download and install over 50 virtual appliances to run as an OpenVZ container. A virtual appliance is a fully pre-installed and pre-configured application and operating system environment that runs on any standard server in a self-contained, isolated environment known as a virtual machine.
Our technology partner TurnKey Linux offers a huge range of ready-to-run appliances.
Backup and Restore
The integrated backup tool (vzdump) creates consistent snapshots of running OpenVZ VEs and KVM guests. It basically creates an archive of the VM or CT data, which also includes the VM/CT configuration files.
KVM live backup works for all storage types including VM images on NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is optimized for storing VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
Proxmox VE High Availability Cluster
The Proxmox VE HA Cluster enables the definition of highly available virtual servers. If a virtual machine or container (VM or CT) is configured as HA and the physical host fails, the VM is automatically restarted on one of the remaining Proxmox VE Cluster nodes.
The Proxmox VE HA Cluster is based on proven Linux HA technologies, providing stable and reliable HA service.
Flexible Networking
Proxmox VE uses a bridged networking model. All VMs can share one bridge as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges are attached to physical network cards, which are assigned a TCP/IP configuration.
For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the Linux network stack.
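On a Debian-based host, a bridged setup of this kind might look like the following fragment of /etc/network/interfaces. The device names and addresses are placeholders chosen for illustration, not a recommendation for any particular host:

```
# Example /etc/network/interfaces fragment for a bridged host.
# Device names and addresses are placeholders.

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bridge_ports eth0      # guests share this bridge like a switch
    bridge_stp off
    bridge_fd 0
```

VLAN and bonding setups extend the same file: a bond device can take the place of eth0 as the bridge port, and VLAN sub-interfaces can each carry their own bridge.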
Flexible Storage
The Proxmox VE storage model is very flexible. Virtual machine images can either be stored on one or several local storages or on shared storage like NFS or SAN. There are no limits; you may configure as many storage definitions as you like.
The benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime.
You can use all storage technologies available for Debian Linux.
You can add the following storage types via the web interface.
Network storage types supported:
- LVM Group (network backing with iSCSI targets)
- iSCSI target
- NFS Share
- Ceph RBD
- Direct to iSCSI LUN
Local storage types supported:
- LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
- Directory (storage on existing filesystem)