Proxmox VE Features
Proxmox VE is a powerful open-source server virtualization platform for managing two virtualization technologies - KVM (Kernel-based Virtual Machine) for virtual machines and LXC for containers - with a single web-based interface. It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.
View the complete feature list
For upcoming features and release notes, take a look at the Proxmox VE Roadmap.
Easily build your software-defined data center
Server virtualization with support for KVM and LXC
Proxmox VE is based on Debian GNU/Linux and uses a customized Linux Kernel. The Proxmox VE source code is free, released under the GNU Affero General Public License, v3 (GNU AGPL, v3). This means that you are free to use the software, inspect the source code at any time or contribute to the project yourself.
Using open-source software guarantees full access to all functionalities at any time as well as a high level of reliability and security. We encourage everybody to contribute to the Proxmox VE project while Proxmox, the company behind it, ensures that the product meets consistent and enterprise-class quality criteria.
Kernel-based Virtual Machine (KVM)
KVM is the industry-leading Linux technology for full virtualization. It is a kernel module merged into the mainline Linux kernel, and it runs with near-native performance on all x86 hardware with virtualization support - either Intel VT-x or AMD-V.
With KVM you can run both Windows and Linux in virtual machines (VMs), where each VM has private virtualized hardware: a network card, disk, graphics adapter, and so on. Running several applications in VMs on a single piece of hardware enables you to save power and reduce costs, while at the same time giving you the flexibility to build an agile and scalable software-defined data center that meets your business demands.
Proxmox VE has included KVM support since the beginning of the project in 2008 (that is, since version 0.9beta2).
Container-based virtualization is a lightweight alternative to full machine virtualization, as it comes with considerably lower overhead.
Linux Containers (LXC)
LXC is an operating-system-level virtualization environment for running multiple, isolated Linux systems on a single Linux control host. LXC works as a userspace interface for the Linux kernel containment features. Users can easily create and manage system or application containers with a powerful API and simple tools.
With the integrated live/online migration feature, you can move running virtual machines from one Proxmox VE cluster node to another without any downtime or noticeable effect for the end user.
Administrators can initiate this process from a script or from the web interface. Live migration lets you take a host offline for maintenance or upgrades without interrupting the guests running on it.
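As a sketch, a running guest could be moved to another cluster node from the shell (the guest IDs and the node name below are placeholders for this example):

```shell
# Live-migrate VM 100 to the cluster node "node2" while it keeps running.
qm migrate 100 node2 --online

# Containers are migrated with the pct tool; a restart migration
# briefly stops the container and starts it again on the target node.
pct migrate 101 node2 --restart
```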
While many people start with a single node, Proxmox VE can scale out to a large set of clustered nodes. The cluster stack is fully integrated and ships with the default installation. To manage all tasks of your virtual data center, you can use the central web-based management interface.
Web-based management interface
Unique multi-master design
The integrated web-based management interface gives you a clean overview of all your KVM guests and Linux containers, and even of your whole cluster. You can easily manage your VMs and containers, storage, or cluster from the GUI. There is no need to install a separate, complex, and pricey management server.
Proxmox cluster file system (pmxcfs)
Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system developed by Proxmox.
The pmxcfs stores your configuration files. Using Corosync, these files are replicated in real time to all cluster nodes. The file system keeps all data in a persistent database on disk; nonetheless, a copy of the data resides in RAM. The maximum storage size is currently 30 MB - more than enough to store the configuration of several thousand VMs.
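In practice, pmxcfs is mounted at /etc/pve on every cluster node, so a guest or storage configuration edited on one node is visible on all of them. A brief sketch of what lives there:

```shell
# pmxcfs is mounted at /etc/pve; files stored here are replicated cluster-wide.
ls /etc/pve/qemu-server/   # VM configuration files, e.g. 100.conf
ls /etc/pve/lxc/           # container configuration files
cat /etc/pve/storage.cfg   # cluster-wide storage definitions
```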
Proxmox VE is the only virtualization platform using this unique cluster file system pmxcfs.
Command line interface (CLI)
For advanced users who are used to the comfort of the Unix shell or Windows PowerShell, Proxmox VE provides a command line interface to manage all the components of your virtual environment. This command line interface has intelligent tab completion and full documentation in the form of UNIX man pages.
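A few illustrative commands (the guest IDs are placeholders for this example):

```shell
qm list        # list all VMs on this node
qm start 100   # start VM 100
pct enter 101  # open a shell inside container 101
pvesm status   # show the status of all configured storages
man qm         # the full qm reference as a UNIX man page
```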
Proxmox VE provides a RESTful API. JSON is the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration with third-party management tools, such as custom hosting environments.
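The same API tree is also exposed on the command line via the pvesh tool, which is driven by the JSON Schema definitions. A sketch (the node name pve1 is a placeholder):

```shell
# List the QEMU guests on a node, returned as JSON:
pvesh get /nodes/pve1/qemu --output-format json

# The JSON Schema behind each path also powers the built-in usage help:
pvesh usage /nodes/pve1/qemu
```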
You can define granular access to all objects (such as VMs, storages, and nodes) by using the role-based user and permission management. This lets you define privileges and helps you control access to objects. The concept is also known as access control lists: each permission specifies a subject (a user or group) and a role (a set of privileges) on a specific path.
Proxmox VE supports multiple authentication sources, such as Microsoft Active Directory, LDAP, Linux PAM standard authentication, or the built-in Proxmox VE authentication server.
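As a hypothetical example of the ACL model, the pveum tool can create a user in the built-in PVE realm and grant it the predefined PVEVMAdmin role on a single VM path (the user name and VM ID are placeholders):

```shell
# Create a user in the built-in Proxmox VE authentication realm:
pveum user add alice@pve

# Grant the PVEVMAdmin role (a set of VM management privileges)
# on the path of VM 100 only:
pveum acl modify /vms/100 --users alice@pve --roles PVEVMAdmin
```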
Proxmox VE High Availability Cluster
A multi-node Proxmox VE HA cluster enables the definition of highly available virtual servers. The Proxmox VE HA cluster is based on proven Linux HA technologies, providing stable and reliable HA service.
Proxmox VE HA Manager
The resource manager, called Proxmox VE HA Manager, monitors all VMs and containers in the whole cluster and automatically comes into action if one of them fails. The Proxmox VE HA Manager works out of the box - zero configuration is needed. Additionally, the watchdog-based fencing dramatically simplifies deployment.
All settings of the Proxmox VE HA cluster can be easily configured in the integrated web-based user interface.
Proxmox VE HA Simulator
Proxmox VE includes an HA Simulator. It allows you to test the behaviour of a real-world three-node cluster with six VMs.
The Proxmox HA Simulator runs out-of-the-box and helps you to learn and understand the Proxmox VE HA functionality.
Read more about the Proxmox VE High Availability.
Proxmox VE uses a bridged networking model. Each host can have up to 4094 bridges.
Bridges are like physical network switches implemented in software on the Proxmox VE host. All VMs can share one bridge as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges are attached to physical network cards assigned a TCP/IP configuration.
For further flexibility, VLANs (IEEE 802.1Q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the Linux network stack. Read more on the Proxmox VE Network Configuration.
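As a sketch, a VLAN-aware bridge attached to a physical NIC might be defined in /etc/network/interfaces like this (the interface names and addresses are examples, not defaults):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```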
Flexible Software-Defined Storage
The Proxmox VE storage model is very flexible. Virtual machine images can either be stored on one or several local storages or on shared storage like NFS and SAN.
There are no limits; you may configure as many storage definitions as you like. You can use all storage technologies available for Debian GNU/Linux.
The benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime.
You can add the following storage types in the Proxmox VE web interface:
Network storage types supported
- LVM Group (network backing with iSCSI targets)
- iSCSI target
- NFS Share
- Ceph RBD
- Direct to iSCSI LUN
Local storage types supported
- LVM Group
- Directory (storage on existing filesystem)
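Storage definitions end up in the cluster-wide file /etc/pve/storage.cfg. A sketch with example entries (the storage IDs, server address, and paths are placeholders):

```
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

nfs: shared-nfs
    server 192.0.2.50
    export /srv/proxmox
    content images,rootdir
```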
Read more on the Proxmox VE Storage Model
Backup and Restore
Backups are a basic requirement for any sensible IT environment. The Proxmox VE platform provides a fully integrated solution, using the capabilities of each storage and each guest system type.
Proxmox VE backups are always full backups, containing the configuration of VMs and containers as well as all data. Backups can be easily started from the GUI or with the vzdump backup tool (via the command line).
The integrated backup tool (vzdump) creates consistent snapshots of running containers and KVM guests. It creates an archive of the VM or container data that also includes the configuration files.
Backup jobs can be scheduled so that they are executed automatically on specific days and times, for selectable nodes and guest systems.
KVM live backup works for all storage types, including VM images on NFS, iSCSI LUN, Ceph RBD, or Sheepdog. The Proxmox VE backup format is optimized for storing VM backups quickly and effectively (sparse files, out-of-order data, minimized I/O).
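Illustrative vzdump invocations (the guest IDs, storage ID, and archive name are placeholders for this example):

```shell
# Snapshot-mode backup of VM 100 to the storage "local", zstd-compressed:
vzdump 100 --mode snapshot --storage local --compress zstd

# Restore a VM backup archive to a new guest ID with qmrestore:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst 200
```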
Read how to configure Proxmox VE Backup
Proxmox VE Firewall
The built-in Proxmox VE Firewall provides an easy way to protect your IT infrastructure. The firewall is completely customizable, allowing complex configurations via the GUI or CLI.
You can set up firewall rules for all hosts inside a cluster, or define rules for virtual machines and containers only. Features like firewall macros, security groups, IP sets, and aliases help make this task easier.
While all configuration is stored on the cluster file system, the iptables-based firewall runs on each cluster node, and thus provides full isolation between virtual machines. The distributed nature of this system also provides much higher bandwidth than a central firewall solution.
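As a sketch, a per-VM rule set lives on the cluster file system in /etc/pve/firewall/ and might look like the following (the VM ID and source network are examples):

```
# /etc/pve/firewall/100.fw
[OPTIONS]
enable: 1

[RULES]
IN SSH(ACCEPT) -source 192.0.2.0/24   # allow SSH from the management net
IN ACCEPT -p tcp -dport 443           # allow HTTPS from anywhere
```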
IPv4 and IPv6
The firewall fully supports IPv4 and IPv6. IPv6 support is fully transparent, and traffic for both protocols is filtered by default, so there is no need to maintain a separate set of rules for IPv6.
Read more about the Proxmox VE Firewall