Proxmox VE Features
Proxmox VE is a powerful and lightweight open source server virtualization software, optimized for performance and usability. For maximum flexibility, Proxmox VE supports two virtualization technologies - Kernel-based Virtual Machine (KVM) and container-based virtualization with Linux Containers (LXC).
Proxmox VE Overview
Open Source Server Virtualization with KVM and LXC
Proxmox VE is based on the Debian GNU/Linux distribution and uses a specially customized Linux Kernel. The source code of Proxmox VE is released under the GNU Affero General Public License, version 3 (GNU AGPL, v3). This means that you are free to inspect the source code at any time or contribute to the project yourself.
Using open source software licensed under the GNU AGPL, v3 guarantees full access to all functionalities as well as a high level of reliability and security. Everybody is encouraged to contribute to the Proxmox VE project while Proxmox, the company behind it, ensures that the product meets consistent and professional quality criteria.
Kernel-based Virtual Machine (KVM)
The full virtualization solution Kernel-based Virtual Machine (KVM) is the leading Linux virtualization technology. KVM is a kernel module merged into the mainline Linux kernel and runs with near-native performance on all x86 hardware with virtualization support – either Intel VT-x or AMD-V.
You can use KVM to run both Windows and Linux in virtual machines (VMs), where each VM has private virtualized hardware: a network card, disk, graphics adapter, etc. Running multiple applications in VMs on a single server saves cost and lets you build an agile, flexible virtualization environment that meets your business demands.
Proxmox VE includes KVM virtualization support since the beginning of our journey back in 2008 (since 0.9beta2).
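On a Proxmox VE node, a KVM guest can be created and started with the qm tool. A minimal sketch (the VM ID, the storage name local-lvm, and the bridge vmbr0 are examples to adjust to your setup):

```shell
# Create a KVM VM with ID 100: 4 GiB RAM, 2 cores, a VirtIO NIC on
# bridge vmbr0 and a 32 GiB disk on the "local-lvm" storage.
qm create 100 --name demo-vm --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32

qm start 100   # boot the new VM
```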
Linux Containers (LXC)
Containers are a lightweight alternative to full machine virtualization, offering lower overhead.
LXC is an operating-system-level virtualization environment for running multiple isolated Linux systems on a single Linux control host. LXC works as a userspace interface for the Linux kernel containment features. Linux users can easily create and manage system or application containers with a powerful API and simple tools.
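A container is created from a downloaded template with the pct tool. A sketch, assuming a Debian template has already been downloaded to the local storage (the template file name is an example):

```shell
# Create container 200 from a Debian template, give it 512 MiB RAM
# and a DHCP-configured NIC on bridge vmbr0, then start it.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname ct1 --memory 512 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 200
```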
Live/Online Migration
Move your running virtual machines from one physical host to another without any downtime.
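On the command line, a live migration is a single qm invocation; a sketch (the target node name is an example):

```shell
# Live-migrate the running VM 100 to cluster node "node2"
# without shutting it down.
qm migrate 100 node2 --online
```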
Central Management
While many people start with a single node, Proxmox VE can scale out to a large set of clustered nodes. The cluster stack is fully integrated and ships with the default installation.
Unique Multi-master Design
The integrated web-based management interface gives you a clean overview of all your KVM guests and Linux containers, and even of your whole cluster. You can easily manage your VMs and containers, storage, or cluster from the GUI. There is no need to install a separate, complex, and pricey management server.
Proxmox Cluster File System (pmxcfs)
Proxmox VE uses the unique Proxmox Cluster file system (pmxcfs), a database-driven file system for storing configuration files. This enables you to store the configuration of thousands of virtual machines. By using corosync, these files are replicated in real time on all cluster nodes. The file system stores all data inside a persistent database on disk; nonetheless, a copy of the data resides in RAM, which limits the maximum storage size to 30 MB - more than enough for thousands of VMs.
Proxmox VE is the only virtualization platform using this unique cluster file system.
Command Line Interface (CLI)
For advanced users who are used to the comfort of the Unix shell or Windows Powershell, Proxmox VE provides a command line interface to manage all the components of your virtual environment. This command line interface has intelligent tab completion and full documentation in the form of UNIX man pages.
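A few typical commands, as they would be run on a Proxmox VE node:

```shell
qm list     # overview of all KVM guests on this node
pct list    # overview of all containers on this node
man qm      # full reference documentation for the qm tool
```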
REST web API
Proxmox VE uses a RESTful API. We chose JSON as the primary data format, and the whole API is formally defined using JSON Schema. This enables fast and easy integration for third-party management tools like custom hosting environments.
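The API is served over HTTPS on port 8006 under the /api2/json path; on the node itself, the pvesh tool wraps the same API on the command line. A sketch:

```shell
# Query all cluster resources (VMs, containers, storage, nodes)
# through the REST API via the pvesh shell.
pvesh get /cluster/resources --output-format json
```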
Role-based Administration
You can define granular access to all objects (like VMs, storage, nodes, etc.) by using the role-based user and permission management. This allows you to define privileges and helps you control access to objects. This concept is also known as access control lists: each permission specifies a subject (a user or group) and a role (a set of privileges) on a specific path.
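Conceptually, a permission granted on a path also applies to every object below that path in the tree. A minimal sketch of that rule in shell (illustrative only, not Proxmox code; the helper name is made up):

```shell
# acl_applies ACL_PATH OBJECT_PATH
# Prints "yes" if a permission on ACL_PATH covers OBJECT_PATH,
# i.e. OBJECT_PATH equals ACL_PATH or lies below it in the tree.
acl_applies() {
  case "$2" in
    "$1"|"$1"/*) echo yes ;;
    *)           echo no ;;
  esac
}

acl_applies /vms /vms/100      # yes: /vms/100 is below /vms
acl_applies /vms/100 /vms/101  # no: a sibling path is not covered
```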
Proxmox VE supports multiple authentication sources, such as Microsoft Active Directory, LDAP, Linux PAM standard authentication, or the built-in Proxmox VE authentication server.
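User and permission management is also available on the command line via the pveum tool. A sketch using the built-in PVEVMUser role (the user name is an example, and the exact subcommand syntax may vary slightly between releases):

```shell
# Create a user in the built-in Proxmox VE realm and allow them to
# use (but not reconfigure) VM 100 via the PVEVMUser role.
pveum user add alice@pve
pveum acl modify /vms/100 --users alice@pve --roles PVEVMUser

pveum user permissions alice@pve   # show the effective permissions
```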
Backup and Restore
Backups are a basic requirement for any sensible IT deployment. Proxmox VE provides a fully integrated solution, using the capabilities of each storage and each guest system type.
Proxmox VE backups are always full backups, containing the VM/CT configuration and all data. Backups can be started via the GUI or with the vzdump command-line tool, which creates consistent snapshots of running containers and KVM guests: it archives the VM or CT data together with the configuration files.
Backup jobs can be scheduled so that they are executed automatically on specific days and times, for selectable nodes and guest systems.
KVM live backup works for all storage types including VM images on NFS, iSCSI LUN, Ceph RBD or Sheepdog. The Proxmox VE backup format is optimized for storing VM backups fast and effectively (sparse files, out of order data, minimized I/O).
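From the command line, the same integrated tool can be driven directly. A sketch (the storage name and VM IDs are examples):

```shell
# Snapshot-mode backup of guest 100 to the "local" storage,
# compressed with zstd.
vzdump 100 --mode snapshot --storage local --compress zstd

# Restore the resulting archive as a new VM with ID 101.
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 101
```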
Proxmox VE High Availability Cluster
A multi-node Proxmox VE HA Cluster enables the definition of highly available virtual servers. The Proxmox VE HA Cluster is based on proven Linux HA technologies, providing stable and reliable HA service.
Proxmox VE HA Manager
Once deployed, the resource manager, called Proxmox VE HA Manager, monitors all virtual machines and containers in the whole cluster and automatically takes action if one of them fails. The Proxmox VE HA Manager requires zero configuration; it works out of the box. Additionally, watchdog-based fencing dramatically simplifies deployments.
For easy handling the whole Proxmox VE HA Cluster settings can be configured via the integrated web-based GUI.
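Resources are placed under HA control with the ha-manager tool; a sketch for a single VM:

```shell
# Put VM 100 under HA management: keep it running, and try up to
# two restarts on its current node before relocating it.
ha-manager add vm:100 --state started --max_restart 2

ha-manager status   # current HA manager and resource states
```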
Proxmox VE HA Simulator
The integrated Proxmox VE HA Simulator enables you to learn all HA functionality and test your setup prior to going into production. Read more about the Proxmox VE HA Cluster: http://pve.proxmox.com/wiki/High_Availability and http://pve.proxmox.com/wiki/High_Availability_Cluster_4.x
Proxmox VE Firewall
The built-in Proxmox VE Firewall provides an easy way to protect your IT infrastructure. The firewall is completely customizable, allowing complex configurations via GUI or CLI. You can set up firewall rules for all hosts inside a cluster, or define rules for virtual machines and containers only. Features like firewall macros, security groups, IP sets, and aliases help to make that task easier.
While all configuration is stored on the cluster file system, the iptables-based firewall runs on each cluster node, and thus provides full isolation between virtual machines. The distributed nature of this system also provides much higher bandwidth than a central firewall solution.
IPv4 and IPv6
The firewall has full support for IPv4 and IPv6. IPv6 support is fully transparent, and we filter traffic for both protocols by default. So there is no need to maintain a different set of rules for IPv6.
Read more: http://pve.proxmox.com/wiki/Firewall
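A guest firewall definition lives in /etc/pve/firewall/<vmid>.fw on the cluster file system. An illustrative fragment (the source network is an example):

```
[OPTIONS]
enable: 1

[RULES]
# Allow SSH from the management network, using the SSH macro
IN SSH(ACCEPT) -source 192.168.1.0/24
# Drop all other inbound traffic
IN DROP
```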
Networking
Proxmox VE uses a bridged networking model. Each host can have up to 4094 bridges. Bridges are like physical network switches, implemented in software on the Proxmox VE host. All VMs can share one bridge, as if virtual network cables from each guest were all plugged into the same switch. For connecting VMs to the outside world, bridges are attached to physical network cards and assigned a TCP/IP configuration.
For further flexibility, VLANs (IEEE 802.1q) and network bonding/aggregation are possible. In this way it is possible to build complex, flexible virtual networks for the Proxmox VE hosts, leveraging the full power of the Linux network stack.
Read more on the Proxmox VE network model: http://pve.proxmox.com/wiki/Network_Model
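On the host, a bridge is plain Debian network configuration in /etc/network/interfaces. An illustrative fragment (the addresses and NIC name are examples):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```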
Flexible Storage
The Proxmox VE storage model is very flexible. Virtual machine images can be stored on one or several local storages, or on shared storage like NFS or SAN. There are no limits; you may configure as many storage definitions as you like. You can use all storage technologies available for Debian Linux.
The benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime.
Via the web interface you can add the following storage types:
Network storage types supported
- LVM Group (network backing with iSCSI targets)
- iSCSI target
- NFS Share
- Ceph RBD
- Direct to iSCSI LUN
Local storage types supported
- LVM Group (local backing devices like block devices, FC devices, DRBD, etc.)
- Directory (storage on existing filesystem)
Read more on Proxmox VE storage model: https://pve.proxmox.com/wiki/Storage
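Storage definitions can also be added from the command line with the pvesm tool. A sketch for an NFS share (the storage name, server address, and export path are examples):

```shell
# Register an NFS export as storage "backup-nfs", restricted to
# holding backup archives.
pvesm add nfs backup-nfs --server 192.168.1.50 \
  --export /export/backup --content backup

pvesm status   # list all configured storages and their state
```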