Proxmox VE Ceph Benchmark 2020/09

  • Last update: 14 October 2020
  • File size: 216.10 KB
  • Version: 202009-rev2

Hyper-converged infrastructure with Proxmox VE virtualization platform and integrated Ceph Storage

To optimize performance in hyper-converged deployments with Proxmox VE and Ceph storage, the right hardware setup is essential. This benchmark presents possible setups and their performance outcomes, with the intention of helping Proxmox users make better decisions.

Summary

Hyper-converged setups can be deployed with Proxmox VE, using a cluster of at least three nodes, enterprise-class NVMe SSDs, and a 100 gigabit network (a 10 gigabit network is the absolute minimum requirement and already a bottleneck). As long as CPU power and RAM are sufficient, a three-node cluster can reach reasonably good performance levels.
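The network bottleneck claim can be sanity-checked with simple arithmetic: a 10 gigabit link tops out near 1.25 GB/s, while a single enterprise NVMe SSD can sustain several GB/s of sequential reads. A minimal sketch (the NVMe throughput figure is an illustrative assumption; real drives vary by model):

```python
# Rough bandwidth comparison: why a 10 Gbit network bottlenecks NVMe storage.
# The NVMe figure below is an assumed value for a modern enterprise drive.

def gbit_to_gbyte_per_s(gbit: float) -> float:
    """Convert link speed in gigabits/s to gigabytes/s."""
    return gbit / 8

nvme_read_gbs = 3.0                  # assumed sequential read of one NVMe SSD, GB/s
net_10g = gbit_to_gbyte_per_s(10)    # 1.25 GB/s
net_100g = gbit_to_gbyte_per_s(100)  # 12.5 GB/s

print(f"10 GbE:  {net_10g:.2f} GB/s -> bottleneck: {nvme_read_gbs > net_10g}")
print(f"100 GbE: {net_100g:.2f} GB/s -> bottleneck: {nvme_read_gbs > net_100g}")
```

Even one such drive per node would saturate a 10 gigabit link, before accounting for Ceph replication traffic on the same network.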

  • Since Ceph uses a replication factor of three by default, data remains available even after losing a node, providing a highly available, distributed storage solution that is fully software-defined and 100 % open-source.
  • Although it is possible to run virtual machines/containers and Ceph on the same node, a separation makes sense for larger workloads.
  • To accommodate growing workloads, a Proxmox VE and Ceph server cluster can be extended with additional nodes on the fly, without any downtime.
  • The Proxmox VE virtualization platform has included integrated Ceph storage since the release of Proxmox VE 3.2 in early 2014. Since then, it has been used on thousands of servers worldwide, providing us with an enormous amount of feedback and experience.
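The effect of the default replication factor of three on usable capacity and fault tolerance can be sketched as follows (a simplified model; real Ceph capacity planning must also leave headroom for near-full ratios and rebalancing after a failure):

```python
# Simplified capacity model for a Ceph pool with size=3 (default replication).
# Each object is stored three times, so usable capacity is raw capacity / 3.

def usable_capacity_tb(nodes: int, raw_per_node_tb: float, replicas: int = 3) -> float:
    """Usable capacity of a replicated pool, ignoring overhead and full ratios."""
    return nodes * raw_per_node_tb / replicas

# Example: three nodes with 8 TB of raw NVMe capacity each, 3-way replication.
print(usable_capacity_tb(3, 8.0))  # 8.0 TB usable out of 24 TB raw
```

Because the replicas are placed on distinct nodes, one node can fail while two copies of every object survive, which is why the three-node minimum keeps the data both available and redundant.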
