Proxmox VE Ceph Benchmark 2018/02

  • Last update: 29 July 2019
  • File size: 272.21 KB
  • Version: 201802

Benchmark Proxmox VE Ceph Cluster Performance

In hyper-converged deployments with Proxmox VE and Ceph storage, the right hardware setup contributes a lot to overall performance. This benchmark presents several possible setups and their performance outcomes, with the intention of helping Proxmox users make better-informed decisions.

Hyper-converged setups with Proxmox VE can already be deployed on a minimal cluster of three nodes with enterprise-class SATA SSDs and a 10 gigabit network. As long as there is enough CPU power and RAM, a three-node cluster can deliver decent performance.
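One way to check whether a given setup actually delivers such performance is Ceph's built-in rados bench tool. The following is a minimal sketch of driving a write test from Python; the pool name ('bench') and the runtime, block size, and thread count are illustrative assumptions, not values taken from the benchmark document.

    #!/usr/bin/env python3
    # Minimal sketch: run a 'rados bench' write test and print its summary.
    # Assumes a test pool named 'bench' exists and that the script runs on a
    # cluster node with admin keyring access (both are assumptions).
    import subprocess

    cmd = [
        "rados", "bench",
        "-p", "bench",        # hypothetical test pool
        "60", "write",        # 60-second write test
        "-b", "4M",           # 4 MiB object size per operation
        "-t", "16",           # 16 concurrent writer threads
        "--no-cleanup",       # keep objects so a read test can follow
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)      # bandwidth, IOPS and latency summary

A sequential read test of the same objects can then be run with 'rados bench -p bench 60 seq -t 16'.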

Since Ceph uses a replication factor of three by default, the data remains available even after losing one node, providing a highly available, distributed storage solution that is fully software-defined and 100 % open source.
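To make the space overhead of this replication concrete, here is a small back-of-the-envelope sketch; the per-node capacity is a made-up example value, not a figure from the benchmark.

    # Space/availability arithmetic behind a replication factor of three.
    raw_per_node_tb = 4.0      # assumed OSD capacity per node, in TB
    nodes = 3
    replication = 3            # Ceph default pool size

    raw_total = raw_per_node_tb * nodes    # 12 TB raw
    usable = raw_total / replication       # ~4 TB usable: every object is stored three times
    print(f"raw: {raw_total:.1f} TB, usable: {usable:.1f} TB")
    # With the Ceph defaults size=3/min_size=2, the pool stays available
    # and writable after one of the three nodes fails.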

  • Although it is possible to run virtual machines/containers and Ceph on the same nodes, separating them makes sense for bigger workloads.
  • To match your needs for growing workloads, the Proxmox VE and Ceph server clusters can be extended on the fly with additional nodes without any downtime.
  • The Proxmox VE virtualization platform has integrated Ceph storage since early 2014, with the release of Proxmox VE 3.2. Since then it has been used on thousands of servers worldwide, providing an enormous amount of feedback and experience.

Read the complete Proxmox VE Ceph benchmark document...
