Proxmox VE Ceph Benchmark 2018/02

  • Last update: 16 July 2018
  • File size: 272.21 KB
  • Version: 201802

Benchmark Proxmox VE Ceph Cluster Performance

To optimize performance in hyper-converged deployments with Proxmox VE and Ceph storage, choosing the appropriate hardware setup helps a lot. This benchmark presents several possible setups and their performance results, with the intention of helping Proxmox users make better decisions.

Hyper-converged setups with Proxmox VE can be deployed on a minimal cluster of three nodes with enterprise-class SATA SSDs and a 10 gigabit network. As long as there is enough CPU power and enough RAM, decent performance from a three-node cluster is possible.

Since Ceph uses a replication factor of three by default, the data remains available even after losing one node, thus providing a highly available and distributed storage solution—fully software-defined and 100 % open-source.
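The capacity and fault-tolerance trade-off of a replication factor of three can be sketched as follows; this is a minimal illustration, not Ceph code, and the node count and per-node capacity are hypothetical examples:

```python
def usable_capacity_tb(raw_tb, replication=3):
    """Raw cluster capacity divided by the replication factor."""
    return raw_tb / replication

def surviving_copies(replication, failed_nodes):
    """With one replica per node, losing k nodes leaves replication - k copies."""
    return replication - failed_nodes

# Hypothetical three-node cluster with 4 TB of raw OSD capacity per node.
raw = 3 * 4
print(usable_capacity_tb(raw))    # 4.0 TB usable out of 12 TB raw
print(surviving_copies(3, 1))     # 2 copies remain, so data stays available
```

This is why a three-node cluster is the practical minimum for the default replication of three: each node holds one copy, and one node can fail without data loss.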

  • Although it is possible to run virtual machines/containers and Ceph on the same node, separating them makes sense for bigger workloads.
  • To match your needs for growing workloads, the Proxmox VE and Ceph server clusters can be extended on the fly with additional nodes without any downtime.
  • The Proxmox VE virtualization platform has integrated Ceph storage since early 2014, with the release of Proxmox VE 3.2. Since then it has been used on thousands of servers worldwide, providing an enormous amount of feedback and experience.

Read the complete Proxmox VE Ceph benchmark document...
