This guide explores best practices for configuring virtual machines (VMs) in Proxmox to achieve optimal performance, drawing insights from expert recommendations. These settings are particularly useful for creating reusable VM templates and ensuring efficient operation, especially in demanding environments like those utilizing Ceph storage.
Here are key recommendations for Proxmox VM setup:
- SAS HDD Write Cache Enable (WCE): For enterprise-grade SAS hard drives or SSDs, enabling the write cache with `sdparm -s WCE=1 -S /dev/sd[x]` can significantly boost I/O performance. The drive acknowledges writes from its onboard cache before committing them to permanent storage, which speeds up operations. Caution: this trades safety for speed; data can be lost during a power outage unless the RAID controller has a battery-backed cache (BBU) or the host is protected by an uninterruptible power supply (UPS). A short example of checking and enabling the setting follows this item.
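As a minimal sketch, the commands below first inspect and then enable the write cache on a drive; `/dev/sdX` is a placeholder for your actual device, and the sdparm package may need to be installed on the host first.

```bash
# Inspect the current Write Cache Enable (WCE) bit on the drive
sdparm --get=WCE /dev/sdX

# Set WCE=1 and save it so the setting survives a power cycle
# Only do this with a BBU-protected controller or a UPS in place
sdparm --set=WCE=1 --save /dev/sdX
```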
- VM Disk Cache Settings:
- ‘None’ for Clustered Environments: If your Proxmox setup is clustered (e.g., with Ceph), setting the VM disk cache to ‘None’ is generally recommended for data consistency and to leverage the underlying storage’s caching mechanisms.
- ‘Writeback’ for Standalone VMs: For standalone VMs, ‘Writeback’ cache can improve performance by allowing the VM to acknowledge writes before they are physically committed to disk. However, this also carries a risk of data loss during unexpected shutdowns.
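For illustration, the following `qm` commands apply these two cache modes to an existing virtual disk; the VM IDs (100, 101) and the storage/volume names are placeholders for your own setup, and the same options are available in the web UI under the disk's advanced settings.

```bash
# Ceph-backed VM in a cluster: no host-side caching on the virtual disk
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none

# Standalone host: writeback cache for higher throughput,
# accepting the risk of losing in-flight writes on a sudden power loss
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback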
- VM Disk Controller: VirtIO SCSI Single with IO Thread & Discard:
- VirtIO-SCSI Single Controller: Utilize the VirtIO-SCSI driver, which is highly optimized for virtualization, offering superior performance compared to emulated IDE/SATA controllers. Assigning all VM disks to a single VirtIO-SCSI controller reduces QEMU overhead, especially beneficial for VMs with numerous disks (e.g., database servers).
- Enable IO Thread: Activating IO Thread allows each VM disk to have its own dedicated I/O thread. This enables parallel I/O processing, preventing bottlenecks and significantly improving performance, particularly on hosts with multiple CPU cores and for I/O-intensive workloads like databases or Ceph RBD.
- Enable Discard Option (TRIM/UNMAP): Enabling the discard option allows the VM to inform the underlying storage (like Ceph, ZFS, or SSDs) when blocks are no longer in use after files are deleted. This facilitates “space reclamation,” returning unused capacity to the storage backend and maintaining SSD performance by aiding in garbage collection.
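A combined sketch of these three settings via the `qm` CLI follows; the VM ID and the storage/volume name are placeholders for your environment.

```bash
# Switch the VM to the VirtIO SCSI single controller type
qm set 100 --scsihw virtio-scsi-single

# Reattach the disk with a dedicated I/O thread and discard (TRIM/UNMAP) enabled
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,iothread=1,discard=on,cache=none
```

Note that inside a Linux guest, a periodic `fstrim -a` (or the fstrim.timer systemd unit) is what actually issues the TRIM requests that the discard option passes through to the storage backend.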
- VM CPU Type to ‘Host’: Setting the VM CPU type to ‘Host’ instructs the VM to use the native CPU instruction set of the physical server hardware. This often results in higher performance, especially for CPU-sensitive applications. However, it can reduce VM portability; migrating to a Proxmox host with a different CPU generation might cause issues. This practice is best suited for homogeneous clusters where all hosts have identical CPU architectures.
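Setting the CPU type from the command line is a one-liner; VM ID 100 is a placeholder, and kvm64 is shown only as a more portable (but slower) alternative for mixed clusters.

```bash
# Expose the host's native CPU flags to the guest (homogeneous clusters only)
qm set 100 --cpu host

# More portable baseline for mixed clusters, at the cost of fewer CPU features
# qm set 100 --cpu kvm64
```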
- VM CPU NUMA (Non-Uniform Memory Access) for Multi-Socket Servers: If your server features two or more physical CPU sockets (e.g., dual Intel Xeon or AMD EPYC), enabling VM CPU NUMA awareness is crucial. NUMA optimizes memory access by ensuring that vCPUs and RAM are allocated within the same NUMA node, reducing latency and improving performance.
- Recommendation: If your server has a single CPU socket, NUMA does not need to be activated. For servers with two or more sockets, enable NUMA in VM settings.
- Important: Ensure that the number of vCPUs assigned to a VM does not exceed the number of physical cores available per NUMA node.
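As an example, assuming a dual-socket host with 8 cores per socket, the sketch below checks the host topology and then enables NUMA for a 16-vCPU VM; the VM ID is a placeholder, and numactl may need to be installed on the host.

```bash
# Show the host's NUMA nodes and which cores and memory belong to each
numactl --hardware

# 2 virtual sockets x 8 cores = 16 vCPUs, with NUMA awareness enabled,
# mirroring the physical layout so guest memory stays local to each node
qm set 100 --sockets 2 --cores 8 --numa 1
```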
These best practices provide a solid foundation for optimizing Proxmox VM performance, ensuring stability and efficiency for various workloads.