Proxmox® offers four storage options: ZFS for local high-performance storage with integrated snapshots and compression, Ceph for distributed high-availability systems, NFS for file-based network storage, and iSCSI for block-based enterprise solutions. The choice depends on your performance requirements, budget, and infrastructure size. Each option is suitable for specific virtualization scenarios.
Estimated reading time: 14 minutes
Proxmox VE supports four core storage technologies that meet different requirements in virtualization environments. ZFS functions as a local file system with advanced features such as snapshots and deduplication. Ceph offers distributed storage for high-availability clusters. NFS enables simple integration of network-attached storage (NAS), while iSCSI provides block-based enterprise storage (SAN).
ZFS is suitable for individual Proxmox hosts or small clusters with high performance requirements. The integrated functions significantly reduce administrative overhead. Ceph storage is used in larger clusters where resilience and scalability are critical.
NFS is ideal when existing network storage solutions need to be integrated. iSCSI storage is suitable for companies with existing SAN infrastructures and specific performance requirements.
ZFS combines a file system and volume manager into a single solution, offering copy-on-write snapshots, transparent compression, integrated deduplication, and RAID-Z for data integrity. These features work without additional software and significantly reduce storage requirements and backup times.
The snapshot function captures the state of a virtual machine in seconds without noticeable performance loss. Compression automatically reduces storage consumption, while deduplication stores identical data blocks only once. RAID-Z protects against disk failures without the write-hole problem of traditional RAID systems.
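A minimal sketch of these features from the shell, using a hypothetical pool named `tank` with placeholder disk and dataset names:

```bash
# Create a RAID-Z2 pool across four disks (placeholder device names)
zpool create -o ashift=12 tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Enable transparent LZ4 compression for all datasets in the pool
zfs set compression=lz4 tank

# Take an instant copy-on-write snapshot of a VM disk dataset
zfs snapshot tank/vm-100-disk-0@before-upgrade

# Roll back to that snapshot if the change goes wrong
zfs rollback tank/vm-100-disk-0@before-upgrade
```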
ZFS requires at least 8 GB of RAM and works optimally with ECC memory. Choose ZFS for local high-performance environments, development systems with frequent snapshots, or when you need advanced storage features without complex configuration.
Ceph creates a distributed storage pool across multiple Proxmox nodes and automatically replicates data for high availability. The system tolerates failures of individual nodes without data loss and scales horizontally by adding further storage nodes.
Data distribution based on the CRUSH algorithm ensures even load balancing and automatic recovery of failed components. Ceph storage supports various pool types for different performance and redundancy requirements.
Ceph is suitable for clusters with at least three nodes and separate networks for cluster and public traffic. The solution requires dedicated storage disks and sufficient network bandwidth. Choose Ceph for mission-critical environments with high availability requirements and planned scaling.
Proxmox VE offers Ceph directly from the interface as a “hyperconverged setup.” This repurposes local storage as network storage and places it under the control of Ceph. While Proxmox and Ceph can both reach their full performance potential in a separate Ceph cluster, a resource conflict is always built into a hyperconverged setting. A high-performance Ceph setup requires corresponding RAM, CPU, and network performance. These resources are then no longer available for virtualization.
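For illustration, the same hyperconverged setup can also be bootstrapped from the shell with the `pveceph` tool; the subnet and device names below are placeholders:

```bash
# Install the Ceph packages on each node
pveceph install

# Initialize Ceph with a dedicated cluster network (placeholder subnet)
pveceph init --network 10.10.10.0/24

# Create monitor and manager daemons (repeat on at least three nodes)
pveceph mon create
pveceph mgr create

# Turn a dedicated disk into an OSD (placeholder device)
pveceph osd create /dev/sdb

# Create a replicated pool for VM images
pveceph pool create vm-pool
```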
Of course, it is fundamentally possible to set up and operate software-defined storage based on Ceph using additional hardware. This can be kept independent of the Proxmox VE installation and, if necessary, even serve multiple Proxmox clusters with the same Ceph instance. Ceph is very powerful—provided there are sufficient hardware resources—but also demanding in terms of management. In this case, only RBD (RADOS Block Device) storage would be provided to the Proxmox cluster. The advantage of such a setup would be independence from hardware resources that the two systems would otherwise compete for.
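As a sketch of such a setup, an external Ceph cluster can be attached to Proxmox VE as RBD storage via `pvesm`; the monitor addresses, pool name, and user below are placeholder values:

```bash
# Register an external Ceph cluster as RBD storage
pvesm add rbd ceph-external \
    --monhost "10.0.0.1;10.0.0.2;10.0.0.3" \
    --pool vm-pool \
    --username admin \
    --content images

# The matching keyring must be placed at
# /etc/pve/priv/ceph/ceph-external.keyring
```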
There are other alternatives in the open-source world, such as GlusterFS or DRBD, but they are not covered here. DRBD was the hyperconverged system of choice in the first versions of Proxmox VE, and it can still be integrated into a Proxmox system today with some manual effort.
NFS is file-based and allows multiple hosts to access shared storage areas simultaneously. iSCSI is block-based and presents remote storage as local disks. These different approaches significantly influence performance, flexibility, and application scenarios.
NFS offers simple configuration and native support for live migration of virtual machines between hosts. Performance depends heavily on network latency and on the quality of the server's NFS implementation. Snapshots and backups take place at the storage level.
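A minimal example of registering an NFS export as shared storage; the storage ID, server address, and export path are placeholders:

```bash
# Register an NFS export for VM images and backups
pvesm add nfs nas-store \
    --server 192.168.1.50 \
    --export /srv/nfs/proxmox \
    --content images,backup

# Verify that the storage is online
pvesm status
```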
iSCSI storage delivers performance through direct block access and is suitable for I/O-intensive applications. Configuration requires more effort but offers precise control over storage allocation. iSCSI benefits from a dedicated network for optimal performance. Note, however, that Proxmox VE currently does not support thin provisioning on shared LVM over iSCSI.
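A sketch of a typical iSCSI integration with a thick-provisioned LVM layer on top; the portal address, IQN, and device name are placeholders:

```bash
# Discover available targets on the portal
iscsiadm -m discovery -t sendtargets -p 192.168.1.60

# Register the target in Proxmox VE
pvesm add iscsi san-store \
    --portal 192.168.1.60 \
    --target iqn.2024-01.com.example:storage.lun1

# Create a volume group on the exported LUN (placeholder device)
pvcreate /dev/sdf
vgcreate vg_san /dev/sdf

# Register the volume group as shared, thick-provisioned LVM storage
pvesm add lvm san-lvm --vgname vg_san --shared 1
```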
The choice depends on cluster size, performance requirements, budget, and existing infrastructure. Individual hosts or small environments benefit from ZFS, while large clusters should use Ceph for high availability. Existing network storage can be integrated via NFS or iSCSI.
Assess your I/O requirements realistically. Database servers require different storage characteristics than file servers or development environments. Also consider backup strategies and maintenance effort when making your decision.
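As an illustrative sketch, a tool such as `fio` can approximate a database-like 4K random-write workload before you commit to a backend; the test path and all parameters are placeholders:

```bash
# Simulate a database-like workload: 4K random writes with direct I/O
fio --name=randwrite-test \
    --filename=/mnt/teststorage/fio.dat \
    --rw=randwrite --bs=4k --size=4G \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=60 --time_based --group_reporting
```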
In principle, a Proxmox cluster can also be operated using only local, LVM-based storage. Virtual machines can still be migrated with local storage, although the migration takes significantly longer than with shared network storage. However, without shared storage there is no redundancy in the event of a node failure.
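A minimal example of such a migration, assuming a VM with ID 100 and a target node named `pve2`:

```bash
# Live-migrate a VM together with its local disks
# (noticeably slower than migration over shared storage)
qm migrate 100 pve2 --online --with-local-disks
```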
Are you unsure which storage option to use? Contact us; we are happy to help.
| Criterion | ZFS (local filesystem) | Ceph (distributed object storage) | NFS (network filesystem) | iSCSI (block-based SAN) |
|---|---|---|---|---|
| **Suitability & operation** | | | | |
| Typical scenario | Single host or small cluster | Large HA cluster, 3+ nodes | Existing NAS infrastructure | Existing SAN infrastructure |
| Minimum requirement | 8 GB RAM, ECC recommended | Min. 3 nodes, dedicated network | NFS-capable server | iSCSI target (HW or SW) |
| Administrative overhead | Low | High | Low | Medium |
| **Technical characteristics** | | | | |
| Latency | Very low | Medium | Medium–high | Low |
| Throughput | High | Very high | Medium | High |
| Scalability | Limited (1 host) | Linear, very high | Medium | Medium |
| High availability (HA) | Only with cluster | Natively integrated | Depends on NFS server | Depends on target |
| Live migration | Possible with local-disk transfer (slower) | Full | Possible | Possible |
| **Features** | | | | |
| Snapshots | Native, fast | Native | Not native | Vendor-dependent |
| Data compression | Transparent | Possible | No | No |
| Deduplication | Yes (RAM-intensive) | Optional | No | No |
| Backup integration | Very good (PBS) | Good | Good | Limited |
| **Costs** | | | | |
| Initial costs | Low | High (min. 3 nodes) | Low (if NAS available) | Medium–high |
| Open source | Yes | Yes | Yes | Open protocol |
credativ® offers comprehensive consulting and technical support for Proxmox® storage projects, from planning to production operation. Our open-source specialists analyze your requirements and develop customized storage solutions for optimal performance and reliability.
Contact us for a free initial consultation on your Proxmox storage project. Our experts will work with you to develop the optimal storage strategy for your virtualization environment.
credativ is an authorized reseller of Proxmox® (Proxmox Server Solutions GmbH). The mention of brands serves exclusively for the factual description of migration scenarios and services provided by credativ. There is no business connection to the mentioned brand owners.
Categories: Proxmox
Tags: Ceph, iSCSI, NFS, Proxmox VE, ZFS
About the author
Head of Sales & Marketing
Peter Dreuw has been working for credativ GmbH since 2016 and has been a team lead since 2017. Since 2021, he has been part of Instaclustr’s management team as VP Services. Following the acquisition by NetApp, his new role became “Senior Manager Open Source Professional Services”. As part of the spin-off, he became a member of the executive management as an authorized signatory. His responsibilities include leading sales and marketing. He has been a Linux user from the very beginning and has been running Linux systems since kernel 0.97. Despite extensive experience in operations, he is a passionate software developer and is also well versed in low-level, hardware-oriented systems.