Hey everyone! I'm currently exploring migrating various VMware setups to Proxmox because of the high licensing fees, which seems to be a common struggle. Each client's infrastructure differs, but typically they have 2-4 hypervisor hosts (running vSphere ESXi), with one using local storage and the others connecting to an iSCSI SAN. We also run vCenter, a Dell SCv3020 SAN, and a couple of backup servers using Veeam.

In testing, Proxmox looks great except for the shared storage side. We got iSCSI storage working with LVM-Thin on one node, but LVM-Thin can't be shared across nodes, and falling back to plain LVM means giving up critical features like snapshots and thin provisioning. Since these features are vital for us, I wanted to hear from others who have faced similar situations: how did you address this, and what strategies made the migration smoother?
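For reference, this is roughly how we attached the SAN in testing. A minimal sketch; the storage IDs, portal IP, IQN, and device path are all placeholders for our real values:

```
# 1) Point the cluster at the SAN target
pvesm add iscsi san-scv3020 --portal 10.0.0.50 \
    --target iqn.2002-03.com.compellent:example-target

# 2) On one node, build a volume group on the presented LUN
#    (check lsblk / /dev/disk/by-id for the real device)
pvcreate /dev/sdX
vgcreate vg_san /dev/sdX

# 3) Register it cluster-wide as thick LVM; --shared 1 marks the VG
#    as visible from every node, but snapshots and thin provisioning
#    are lost at this layer
pvesm add lvm san-lvm --vgname vg_san --shared 1 --content images
```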
5 Answers
I ran into this as well during my own migration attempts. The workaround I found was exposing an NFS share from my Unity SAN, though I realize the Dell SCv3020 doesn't support NFS natively. That definitely complicates moving off a block-only array.
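For anyone whose array (or a box in front of it) does speak NFS, wiring it into Proxmox is a one-liner, and qcow2 on NFS brings snapshots and thin provisioning back. A sketch with placeholder server and export values:

```
# NFS storage is shared cluster-wide by nature; qcow2 disks on it
# support snapshots and thin provisioning
pvesm add nfs unity-nfs --server 10.0.0.60 --export /proxmox_vms \
    --content images,backup --options vers=4.1
```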
Some server BIOSes (or iSCSI HBAs) can log into a target directly and present the LUN to the OS as just another local disk. Just brainstorming here, but it might be worth checking whether that's an option on your servers.
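The OS-level equivalent is logging in with open-iscsi, which makes the LUN show up as a plain block device. A rough sketch with placeholder portal and IQN; note that several nodes writing to the same raw LUN still needs a cluster-aware layer on top:

```
# Discover targets exposed by the SAN portal
iscsiadm -m discovery -t sendtargets -p 10.0.0.50

# Log in; the LUN then appears as /dev/sdX like any local disk
iscsiadm -m node -T iqn.2002-03.com.compellent:example-target \
    -p 10.0.0.50 --login

# Persist the session across reboots
iscsiadm -m node -T iqn.2002-03.com.compellent:example-target \
    -p 10.0.0.50 --op update -n node.startup -v automatic
```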
I’ll look into it, but I don't think our BIOS has that feature. And even if it does, Proxmox would still need something cluster-aware on top; multiple nodes writing to the same raw volume without coordination risks corruption.
It's unfortunate, but Proxmox lacks a robust clustered filesystem that competes with VMware's, especially for shared block storage with snapshots. Most people don't fully appreciate how much heavy lifting VMFS does. Alternatives like Hyper-V, with Cluster Shared Volumes, do support clustered shared storage, which keeps them competitive.
You're right on the money; Proxmox has real limitations with shared block storage, especially compared to VMware's smooth-running VMFS. Beyond the LVM route you mentioned, options like NFS are less than ideal for VM workloads. In my tests, even Ceph on a sizeable cluster didn't hit the performance levels I expected.
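For anyone who wants to try it anyway, standing up a test Ceph pool on a Proxmox cluster takes only a handful of commands. A minimal sketch, assuming three or more nodes with a spare disk each; the network and device names are placeholders:

```
# Once, on the first node: install packages and set the Ceph network
pveceph install
pveceph init --network 10.10.10.0/24

# On each node: a monitor plus an OSD on a spare disk
pveceph mon create
pveceph osd create /dev/sdX

# Create an RBD pool and register it as VM storage in one step
pveceph pool create vmpool --add_storages
```

Worth noting that a small cluster with only a few OSDs will not match dedicated SAN latency, which lines up with what I saw.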
So, what did you decide in the end? Stay with VMware despite its costs, or did you find a feasible alternative?
You're right; Proxmox doesn't have a direct replacement for VMware's shared SAN datastores over iSCSI. The shared options it does have, like thick LVM over iSCSI, sacrifice snapshots and thin provisioning. ZFS over iSCSI would bring snapshots back, but I don't know of many storage arrays that support that setup. It's a tricky situation, especially given your requirements for high availability and performance.
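For completeness, this is roughly what a ZFS-over-iSCSI definition looks like in /etc/pve/storage.cfg. Proxmox SSHes into the storage host to carve out zvols, so the target has to be a box running ZFS plus a supported iSCSI target daemon; all names and addresses below are placeholders, and the provider value depends on the target software:

```
zfs: zfs-san
        pool tank/proxmox
        portal 10.0.0.70
        target iqn.2003-01.org.linux-iscsi.storage:proxmox
        iscsiprovider LIO
        lio_tpg tpg1
        sparse 1
        content images
```

That gets snapshots and thin provisioning back, but the SCv3020 can't act as the ZFS host, so it would mean putting another server in front of the SAN.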
Exactly! I’m curious about what alternatives people with infrastructure like ours have chosen—maybe something like Ceph, or have they opted for a different hypervisor entirely?
That’s our challenge too! NFS adds a single point of failure for us since we'd need an extra appliance like TrueNAS. It just doesn't feel like a solid solution.