Advice Needed: Using Ceph RBD with Single Node Proxmox for K8s Lab Setup

Asked By TechWizard97

Hey folks! I'm in the process of setting up a K8s lab and could really use some sanity check advice on my storage configuration. Here's what I've got so far:

I'm using a single-node Proxmox VE host with four 500GB Samsung 870 EVO SSDs for storage, separate from the disk running the Proxmox system itself. My goal is to run Ceph on this host and expose RBD volumes to several VMs that will act as K8s nodes. I want a shared storage backend so my Pods can migrate between K8s nodes while keeping their persistent data.

I have several reasons for this choice:
- **Flexibility**: It's crucial that the Pods remain mobile across the K8s nodes.
- **Experience**: I'm quite familiar with ceph-csi.
- **Performance**: Even though I'm using SATA SSDs, I'm hoping latency will be manageable for a lab environment, despite Ceph's overhead.

I plan to set up a single-node Ceph cluster and modify the CRUSH map to use the OSD (i.e., the individual device) as the failure domain rather than the host, so that replication can work across disks on one machine. I also want dedicated pools for the K8s persistent volumes.
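In case it helps to be concrete, here's roughly what I have in mind (a sketch only: the pool name `kube`, rule name, and PG counts are my own placeholders):

```shell
# Let CRUSH place replicas across OSDs instead of hosts -- required for
# replication to work on a single node. Set this before creating OSDs,
# or create a replacement rule afterwards as shown below.
ceph config set global osd_crush_chooseleaf_type 0

# Alternatively, create a replicated rule with "osd" as the failure
# domain and point the pool at it:
ceph osd crush rule create-replicated replicated-osd default osd

# Dedicated pool for K8s persistent volumes (PG count is a guess for 4 OSDs)
ceph osd pool create kube 64 64 replicated replicated-osd
ceph osd pool application enable kube rbd
rbd pool init kube
```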

Now, I have a few questions:
1. Has anyone here used consumer Samsung SSDs for a small Ceph setup? I'm aware they lack power-loss protection and can wear out faster under Ceph's write amplification (BlueStore metadata and journaling)—should I be worried about this in a light homelab setting?
2. Is there a significant "Maintainability Trap" when running Ceph on the same host as the K8s VMs, compared to simpler solutions like an NFS provisioner or Longhorn?
3. Would it be better to split the four SSDs into two pools (two for Proxmox VMs and two for K8s, which I understand would need custom CRUSH rules to pin pools to specific OSDs), or should I just create one large pool and handle space management through RBD image sizes?
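To make the comparison concrete, here's roughly what I mean by each option (pool names, PG counts, and the quota value are placeholders; by default all OSDs back both pools unless CRUSH rules say otherwise):

```shell
# Option A: two pools on the shared OSDs, with a quota capping K8s usage
ceph osd pool create vms 64
ceph osd pool create kube 64
ceph osd pool set-quota kube max_bytes 800G   # placeholder cap

# Option B: one large pool -- RBD images are thin-provisioned, so
# per-image sizes become the space-management knob
ceph osd pool create rbd 128
rbd create rbd/vm-disk-0 --size 100G
```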

I'm eager to hear your experiences!

2 Answers

Answered By ProxMoxie On

Running Ceph on a single node isn't a great idea: you pay all of Ceph's overhead, and it tends to be slow. You're better off with three nodes if you can manage it—even for educational purposes, I wouldn't go below that.

Answered By CleverThinker42 On

Have you considered whether you really need Ceph? On a single physical host it mostly adds complexity and overhead without the real benefits. Something like the Proxmox CSI plugin could be simpler and more efficient for your needs. It's worth checking out!

FlatSoda1 -

I completely agree! Leaning towards Ceph might not be the best choice here. A native ZFS pool sounds cleaner for what you're trying to do. If you ever need RWX support, you could easily set up a small NFS later. Good luck!
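For illustration, a setup along those lines might look like this (hypothetical device names and dataset names—adjust for your disks):

```shell
# A striped-mirror ZFS pool from the four SSDs
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd

# Later, if RWX is ever needed, export a dataset over NFS
zfs create tank/k8s-rwx
zfs set sharenfs=on tank/k8s-rwx
```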

TechGeekGamer -

But wouldn't the Ceph CSI still allow for node draining? Something like `openebs-localpv` pins a pod to the node holding its volume, so it can't migrate during a drain, right?
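For reference, this is the kind of thing I mean—with ceph-csi the RBD volume detaches and reattaches on whichever node the rescheduled pod lands on. A minimal StorageClass sketch (cluster ID, pool name, and secret names are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>      # placeholder
  pool: kube                   # placeholder pool name
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
```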
