I'm managing a 40TB file server and we want to set up a failover solution at another location. I'm considering using DFS-R (Distributed File System Replication) for this setup. Is this a good approach? The idea is that everyone will use Server A, but if it goes down, they can seamlessly switch to Server B.
5 Answers
Are you tied to an on-prem solution? Something like Egnyte with its Turbo or Sync appliances could work well and avoids some of the traditional DFS-R pitfalls. Alternatively, Azure Files with Azure File Sync could be a cheaper and more efficient option here.
Honestly, I'd recommend staying away from DFS-R entirely; at this scale its complexity tends to cause more problems than it solves.
Yes, if you can keep the data mirrored at both locations, DFS-R can work for you. Just keep in mind that DFS-R only replicates the data; the actual client failover is handled by a DFS Namespace and its referral ordering, so set that up appropriately too. There are other methods available, but if this is just a standard document file server, it should be fairly straightforward.
DFS-R is a solid option, especially when paired with a DFS Namespace. Just be careful and do your homework on the do's and don'ts of DFS-R to avoid accidental data overwrites (its conflict resolution is last-writer-wins), especially given the size of your dataset. Also, make sure you have ample space for the staging area and reliable backups in place!
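For reference, a minimal sketch of the replication side using the DFSR PowerShell module. The group name, server names, paths, and the 100 GB staging quota are placeholder assumptions, not recommendations; size the staging quota for your own data (Microsoft's guidance ties it to the largest files being replicated):

```powershell
# Assumed names: "FS-Failover" group, ServerA/ServerB, D:\Data - adjust to your environment.
New-DfsReplicationGroup -GroupName "FS-Failover"
New-DfsReplicatedFolder -GroupName "FS-Failover" -FolderName "Data"
Add-DfsrMember -GroupName "FS-Failover" -ComputerName "ServerA","ServerB"
Add-DfsrConnection -GroupName "FS-Failover" -SourceComputerName "ServerA" -DestinationComputerName "ServerB"

# ServerA holds the authoritative copy for the initial sync (-PrimaryMember).
# For a 40TB dataset, raise the staging quota well above the 4GB default.
Set-DfsrMembership -GroupName "FS-Failover" -FolderName "Data" -ComputerName "ServerA" `
    -ContentPath "D:\Data" -PrimaryMember $true -StagingPathQuotaInMB 102400 -Force
Set-DfsrMembership -GroupName "FS-Failover" -FolderName "Data" -ComputerName "ServerB" `
    -ContentPath "D:\Data" -StagingPathQuotaInMB 102400 -Force
```

With 40TB, expect the initial replication to take a long time; many admins pre-seed ServerB with a robocopy or restored backup first so DFS-R only has to verify hashes rather than ship the full dataset.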
We run two production sites and use DFS replication effectively. When the primary server goes down, clients typically take about 20 seconds to switch over to the secondary. A dedicated 10G fiber link between the two locations keeps replication latency low, which is crucial for this to work well.
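The switchover itself comes from a DFS Namespace: clients resolve a namespace path to an ordered list of folder targets and fall back to the next target when the first stops responding. A hedged sketch with the DFSN cmdlets (the domain, namespace, and share names are placeholders):

```powershell
# Create a domain-based namespace with both servers as root targets.
# \\contoso.com\Files and the share names are assumptions - use your own.
New-DfsnRoot -Path "\\contoso.com\Files" -TargetPath "\\ServerA\Files" -Type DomainV2
New-DfsnRootTarget -Path "\\contoso.com\Files" -TargetPath "\\ServerB\Files"

# Optional: prefer ServerA in referrals so clients only use ServerB when A is unreachable.
Set-DfsnRootTarget -Path "\\contoso.com\Files" -TargetPath "\\ServerA\Files" `
    -ReferralPriorityClass GlobalHigh
```

Users map `\\contoso.com\Files` rather than either server name, so the failover is transparent to them; the roughly 20-second delay is the client timing out on the dead target before trying the next referral.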
Interesting, is there a specific failover mechanism that tells it when to switch to Server B?
Why do you say that?