I'm looking to transfer a massive amount of data from one Azure BlockBlobStorage account to another StorageV2 account. Specifically, I have a container with over 30 TB of data spread across millions of individual files. I've been using an AzCopy script, but it's moving very slowly, and I expect it could take an entire week to finish. Is there a faster, more efficient way to do this transfer? I've also looked at Azure Storage Mover, but it seems geared towards migrating data into Azure from other locations rather than copying between Azure storage accounts.
3 Answers
You might want to consider setting up object replication between your storage accounts. The service then copies blobs from the source container to the destination container asynchronously for you, which saves a lot of hassle. Check out the official documentation on configuring object replication for more details!
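If it helps, here is a rough Azure CLI sketch of the setup. The resource group, account and container names are placeholders, and you should double-check the prerequisites and whether your rule includes blobs that already exist, since that depends on how the rule is configured:

    # Object replication needs versioning + change feed on the source
    # and versioning on the destination (all names below are placeholders).
    az storage account blob-service-properties update \
        --resource-group my-rg --account-name srcaccount \
        --enable-versioning true --enable-change-feed true
    az storage account blob-service-properties update \
        --resource-group my-rg --account-name dstaccount \
        --enable-versioning true

    # Create the replication policy on the destination account...
    az storage account or-policy create \
        --resource-group my-rg --account-name dstaccount \
        --source-account srcaccount --destination-account dstaccount \
        --source-container srccontainer --destination-container dstcontainer

    # ...then register the same policy (by its generated ID) on the source account.
    az storage account or-policy show \
        --resource-group my-rg --account-name dstaccount --policy-id <policy-id> > policy.json
    az storage account or-policy create \
        --resource-group my-rg --account-name srcaccount --policy @policy.json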
Just a heads up, Azure Storage Mover essentially uses AzCopy behind the scenes for these transfers. AzCopy performs server-to-server copies directly between the two accounts, so the data never flows through your client machine, and it should be quite fast. Just make sure your source and destination blob types are compatible if you're changing storage account kinds!
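For reference, the plain server-to-server form looks roughly like this; the account names, container names and SAS tokens are placeholders you'd swap for your own:

    # Server-to-server copy: blobs move directly between the two accounts.
    azcopy copy \
        "https://srcaccount.blob.core.windows.net/srccontainer?<src-sas>" \
        "https://dstaccount.blob.core.windows.net/dstcontainer?<dst-sas>" \
        --recursive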
Yeah, it's going to take a week with AzCopy; there are literally millions of files. If Storage Mover just uses AzCopy under the hood, it won't speed up my current process. My source and destination are in the same region and resource group.
Also, keep in mind that Azure has a 'Copy Blob From URL' operation that copies a blob synchronously, but only for relatively small source blobs (256 MiB or less), and you'll need to make sure the copy is authorized against the source account (account key or SAS).
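For a single blob, something along these lines should hit that path; if I remember right, the --requires-sync flag of az storage blob copy start maps to the synchronous copy, but treat this as a sketch and verify against the CLI docs (names and the SAS token are placeholders):

    # Synchronous copy of one small blob; placeholder names/SAS throughout.
    az storage blob copy start \
        --account-name dstaccount \
        --destination-container dstcontainer \
        --destination-blob bigfile.bin \
        --source-uri "https://srcaccount.blob.core.windows.net/srccontainer/bigfile.bin?<src-sas>" \
        --requires-sync true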
I've used AzCopy to sync several terabytes between regions and ran into high RAM usage because of the metadata it tracks for millions of files. To cope with that, we ended up running the job on a VM with 128 GB of RAM.
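If you can't throw that much RAM at it, AzCopy's buffer size and concurrency can be capped with environment variables before starting the job; something like the sketch below, where the exact values are guesses you'd want to tune for your VM rather than recommendations:

    # Limit AzCopy's RAM buffer and concurrent requests, then run the sync.
    # Values are illustrative; SAS tokens and account names are placeholders.
    export AZCOPY_BUFFER_GB=8
    export AZCOPY_CONCURRENCY_VALUE=256
    azcopy sync \
        "https://srcaccount.blob.core.windows.net/srccontainer?<src-sas>" \
        "https://dstaccount.blob.core.windows.net/dstcontainer?<dst-sas>" \
        --recursive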

This looks very promising. I will take a look and run a few tests. Thank you!