
Help moving 6.5 TB from 1 GDrive to... the same GDrive


chaostheory

Question

Hello guys, I've found myself in a pretty tough situation. When I first heard about StableBit CloudDrive and unlimited GDrive, I wanted to set up my "Unlimited Plex" as fast as possible. This resulted in creating a cloud drive with half-assed settings, like a 10 MB chunk size and so on. Now that I know this isn't optimal, and since it's too late to change the chunk size, I need to create another CloudDrive with better settings and transfer everything over to it.


Downloading everything and re-uploading it from my local PC would take forever, and I want to avoid that. I was thinking about using Google Cloud Compute, so my PC won't have to stay on and, more importantly, there won't be a problem with uploading 6.5 TB again, which on my current connection would take roughly 2 weeks. And that's assuming it goes smoothly without any problems.
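For what it's worth, here's the back-of-the-envelope math behind that estimate; the ~45 Mbit/s uplink in the example is just an assumed figure, not my actual line speed:

```python
# Rough sanity check on the "2 weeks to re-upload 6.5 TB" estimate.
# The 45 Mbit/s uplink is an assumption for illustration only.
def upload_time_days(size_tb: float, uplink_mbps: float) -> float:
    size_bits = size_tb * 1e12 * 8            # decimal TB -> bits
    seconds = size_bits / (uplink_mbps * 1e6)  # bits / (bits per second)
    return seconds / 86400

print(f"{upload_time_days(6.5, 45):.1f} days")  # ~13.4 days, i.e. roughly 2 weeks
```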


Is there any guide on how to do this? I don't mind a reasonably priced paid option either. Any help?





(Sorry for the long post, but your situation has been my life for the past 2 days)
 
I'll chime in, because I'm in the same boat as you.
 
My reasons were a little different. Coming from 10 local HDDs, I had the bright idea to create 10 CloudDrive "drives" on one GSuite account, to match how the content had been stored locally. This was a mistake. Attaching, pinning data 10x, each drive using 5 threads to upload/download to/from the same account. Oh, and the OCD of not having the drives in order in the CloudDrive UI unless they're attached in said order? Nightmare. I decided to consolidate.
 
So after a lot of research here and on reddit, the best solutions I've seen are either to rent a VPS (as per CoveCube guru Christopher, among others) or, in line with what you said, Google Cloud Platform.
 
I took the plunge yesterday around 4PM EST with GCP.
 
And boy am I glad I did. Aside from the $300 credit they offer, it's a helluva learning curve to get everything working just right. With most VPS services I've seen, you pick what you want and sign up. With GCP you can create new instances in real time to get everything exactly the way you want/need. Six virtual machine instances later, I think I have settings that work well. To be fair, I'm actually still testing to see what can work better.
 
***I'd absolutely love to hear from others who have done this. Maybe I don't have to reinvent the wheel here.***
 
Current VM settings:
 
Compute Engine
- 2 vCPU cores (they do max out, but I found 4 cores didn't mean faster uploads)
- 6 GB RAM (maybe too much - it never goes over ~60% usage)
- 50 GB SSD for Windows Server 2016 (cost is the same for all Windows Server versions; 50 GB is the minimum allowed)
- 1x 375 GB "local" SSD scratch disk for the CloudDrive cache
 
This configuration costs $196.18/mo, or about $6.50 a day.
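Quick sanity check on that pricing, and on how far the $300 credit stretches at this rate:

```python
# Verify the quoted cost per day and how long the trial credit lasts at that burn rate.
monthly_usd = 196.18
daily_usd = monthly_usd / 30
print(f"~${daily_usd:.2f}/day")                      # ~$6.54, matching the "$6.50 a day" figure
print(f"credit lasts ~{300 / daily_usd:.0f} days")    # ~46 days on the $300 trial credit
```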
 
Couple of things:
That 375 GB "local" drive is AWFUL. Apparently when you shut down the VM, you lose the drive and can never reload the VM again. I read about it, understood it, tested it, and will not shut down again. I'm tired of reinstalling and reconfiguring Windows.
 
The scratch drive is SLOW, be it SCSI or NVMe. Google claims it's local to the exact vCPU you're using. I call bull on that. Or it's garbage equipment. Transfer speeds from the OS SSD to the NVMe drive are about 25 MB/s. My personal PC transfers from HDD to NVMe SSD at a steady 100+ MB/s. And NVMe > NVMe (copying to itself) is also pretty awful on GCP.

THAT all being said, here's my current dilemma. I ran a transfer overnight, CloudDrive > CloudDrive, through the VM.
 
It started at 1:37 AM EST and completed at 3:18 PM EST for 957 GB of data. Windows "copy" speeds go from 50 MB/s on the download to 0, then back. The CPUs max out (not that I think that matters - even with 4 vCPU cores, and without maxing them out, I didn't see a difference in total transfer time).
 
Overall the transfer rate worked out to just under 70 GB/hour. That seems kinda slow to me.
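For reference, that figure checks out against the timestamps above:

```python
from datetime import datetime

# Check the overnight run: 957 GB copied between 1:37 AM and 3:18 PM EST.
start = datetime(2018, 1, 1, 1, 37)   # the date is arbitrary; only the interval matters
end   = datetime(2018, 1, 1, 15, 18)
hours = (end - start).total_seconds() / 3600
print(f"{957 / hours:.1f} GB/hour")    # ~69.9 GB/hour -> "just under 70 GB/hour"
```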
 
I've tested Source Cloud Drive > OS SSD "Cache" > Local "Cache" NVMe > Destination Cloud Drive, and speeds did not improve.

 

Still tweaking these with every mini-transfer, so please chime in if you have any input on the current settings (summarized in the sketch after this list):

- Prefetching on the source drive is currently on: 1 MB trigger, 1024 MB forward, 1800-second window.
- Cache for the source is 50 GB fixed; cache for the destination is 100 GB fixed (the source cache is listed as 0 used - all of its data is in prefetch - while the destination shows all 100 GB used).
- 20 download threads for the source, 2 upload threads, with "Background I/O" OFF.
- 20 upload threads for the destination, 5 download threads, with "Background I/O" ON.
- No pinning of metadata or directories for the SOURCE. I found it would randomly start pinning data and kill my overall transfer time.
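Summarized as plain data for easy comparison (just my notes, not anything CloudDrive itself reads - the values are the ones listed above):

```python
# Current test configuration for the two CloudDrive drives, as described in the list above.
clouddrive_settings = {
    "source": {
        "prefetch": {"trigger_mb": 1, "forward_mb": 1024, "window_sec": 1800},
        "cache": {"size_gb": 50, "type": "fixed"},
        "download_threads": 20,
        "upload_threads": 2,
        "background_io": False,
        "pin_metadata": False,
        "pin_directories": False,
    },
    "destination": {
        "cache": {"size_gb": 100, "type": "fixed"},
        "download_threads": 5,
        "upload_threads": 20,
        "background_io": True,
    },
}
```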

 

Unfortunately, the bottleneck appears to be downloading and uploading at the same time. When I test mini-transfers DOWN to the cache drive, I get very reasonable speeds, averaging 140 GB/hour. When I test those same mini-transfers UP, I can actually hit about 1 Gbps. But as a straight cloud > cloud transfer, speeds max out at about 250 Mbps until the cache is full. Once the cache is full, things do pick up a bit.
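Putting those numbers in one unit makes the gap easier to see (decimal GB, and treating that upload burst as roughly 1 Gbit/s):

```python
# Compare the observed rates in a common unit.
def gb_per_hour_to_mbps(gbph: float) -> float:
    return gbph * 1000 * 8 / 3600   # decimal GB/hour -> Mbit/s

print(f"download-only:  {gb_per_hour_to_mbps(140):.0f} Mbps")          # ~311 Mbps
print(f"cloud-to-cloud: 250 Mbps = {250 * 3600 / 8000:.0f} GB/hour")    # ~112 GB/hour
```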

 

Again, sorry for the long post. Please feel free to ask questions. I'll be checking the forums here religiously as I sit here for the next 3 days finishing as much of this transfer as I can (end-of-vacation fun!).

 

-Jason



So... I have already tried that. I created a VM with 4 vCPU cores, 6 GB of RAM, a 100 GB SSD, and Windows Server 2016.

I installed StableBit CloudDrive right away, mounted my Plex drive on the VM, and tried downloading a single movie with the copy-paste method.

 

I'm getting 30 GB/h at best :( I wonder what's wrong there...

 

I've shut down the VM for now; there's no point migrating until I fix it.



Still not anywhere close to what I hoped to get, @Colbalt503.

 

My expectations were that *my* bottlenecks would be my server's hardware & download/upload speeds (200/35). But hey, Google shouldn't have those limitations!

 

Well, I did hit 1.08 Gbps uploading from cache (again, GDrive > GDrive). But that was only while the VM wasn't downloading at the exact same time.

 

The bottleneck here, IMO, may be CloudDrive itself more than anything: downloading to a cache/prefetcher, then transferring to either the same physical disk (or a separate local disk) and then into the destination's cache/"To Upload", and then finally uploading.
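As a rough, hand-wavy model of that: if the download and upload legs were fully overlapped, the slower leg would set the pace; if each chunk is effectively handled in series (down, through the cache disk, then up), the legs' times add up instead. Using the mini-transfer figures from my earlier post (~140 GB/hour down, ~1 Gbps ≈ 450 GB/hour up):

```python
# Toy model of the suspected bottleneck: overlapped vs. serialized legs.
def pipelined(down_gbph: float, up_gbph: float) -> float:
    # Fully overlapped: the slower leg limits throughput.
    return min(down_gbph, up_gbph)

def serialized(down_gbph: float, up_gbph: float) -> float:
    # Fully serialized: per-chunk times add, so rates combine harmonically.
    return 1 / (1 / down_gbph + 1 / up_gbph)

print(f"pipelined:  {pipelined(140, 450):.0f} GB/hour")   # ~140 GB/hour
print(f"serialized: {serialized(140, 450):.0f} GB/hour")  # ~107 GB/hour, closer to the ~78 GB/hour observed
```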

 

My overnight test resulted in ~78 GB/hour transferred - only slightly faster than my last tests, and that was with completely revamped virtual hardware (1 TB HDD cache drive, 6 vCPU, 6 GB RAM).

 

Unfortunately, I don't have many other choices. One way or another, 40 TB has to be duplicated, so I'll use this method until the credits dry up.
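At the observed rate, the whole job looks roughly like this (decimal TB, and assuming the ~78 GB/hour holds):

```python
# How long duplicating the full 40 TB takes at the observed ~78 GB/hour.
total_gb = 40 * 1000          # decimal TB -> GB
rate_gbph = 78
hours = total_gb / rate_gbph
print(f"~{hours:.0f} hours, i.e. about {hours / 24:.0f} days")  # ~513 hours, ~21 days
```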

 

-Jason

 

*edit for spelling, punctuation



Yep, I bet the driver is the bottleneck here.

 

Your best bet would likely be two VMs on the same network: share the destination drive over CIFS to the source VM and do the copy that way. You'll have some minor network overhead, but you'll no longer have the concurrent upload and download overhead on a single machine.
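Something like this from the source VM, assuming the destination VM exposes its CloudDrive volume as an SMB share (the hostname, share name, and paths below are placeholders):

```python
import subprocess

# Map the destination VM's shared CloudDrive volume, then copy with robocopy.
# \\vm-dest\plex, Z:, and D:\Plex are hypothetical names for illustration.
subprocess.run(["net", "use", "Z:", r"\\vm-dest\plex"], check=True)

subprocess.run([
    "robocopy", r"D:\Plex", r"Z:\Plex",
    "/E",            # include subdirectories, even empty ones
    "/Z",            # restartable mode, in case the copy is interrupted
    "/MT:8",         # multithreaded copy
    "/R:2", "/W:5",  # limited retries so one bad file doesn't stall the job
])
```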
