filthyrich

Backup / Clone of a Google Drive account

Question

Hi!

 

Just before I pull the trigger on buying this software: does anybody have a good method to back up a Google Drive account to a secondary account? It doesn't NEED to be a second Google account, but that would be simpler.

 

Of course, I need this backed-up drive to be able to re-create the same drives in Windows.

 

Thanks!


21 answers to this question


Christopher,

 

I copied the folder of one of my StableBit CloudDrive drives from one Google Drive account to another using rclone, then moved the copied "CloudPart" folder into the existing "StableBit CloudDrive" folder on the second account. (I use StableBit CloudDrive on two separate Google Drive accounts.) I tried to get the software to see the folder and let me mount it, but it wouldn't recognize that there was a new drive.

 

I was hoping to be able to clone the folders between accounts as a backup, but it looks like StableBit CloudDrive will only read drives from the Google Drive account they were originally created under. I tried reauthorizing and restarting the server.

 

EDIT: I was trying to get this to work because rclone can copy the contents of one Google Drive to another entirely server-side, with no local bandwidth usage.


  1. From the Primary account, share whatever files/folders you want with the Secondary account.

  2. Go to the Secondary account, and click "Shared with me".

  3. Right-click the files/folders from the Primary drive, and click "Add to my drive". ** Note ** This is not the end! Your files are currently still owned by the Primary drive and will be removed if the Primary drive no longer shares them with the Secondary drive!!

  4. Because rclone with Google Drive supports server-side copying within the same remote (meaning you don't have to download and re-upload the files), you can run something like "rclone copy secondaryGDrive:/primaryDriveFilesFolderPath/ secondaryGDrive:/newPathOnSecondaryDrive".

  5. Doing this makes your Secondary drive the owner of the newly copied files and folders. The files will remain on your Secondary drive even if the Primary drive stops sharing with you.
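The server-side copy in the last two steps can also be scripted. A minimal Python sketch (the remote name and both paths are the placeholder values from the example above, not real ones; `--dry-run` is included so nothing is copied until you remove it):

```python
import subprocess

def rclone_server_side_copy(remote, src_path, dst_path, dry_run=True):
    """Build (and optionally run) an rclone copy where source and destination
    live on the same remote, so Google performs the copy server-side."""
    cmd = [
        "rclone", "copy",
        f"{remote}:{src_path}",
        f"{remote}:{dst_path}",
        "--verbose",
    ]
    if dry_run:
        cmd.append("--dry-run")  # preview only; remove to actually copy
    return cmd  # pass to subprocess.run(cmd, check=True) to execute

cmd = rclone_server_side_copy("secondaryGDrive",
                              "primaryDriveFilesFolderPath",
                              "newPathOnSecondaryDrive")
print(" ".join(cmd))
```

Because both sides of the copy name the same remote, rclone asks Google to duplicate the files in place rather than routing the data through your machine.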

 


It transfers the chunks pretty fast, but I had to run the command about ten times to get all of the files copied because of Google API limits. It took about 20 minutes to copy 15,000 files from a 350 GB drive. None of this really matters right now, though, since the copied drive won't mount in StableBit CloudDrive.
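Re-running the command by hand works, but the retries can be automated. A rough sketch of such a loop, assuming rclone's convention of exiting non-zero when any transfer failed (the attempt count and backoff are arbitrary illustrative values):

```python
import subprocess
import time

def copy_until_clean(cmd, max_attempts=10, backoff=30):
    """Re-run a copy command until it exits cleanly, since Google API
    rate limits can leave some files uncopied on any single pass."""
    for attempt in range(1, max_attempts + 1):
        result = subprocess.run(cmd)
        if result.returncode == 0:
            return attempt  # everything copied on this pass
        time.sleep(backoff)  # give the API quota time to recover
    raise RuntimeError(f"still failing after {max_attempts} attempts")
```

Skipped chunks are cheap to retry because rclone only re-copies files missing from the destination.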


All of this doesn't really matter right now since the copied drive won't mount in stablebit.

 

This is the biggest disappointment yet. With the precarious nature of what people are putting online, it would be really nice to have a working backup.


It transfers the chunks pretty fast but I had to run the command about 10 times to get all of the files copied because of Google API limits.  It took about 20 minutes to copy 15,000 files from a 350GB drive.  All of this doesn't really matter right now since the copied drive won't mount in stablebit.

 

Interesting. Did you copy it exactly as a new folder (StableBit CloudDrive) or did you try to integrate it with already established disks?


Interesting. Did you copy it exactly as a new folder (StableBit CloudDrive) or did you try to integrate it with already established disks?

I copied it to a new folder in the root of the second Google Drive account, then moved the folder into the StableBit CloudDrive folder, retaining the original folder name. Of course, so far there is no reason to do this, since StableBit CloudDrive can't read the cloned folder from a different Google account.


I copied it to a new folder in the root of the 2nd Google Drive account then moved the folder into the Stablebit Clouddrive folder retaining the original folder name.  Of course, so far there is no reason to do this since stablebit can't read the cloned folder from a different Google Account.

I hope this is a feature that they add, as I have a few TB of movies sitting on a drive that I have no access to  :(


Just a heads up, when moving around the folders, you may need to "refresh" the drive list in StableBit CloudDrive. This queries the provider, and looks for the specific data.

 

Also, you may need to create at least one drive so that you have the "StableBit CloudDrive" folder present already.


Just a heads up, when moving around the folders, you may need to "refresh" the drive list in StableBit CloudDrive. This queries the provider, and looks for the specific data.

 

Also, you may need to create at least one drive so that you have the "StableBit CloudDrive" folder present already.

I tried doing both of these, but unfortunately the drive still didn't show up.


I managed to migrate a drive without much issue. I followed the instructions that triadpool provided, and:

  1. Used the latest BETA to eliminate duplicate chunks, as rclone can't distinguish them when copying by name.
  2. Used "--files-from=" with a list of every chunk in the CONTENTS directory.
    • If you're following these instructions and want to enumerate the chunks again because new data was added, you will either have to delete the file in *-data-ChunkIdStorage and re-mount, or wait for https://stablebit.com/Admin/IssueAnalysis/27357 to gain traction, where I requested the ability to decrypt the database ourselves.
    • If you're following these instructions and want to fetch a list of all the chunks:
      • Download DB Browser for SQLite: http://sqlitebrowser.org/
      • Navigate to: %programdata%\StableBit CloudDrive\Service\Db\ChunkIds
      • Copy the .db file while there is no activity on the disk. (This avoids missing any chunks and sidesteps the file lock.)
      • Open the copy, go to Execute SQL, and run "SELECT `Chunk` FROM `ChunkIds`"
      • Export to CSV by clicking the bottom-middle button. (New-line character: Windows)
      • Use whatever tool you like to insert the UID in front of the chunk names. (Example regex - Find: "^" Replace: "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-")
    • The reason for using "--files-from=" at all is that rclone uses the v2 API, which can fail to find every file in the directory.
  3. Used my own client_id & secret, though I can't vouch that it was faster; I just did it for the neat API graphs and guaranteed API throughput.
  4. Added "--verbose --ignore-times --no-traverse --stats=30s --transfers=8" as additional flags to the rclone copy. (The 8 transfer threads may need tweaking depending on which client_id & secret you use.)
  5. You will also need to copy the remaining folders for this to work, of course. (Excluding ChunkIdStorage, as it contains references to fileIds, and these are invalidated by copying to the different account.)

Things not to do: don't start the copy while you've got the drive mounted (like I did  :P).
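The chunk-list extraction in step 2 can also be done without the GUI browser, since the database is plain SQLite. A sketch using Python's built-in sqlite3 module (the table and column names are taken from the steps above; the function, its arguments, and the example UID are illustrative, and yours will differ):

```python
import sqlite3

def dump_chunk_list(db_copy_path, drive_uid, out_path):
    """Read every chunk id from a *copy* of the ChunkIds database
    (copy it while the disk is idle to avoid the file lock) and write
    an rclone --files-from list with the drive UID prefixed."""
    con = sqlite3.connect(db_copy_path)
    try:
        rows = con.execute("SELECT Chunk FROM ChunkIds").fetchall()
    finally:
        con.close()
    # newline="\r\n" gives Windows line endings, matching the CSV export step
    with open(out_path, "w", newline="\r\n") as f:
        for (chunk,) in rows:
            f.write(f"{drive_uid}-chunk-{chunk}\n")
    return len(rows)
```

The resulting file can then be fed to rclone via "--files-from=" as in step 2.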

Edited by Choruptian
Clarification about the Chunk ID database ... it must be deleted on the new provider as it's no longer valid


Things to not do: Don't start the copy when you've got the drive mounted (Like I did  :P).

 

Oh, god, yeah... that's bad. :P

 

And yeah, you'd want the "metadata" folder, as this is ABSOLUTELY necessary. This contains the drive's information, including revision and "geometry".  Without it, you won't be able to mount the drive. Period.  

 

 

And yeah, seeing all the usage info is very nice. :)

(but I'm a "progress bar nerd")

