
Backup / Clone of a Google Drive account


filthyrich

Question

Hi!

 

Just before I pull the trigger on buying this software, does anybody have a good method to back up a Google Drive account to a secondary account? It doesn't NEED to be a second Google account, but that would be simpler.

 

Of course, I need this backed-up drive to be able to re-create the same drives in Windows.

 

Thanks!


21 answers to this question


Christopher,

 

I have copied the folder of one of my StableBit CloudDrive drives from one Google Drive account to another using rclone. I moved this copied "CloudPart" folder into the existing "StableBit CloudDrive" folder on this 2nd Google Drive account. (I use StableBit CloudDrive on 2 separate Google Drive accounts.) I tried to get StableBit CloudDrive to see the folder and allow me to mount it, but the software wouldn't recognize that there was a new drive.

 

I was hoping to be able to clone the folders between accounts to have a backup, but it looks like StableBit CloudDrive will only read the drives from the Google Drive account that they were originally created under. I tried reauthorizing and restarting the server.

 

EDIT: I was trying to get this to work because rclone offers a way to copy the contents of one Google Drive to another entirely server-side, with no local bandwidth usage.


  1. From the Primary account, share whatever files/folders you want with the Secondary account.

  2. Go to the Secondary account and click "Shared with me".

  3. Right-click on the files/folders from the Primary drive and click "Add to my drive". ** Note ** This is not the end! Your files are currently still owned by the Primary drive and will be removed if the Primary drive no longer shares them with the Secondary drive!!

  4. Because rclone with Google Drive supports server-side copying within the same remote (meaning you don't have to download/re-upload the files), you can do something like "rclone copy secondaryGDrive:/primaryDriveFilesFolderPath/ secondaryGDrive:/newPathOnSecondaryDrive".

Doing this will allow your Secondary drive to be the owner of the newly copied Primary drive's files and folders. The files will remain on your Secondary drive even if the Primary drive stops sharing with you.
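
If it helps, once the copy finishes you can also sanity-check it without downloading anything. The remote name and paths below are the same placeholders as in the command above, so substitute your own:

    # "rclone check" compares the shared Primary folder against the copy and lists
    # anything missing or different; "rclone size" gives a quick object count and
    # total size to eyeball.
    rclone check "secondaryGDrive:primaryDriveFilesFolderPath" "secondaryGDrive:newPathOnSecondaryDrive"
    rclone size "secondaryGDrive:newPathOnSecondaryDrive"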

 


It transfers the chunks pretty fast, but I had to run the command about 10 times to get all of the files copied because of Google API limits. It took about 20 minutes to copy 15,000 files from a 350 GB drive. None of this really matters right now, though, since the copied drive won't mount in StableBit CloudDrive.
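
(If anyone wants to avoid re-running it by hand, rclone has flags that might help with the API limits. I haven't benchmarked these, so the numbers and paths are guesses/placeholders:)

    # --retries re-runs the whole copy a few extra times on failure, and --tpslimit
    # throttles API transactions per second so Google's rate limits are hit less often.
    rclone copy "secondaryGDrive:primaryDriveFilesFolderPath" "secondaryGDrive:newPathOnSecondaryDrive" --retries 10 --tpslimit 8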


It transfers the chunks pretty fast, but I had to run the command about 10 times to get all of the files copied because of Google API limits. It took about 20 minutes to copy 15,000 files from a 350 GB drive. None of this really matters right now, though, since the copied drive won't mount in StableBit CloudDrive.

 

Interesting. Did you copy it exactly as a new folder (StableBit CloudDrive) or did you try to integrate it with already established disks?


Interesting. Did you copy it exactly as a new folder (StableBit CloudDrive) or did you try to integrate it with already established disks?

I copied it to a new folder in the root of the 2nd Google Drive account, then moved the folder into the StableBit CloudDrive folder, retaining the original folder name. Of course, so far there is no reason to do this, since StableBit CloudDrive can't read the cloned folder from a different Google account.
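
(For what it's worth, that move can also be done server-side with rclone instead of the web UI; the remote and CloudPart folder names here are just placeholders:)

    # Server-side move within the same remote, keeping the original folder name.
    rclone moveto "secondaryGDrive:CloudPart-xxxx" "secondaryGDrive:StableBit CloudDrive/CloudPart-xxxx"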


I copied it to a new folder in the root of the 2nd Google Drive account, then moved the folder into the StableBit CloudDrive folder, retaining the original folder name. Of course, so far there is no reason to do this, since StableBit CloudDrive can't read the cloned folder from a different Google account.

I hope this is a feature that they add, as I have a few TB of movies sitting on a drive that I have no access to. :(


Just a heads up, when moving around the folders, you may need to "refresh" the drive list in StableBit CloudDrive. This queries the provider, and looks for the specific data.

 

Also, you may need to create at least one drive so that you have the "StableBit CloudDrive" folder present already.


Just a heads up, when moving around the folders, you may need to "refresh" the drive list in StableBit CloudDrive. This queries the provider, and looks for the specific data.

 

Also, you may need to create at least one drive so that you have the "StableBit CloudDrive" folder present already.

I tried doing both of these, but unfortunately the drive still doesn't show up.


I managed to migrate a drive without much issue by following the instructions that triadpool provided, plus the following:

  1. Use the latest BETA to eliminate duplicate chunks, as rclone can't distinguish them when copying by name.
  2. Used "--files-from=" with a list of every chunk in the CONTENTS directory (a command sketch is at the end of this post).
    • If you're following these instructions and want to enumerate the chunks again because new data was added, you will either have to delete the file in *-data-ChunkIdStorage and re-mount, or wait for https://stablebit.com/Admin/IssueAnalysis/27357 to gain traction, where I requested the ability to decrypt the database ourselves.
    • If you're following these instructions and want to fetch a list of all the chunks:
      • Download: http://sqlitebrowser.org/
      • Navigate to: %programdata%\StableBit CloudDrive\Service\Db\ChunkIds
      • Copy the .db file while there is no activity on the disk. (This avoids missing any chunks and avoids the file lock.)
      • Open the file, go to Execute SQL and run "SELECT `Chunk` FROM `ChunkIds`"
      • Export to CSV by clicking the bottom-middle button. (New line character: Windows)
      • Use whatever tool you like to insert the Uid in front of the chunks. (Example: regex - Find: "^", Replace: "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-")
    • The reason for using "--files-from=" at all is that rclone uses the v2 API, which causes it to not find every file in the directory.
  3. Used my own client_id & secret, but I can't say whether that was faster; I just did it for the neat API graphs and guaranteed API throughput.
  4. Added "--verbose --ignore-times --no-traverse --stats=30s --transfers=8" as additional flags to the rclone copy. (The 8 transfer threads may need tweaking depending on which client_id & secret you use.)
  5. You will, of course, also need to copy the remaining folders for this to work. (Excluding ChunkIdStorage, as it contains references to fileIds, and these are invalidated by copying to a different account.)

Things not to do: don't start the copy while you've got the drive mounted (like I did :P).
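
In case it helps anyone follow along, here is roughly what steps 2 and 4 look like as commands. Treat this as a sketch rather than something to run blindly: it assumes the sqlite3 and sed command-line tools are available (e.g. via Git Bash; otherwise the DB Browser + regex route above does the same job), and the database filename, Uid, remote name and paths are placeholders to replace with your own:

    # 1) Build the --files-from list: dump the Chunk column from a copy of the
    #    ChunkIds database and prefix every entry with the drive's Uid (the same
    #    thing the regex find/replace step above does).
    sqlite3 ChunkIds-copy.db "SELECT Chunk FROM ChunkIds;" | sed 's/^/be66510c-460d-4bf8-bcd4-c58480630d19-chunk-/' > chunks.txt

    # 2) Server-side copy of just the listed chunks into the CONTENTS folder of the
    #    new drive, with the extra flags from step 4. Both paths are placeholders.
    rclone copy "secondaryGDrive:sharedPrimaryPath/CONTENTS" "secondaryGDrive:StableBit CloudDrive/newDriveFolder/CONTENTS" --files-from=chunks.txt --verbose --ignore-times --no-traverse --stats=30s --transfers=8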

Edited by Choruptian
Clarification about the Chunk ID database ... it must be deleted on the new provider as it's no longer valid

Things not to do: don't start the copy while you've got the drive mounted (like I did :P).

 

Oh, god, yeah... that's bad. :P

 

And yeah, you'd want the "metadata" folder, as this is ABSOLUTELY necessary. This contains the drive's information, including revision and "geometry".  Without it, you won't be able to mount the drive. Period.  

 

 

And yeah, seeing all the usage info is very nice. :)

(but I'm a "progress bar nerd")


I've added a part about duplicate chunks to my migration post above, and I'm reposting this to make it more visible.

