

Backup / Clone of a Google Drive account

21 replies to this topic

#21 Christopher (Drashna)

Christopher (Drashna)

    Customer and Technical Support

  • Administrators
  • 8,208 posts
  • LocationSan Diego, CA, USA

Posted 14 February 2017 - 12:51 AM

Things not to do: Don't start the copy when you've got the drive mounted (like I did :P).


Oh, god, yeah... that's bad. :P


And yeah, you'd want the "metadata" folder, as it is ABSOLUTELY necessary: it contains the drive's information, including revision and "geometry". Without it, you won't be able to mount the drive. Period.



And yeah, seeing all the usage info is very nice. :)

(but I'm a "progress bar nerd")


Christopher Courtney

aka "Drashna"

Microsoft MVP for Windows Home Server 2009-2012

Lead Moderator for We Got Served

Moderator for Home Server Show


This is my server


Lots of "Other" data on your pool? Read about what it is here.

#22 Choruptian



  • Members
  • 27 posts

Posted 20 February 2017 - 12:55 PM

I managed to migrate a drive without much issue by following the instructions that triadpool provided, plus the following:

  1. Used the latest BETA to eliminate duplicate chunks, as rclone can't distinguish them when copying by name.
  2. Used "--files-from=" with a list of every chunk in the CONTENTS directory.
    • If you're following these instructions and you want to enumerate the chunks again because new data was added, then you will either have to delete the file in *-data-ChunkIdStorage and re-mount, or wait for https://stablebit.co...eAnalysis/27357 to gain traction, where I requested the ability to decrypt the database ourselves.
    • If you're following these instructions and you want to fetch a list of all the chunks then:
      • Download: http://sqlitebrowser.org/
      • Navigate to: %programdata%\StableBit CloudDrive\Service\Db\ChunkIds
      • Copy the .db file while there is no activity on the disk. (This avoids missing any chunks and sidesteps the file lock.)
      • Open the file, go to Execute SQL and run "SELECT `Chunk` FROM `ChunkIds`" 
      • Export to CSV by clicking the bottom-middle button. (New line character: Windows)
      • Use whatever tool you like to insert the UID in front of the chunks. (Example: Regex - Find: "^" Replace: "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-")
    • The reason for using "--files-from=" at all is that rclone uses the v2 API, which can fail to find every file in the directory.
  3. Used my own client_id & secret, but I can't say whether that was faster; I did it for the neat API graphs and guaranteed API throughput.
  4. Added "--verbose --ignore-times --no-traverse --stats=30s --transfers=8" as additional flags to the rclone copy. (The 8 transfer threads may need tweaking depending on which client_id & secret you use.)
  5. You will also need to copy the remaining folders for this to work, of course. (Excluding ChunkIdStorage, as it contains references to fileIds, and these are invalidated by copying to a different account.)
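The chunk-list part of step 2 can also be done without SQLite Browser and a separate regex tool. Here's a minimal Python sketch of the same workflow; the UID shown is the example from the regex above, and the db filename and output path are placeholders you'd substitute with your own:

```python
import sqlite3
import shutil
import tempfile
from pathlib import Path

# Example values only; substitute your own drive UID and ChunkIds db path.
DRIVE_UID = "be66510c-460d-4bf8-bcd4-c58480630d19"
DB_PATH = Path(r"C:\ProgramData\StableBit CloudDrive\Service\Db\ChunkIds") / "example.db"

def build_files_from(db_path, uid, out_path):
    """Snapshot the db (working on a copy avoids the file lock), query every
    chunk id, and write one '<uid>-chunk-<n>' entry per line, ready to feed
    to rclone via --files-from."""
    with tempfile.TemporaryDirectory() as tmp:
        snapshot = Path(tmp) / "chunkids.db"
        shutil.copy2(db_path, snapshot)  # copy first, as in the step above
        con = sqlite3.connect(snapshot)
        try:
            # Same query as the Execute SQL step in SQLite Browser
            chunks = [row[0] for row in con.execute("SELECT `Chunk` FROM `ChunkIds`")]
        finally:
            con.close()
    # Prefix each chunk with the UID, replacing the regex find/replace step
    lines = [f"{uid}-chunk-{chunk}" for chunk in chunks]
    Path(out_path).write_text("\n".join(lines) + "\n")
    return lines
```

The resulting file is what you'd pass as `--files-from=<path>` in the rclone copy. Like the manual steps, this only captures chunks known to the db at the time of the copy.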

Things not to do: Don't start the copy when you've got the drive mounted (like I did :P).
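Putting steps 2 and 4 together, the final rclone invocation would look roughly like this. The remote names and paths are placeholders for whatever you configured in rclone, and the chunk-list filename is hypothetical:

```python
# Assemble the rclone copy command from the flags listed in step 4,
# plus the --files-from list built in step 2.
# "src:" and "dst:" stand in for your configured rclone remotes.
flags = [
    "--verbose",
    "--ignore-times",
    "--no-traverse",
    "--stats=30s",
    "--transfers=8",                 # may need tweaking per client_id quota
    "--files-from=chunk-list.txt",   # the UID-prefixed list from step 2
]
command = ["rclone", "copy", *flags, "src:CloudDrive-folder", "dst:CloudDrive-folder"]
print(" ".join(command))
```

Running this with the drive dismounted (per the warning above) copies only the enumerated chunks; remember the remaining folders from step 5 still need to be copied separately.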


Edited to add the part about duplicate chunks, and to make the post more visible.

