kenwshmt Posted April 11

I need some assistance, fast. About a year ago, Google changed their storage policy. I had exceeded what was then the unlimited storage by about 12 TB, and I immediately started evacuating it. I only got it down about 10 GB from 16 TB before Google suspended me into read-only mode: the account can delete, but it cannot write. I have continued to download the drive; it is now down to about 7 TB as far as the pool drive is concerned, but unless DrivePool has a mode to 'bad map' and delete the sector files from the pool, I am still at 12 TB on Google's side and way over its capacity. I do appreciate that CloudDrive is discontinuing Google Drive support, and I still have some time, but I need to delete as much from the Google path as possible, and I can't do that without being able to write back to it, which I cannot. A 'delete bad/read-only but empty' option would be a big help.
Shane Posted April 11

I'm not aware of a way to manually discern and delete "empty" CloudDrive chunk files. @Christopher (Drashna), is that possible without compromising the ability to continue using a (read-only) cloud drive? Would it prevent later converting the cloud drive to local storage?

I take it there's something (download speeds? a quick calculation suggests 7 TB over 30 days would need an average of about 22 Mbps, not including overhead) preventing you from finishing copying the remaining 7 TB on the cloud drive to local storage before the May 15th deadline?
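For anyone wanting to check that figure, here's the back-of-envelope arithmetic behind the ~22 Mbps estimate (a sketch assuming decimal units, i.e. 1 TB = 10^12 bytes, and zero protocol overhead):

```python
# Bandwidth needed to sustain a 7 TB download over a 30-day window.
# Assumes decimal units (1 TB = 10**12 bytes) and no overhead.
remaining_bytes = 7 * 10**12
window_seconds = 30 * 24 * 60 * 60
required_mbps = remaining_bytes * 8 / window_seconds / 10**6
print(f"~{required_mbps:.1f} Mbps sustained")  # ~21.6 Mbps
```

Real-world overhead (TLS, API throttling, retries) pushes the practical requirement somewhat higher.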
Christopher (Drashna) Posted April 11

Make sure that you're on the latest release version; there are some changes to better handle the case where the provider is read-only (you should be on at least version 1.2.5.1670).

As for the empty chunks, there isn't really a way to discern that, unfortunately. If you have the space, the best bet is to download the entire directory from Google Drive, convert the data/drive, and then move the data out.
kenwshmt Posted April 11 (Author)

I have been using robocopy in move/overwrite mode. It 'deletes' the file from the cloud drive: the file is no longer shown, and the space is shown as free, so the mapping is being updated to some degree even on a read-only path. If there were some way to scan the Google Drive path and 'blank' it, i.e. bad-map it locally in the BAM map, that would force deletion of the relevant files from the Google path. That would do wonders.

GlassWire shows I'm getting about 2 MB/s down. That's nowhere near the maximum, but it works out to about 100 GB a day.
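As a sanity check on those two numbers (a sketch, assuming decimal megabytes and a perfectly flat rate, which real transfers won't sustain):

```python
# Convert the observed ~2 MB/s download rate into a daily total.
# Assumes decimal units (1 MB = 10**6 bytes) and a steady 24/7 rate.
rate_mb_per_s = 2
gb_per_day = rate_mb_per_s * 10**6 * 86400 / 10**9
print(f"~{gb_per_day:.0f} GB/day at a steady {rate_mb_per_s} MB/s")  # ~173 GB/day
```

A flat 2 MB/s would be ~173 GB/day, so the observed ~100 GB/day suggests the transfer isn't running flat-out around the clock (stalls, throttling, or idle periods).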
Shane Posted April 12

If that's 100 GB (gigabytes) a day then you'd only get about another 3 TB done by the deadline (100 Gb, gigabits, would be much worse), so unless you can obtain your own API key to finish the other 4 TB post-deadline (and hopefully Google doesn't do anything to break the API during that time), that won't be enough to finish before May 15th. With no way to discern empty chunks, I'd second Christopher's recommendation to instead begin manually downloading the cloud drive's folder now (I'd also be curious to know what download rate you get for that).
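The projection behind that "about another 3 TB" works out as follows (a sketch assuming the thread dates fall in 2024, a steady ~100 GB/day, and decimal TB):

```python
# Projected progress at ~100 GB/day between the post date and the deadline.
# Assumption: the thread is from 2024, matching the May 15th cutoff mentioned above.
from datetime import date

rate_tb_per_day = 0.1                                      # ~100 GB/day observed
days_left = (date(2024, 5, 15) - date(2024, 4, 12)).days   # post date to deadline
done_by_deadline = rate_tb_per_day * days_left
remaining = 7 - done_by_deadline
print(f"{done_by_deadline:.1f} TB by deadline, ~{remaining:.1f} TB left")
```

That yields roughly 3.3 TB transferred by the deadline and close to 4 TB still stranded, which is the gap an API key would have to cover.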