
Google Drive + Read-only Enforced + Allocated drive


fjellebasse

Question

Hello!

As many of you are already aware, Google will soon enforce read-only mode for accounts that exceed their storage cap. New uploads won't be possible.

Our team has been discussing one aspect of this. As far as we understand, Google won't delete data that has already been uploaded.

If you have an allocated drive within StableBit CloudDrive that still has free space available (mounted against Google Drive), will you still be able to upload to that already-allocated drive? Or do you think Google will change the API so that uploads are blocked entirely, regardless of the free space on the drive?


14 answers to this question

Recommended Posts


Honestly don't know. I was hoping to find time to create a testbed and experiment with the Local Disk provider as a stand-in (since I can toggle the VM volume read-only and see what happens), but this past week has been hectic and I needed sleep more; and even then I couldn't be sure that Local Disk behaves the same as Google Drive in terms of provider behaviour. If I had to guess, the outcome would be similar to unexpectedly losing the connection. Frankly, I'd suggest finishing your migration before the deadline arrives, or ensuring you have duplication/backups elsewhere. At the very least, don't have anything in the upload queue when the date arrives, and mark the drive read-only yourself beforehand.


On 8/13/2023 at 5:55 AM, Shane said:

mark the drive read-only yourself beforehand.

Thanks! I wasn't aware you could do that. I won't have enough time to download everything beforehand, but I've already moved everything important. There's still around 100 TB of media data left, which I plan to move to local storage over the next four months.



I'm going to be hit by this on the 28th of August myself, and I was planning to set the drive read-only a few days beforehand. But to my dismay, when I tried it, StableBit CloudDrive told me: "A cloud drive that is part of a StableBit DrivePool pool cannot be made read-only."

I've been downloading the data that's only stored in the cloud for four weeks now, but Google is giving me around 40-50 Mbit/s downstream, and at this pace I'm still about fifteen days shy of grabbing all my data. It's almost as if they anticipated people trying to migrate away from their ecosystem. Before I started the process, I would easily hit 500-600 Mbit/s with multiple threads.
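For a back-of-the-envelope check: assuming roughly 7 TB remained at this point (an inference from a later post in this thread, a bit over 3 TB moved plus around 4 TB to go), the fifteen-day figure is consistent with a 40-50 Mbit/s downstream:

```python
# Rough sanity check of the "fifteen days" estimate above.
# The ~7 TB remaining figure is an assumption, inferred from a later post.
rate_mbit_s = 45                                      # midpoint of observed 40-50 Mbit/s
remaining_tb = 7.0
bytes_per_day = rate_mbit_s / 8 * 1_000_000 * 86_400  # decimal units throughout
days = remaining_tb * 1e12 / bytes_per_day            # about 14.4 days
```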

Now I'm wondering if I should just take the scary route of hitting "Remove disk" and letting DrivePool handle fetching any unduplicated data from the CloudPool to the LocalPool.

Does anyone know how exactly "Duplicate files later" works in conjunction with a CloudPool?


4 hours ago, red said:

Does anyone know how exactly "Duplicate files later" works in conjunction with a CloudPool?

It's the same for both local and cloud drives being removed from a pool: "Normally when you remove a drive from the pool the removal process will duplicate protected files before completion. But this can be time consuming so you can instruct it to duplicate your files later in the background."

So normally: for each file on the drive being removed, the removal process ensures the duplication level is maintained on the remaining drives by making copies as necessary, and only then deletes the file from the drive being removed. E.g. if you've got 2x duplication normally, any file that was on the removed drive will still have 2x duplication on your remaining drives (assuming you have at least 2 remaining drives).

With duplicate files later: for each file on the drive being removed it only makes a copy on the remaining drives if none already exist, then deletes the file from the drive being removed. DrivePool will later perform a background duplication pass after removal completes. E.g. if you've got 2x duplication normally, any file that was on the removed drive will only be on one of your remaining drives until the background pass happens later.

In short, DFL means "if at least one copy exists on the remaining drives, don't spend any time making more before deleting the file from the drive being removed."
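The two modes can be sketched as a small simulation. To be clear, this is a hedged illustration of the bookkeeping described above, not DrivePool's actual implementation; the function name and pool layout are hypothetical:

```python
# Toy model of the two removal modes described above -- NOT DrivePool's
# real code, just the duplication bookkeeping it describes.
# Each pool drive is modelled as a set of file names.

def remove_drive(pools, removed, duplication=2, duplicate_later=False):
    """Remove `removed` from `pools` (dict: drive name -> set of files).
    Returns (pools, files still needing the background duplication pass)."""
    remaining = [d for d in pools if d != removed]
    needs_pass = set()
    for f in sorted(pools[removed]):
        holders = sum(f in pools[d] for d in remaining)
        if duplicate_later:
            # Only guarantee that at least one copy survives the removal.
            wanted = 1 if holders == 0 else 0
        else:
            # Restore the full duplication level before deleting the file.
            wanted = max(0, duplication - holders)
        for d in remaining:
            if wanted == 0:
                break
            if f not in pools[d]:
                pools[d].add(f)
                wanted -= 1
        if duplicate_later and sum(f in pools[d] for d in remaining) < duplication:
            needs_pass.add(f)  # the later background pass re-duplicates these
    del pools[removed]
    return pools, needs_pass
```

With a 2x-duplication pool, removing a drive with `duplicate_later=True` can leave its files at a single surviving copy until the background pass runs; with the default, every file is back at 2x before the drive is gone.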

Note #1: DFL will have no effect if files are not duplicated in the first place.

Note #2: if you don't have enough time to Remove a drive from your pool normally (even with "duplicate files later" ticked), it is possible to manually 'split' the drive off from your pool: stop the DrivePool service, rename the hidden poolpart folder on the drive to be removed (e.g. from poolpart.identifier to xpoolpart.identifier), then restart the DrivePool service. You should then be able to set the cloud drive read-only. This has the side-effect of making your pool read-only as well, since the cloud drive becomes "missing" from the pool, but you can then manually copy the remaining files in the cloud poolpart into a remaining connected poolpart and - once you're sure you've gotten everything - fix the pool by forcing removal of the missing drive. Ugly, but doable if you're careful.
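The rename step in Note #2 can be sketched like this. A hedged example only: the drive root is hypothetical, and stopping/restarting the DrivePool service beforehand and afterwards (e.g. via services.msc) is left as a manual step:

```python
# Sketch of the manual 'split' rename from Note #2. This only performs
# the folder rename -- stop the StableBit DrivePool service before
# running it and restart the service afterwards (a manual step here).
from pathlib import Path

def split_poolpart(drive_root):
    """Rename the hidden poolpart.<identifier> folder on drive_root to
    xpoolpart.<identifier> so DrivePool no longer claims the drive."""
    root = Path(drive_root)
    parts = [p for p in root.iterdir()
             if p.is_dir() and p.name.lower().startswith("poolpart.")]
    if len(parts) != 1:
        raise RuntimeError(f"expected one poolpart folder, found {len(parts)}")
    target = parts[0].with_name("x" + parts[0].name)
    parts[0].rename(target)  # e.g. PoolPart.abc123 -> xPoolPart.abc123
    return target
```

After copying the remaining files out of the renamed folder, the missing drive can be force-removed in the DrivePool UI as described above.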



Thanks for the added info. For anyone else reading this: everything seems to work just fine even though Google itself has marked the drive read-only. I've now set balancing rules so that the LocalPool can contain both unduplicated and duplicated data, while the CloudPool may only contain duplicated data. Triggering "Balance" after that change has continued the process of moving files from cloud to local. So far in that mode I've been able to move a bit over 3 TB, with around 4 TB to go 👍


On 8/28/2023 at 4:19 AM, red said:

I'll add for anyone else reading this that it seems like everything works just fine even though Google itself has marked the drive read-only.

Did you set your cloud drive to read-only? Are you still able to download files?



For reference, the beta versions include some changes to help address this:

 

.1648
* Fixed an issue where a read-only force attach would fail to mount successfully if the storage provider did not have write access and the drive was marked as mounted.
.1647
* Fixed an issue where read-only mounts would fail to mount drives when write access to the storage provider was not available.
.1646
* [Issue #28770] Added the ability to convert Google Drive cloud drives stored locally into a format compatible with the Local Disk provider.
    - Use the "CloudDrive.Convert GoogleDriveToLocalDisk" command.

 


21 hours ago, tphank said:

Did you set your cloud drive to read only?  Still able to download files?


No, I couldn't, as the drive was part of a DrivePool pool and CloudDrive wouldn't let me. I just paused the uploads (though they un-paused themselves a few times). I was able to get all the data out and have now removed the disks.

