
All Drives shown as RAW, all drives are GDrive cloud drives


Gandalf15

Question

G'day

As we all know, there was an issue with Google Drive around the globe last night. It has since come back, and all my drives are shown in CloudDrive as normal (I can see how much space is used, how much is free, etc.), but in Windows Explorer they all have a drive letter without the usual bar showing used and free space. When I click on them, Windows asks me to format them. In Disk Management they are shown as RAW.

 

All 3 drives are on different Google Drive accounts. What are the odds that all 3 are hit by such a problem at once?

Can anyone help me? If all 3 are broken now, I have lost all my data (yes, it was duplicated 3 times across those drives...).

Cheers!

Edit: what I have tried so far: reattaching one drive did not help. Detaching and reattaching did not help either; the drive is still shown as RAW. chkdsk /f says "not enough space" but finds lots of errors. I will take any advice that could solve this. To me it looks like StableBit did something wrong: it started with the Google Drive outage, but the chunks and everything are still there in the cloud... Please, I am pretty close to a meltdown if everything is gone...


Recommended Posts


A file system showing up as RAW in Disk Management indicates a corrupted file system. It could be some strange issue with CloudDrive, but it could also be file system data that was lost on Google's end during the service disruption. That's probably not the answer you wanted to hear, but it might still be salvageable. How large are the volumes, and what file system are they? The chkdsk space error makes me wonder if you're using an NTFS volume larger than 60 TB, which is larger than chkdsk can handle.
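
(For reference, this is the repair attempt in question; the drive letter D: below is just a placeholder for whichever letter your RAW volume has:)

    rem Attempt an in-place NTFS repair of the volume mounted at D:.
    rem On NTFS volumes larger than 60 TB this is expected to fail,
    rem because chkdsk cannot handle volumes that large.
    chkdsk D: /f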

Regardless, you need to open an actual ticket. I don't want to suggest anything that might further corrupt the drive. Head to the support section of the website and submit a ticket, then run the troubleshooter and attach your ticket number. You can find the troubleshooter here: http://wiki.covecube.com/StableBit_Troubleshooter

For what it's worth, my drives had a service interruption last night and came back as normal, so hopefully it's just a software quirk. Remember, too, that data recovery software can be used on CloudDrive volumes if it comes to that.



For precisely this reason, you should avoid using NTFS volumes larger than 60 TB: they cannot be repaired if they become corrupted. Volume Shadow Copy, which chkdsk relies on to make repairs, does not support NTFS volumes larger than 60 TB, so that might be why you're seeing the chkdsk error. You can always use multiple volumes partitioned from the same CloudDrive drive, though. I'm not sure why the 16 TB volume is also giving you problems, however.

I'm not sure what you mean by creating new volumes, exactly, but as long as you don't delete the old volumes to make a new one, the data will still be there if it can be salvaged. I'd wait to see what Christopher and Alex want you to do, but with three copies of the data, software like Recuva can likely retrieve it even if the file system is genuinely corrupted. Wait until they respond before trying it, though.

Hope you can find a solution soon. 


10 hours ago, srcrist said:

For precisely this reason, you should avoid using NTFS volumes larger than 60 TB: they cannot be repaired if they become corrupted. Volume Shadow Copy, which chkdsk relies on to make repairs, does not support NTFS volumes larger than 60 TB, so that might be why you're seeing the chkdsk error. You can always use multiple volumes partitioned from the same CloudDrive drive, though. I'm not sure why the 16 TB volume is also giving you problems, however.

I'm not sure what you mean by creating new volumes, exactly, but as long as you don't delete the old volumes to make a new one, the data will still be there if it can be salvaged. I'd wait to see what Christopher and Alex want you to do, but with three copies of the data, software like Recuva can likely retrieve it even if the file system is genuinely corrupted. Wait until they respond before trying it, though.

Hope you can find a solution soon. 

I am actually waiting for a reply. I'm aware that another user on Reddit has exactly the same problem. I doubt Google lacks redundancy, and I doubt they would say nothing if there had been data loss. I only started using CloudDrive around 5 weeks ago, so the lost data is only 10 to 12 TB. I liked the product but had trust issues, so I have only uploaded replaceable data so far, and I am glad I did. By making new volumes I mean that I created new drives to upload other data that I need to have online. 10 TB to recover over Google Drive? I assume that takes a long time (I have 1 Gbit fibre, though).

I just want to make sure this does not happen again when Google has another outage... And as you say: all 3 drives had the same problem with chkdsk, not only the bigger volumes. Maybe I'm thinking about this wrong, but to put it simply: if I make 2 volumes on each of 2 Google Drive accounts and I want "RAID 0 / 2x duplication", how do I make sure that DrivePool doesn't put the same file twice on the same Google Drive? I mean, it sees 4 hard disks and thinks "only 2 copies needed, let's put them on A and B", but what if A and B are from the same Google Drive account? That is basically why I made big partitions. Also, I could shrink them, no?

Edit: I got a reply from Christopher for my ticket: basically, use chkdsk, and if that doesn't work, use Recuva.



I think it's important to understand that Google may not even be aware of the data loss. If, for example, the issue was with a database server, or simply with their API, Google's servers may have been reporting to CloudDrive that its transfers were successful when they had, in fact, failed. In that case CloudDrive would operate as if the data had been successfully uploaded and remove the local copy from the cache, but Google would not actually have a copy of the data on their servers.

Because of your post and a few others here on the forums, I took my volumes offline last night and ran some tests, and I did have some corruption. But it was only on the volume that was actively being written during the disruption, and, thankfully, not to the extent that the file system data itself was corrupt; just some orphaned data that chkdsk had to repair. Since it was only on the volume being modified, this suggests that the problem resulted from the upload API and CloudDrive's response to it. The integrity of the five other volumes was fine, and I don't see any missing data.

Ultimately, if you didn't have any data on the CloudDrive volumes that you didn't already have in local storage, there isn't any reason to do data recovery. It would be a lengthy process, and all it would accomplish is downloading and creating a new local copy of the files from the corrupted file system, which is what you already have. And as long as you created new drives, the existing data on your old CloudDrive volumes will not be touched, in case it can simply be fixed.

I suspect that Christopher and Alex are going to come back and say that there isn't any way CloudDrive can be modified to prevent this issue from ever happening, because CloudDrive is simply dependent on the provider's service and API, and that's where the issue arose in this instance. But it's a good idea, in any case, to work through support with them so that they can at least look at the troubleshooting data and perhaps give us an idea of what happened.

To prevent this from happening again, though, there are a few options:

Firstly, as I said above, do not use NTFS volumes larger than 60 TB. Some degree of file system corruption is as inevitable with CloudDrive as with any other NTFS volume, and depriving yourself of chkdsk to repair that corruption is basically a death pact.

Secondly, you can make sure that only one volume is ever being modified at a time, so that only one volume runs the risk of developing corruption if Google's service begins processing erroneous data. Honestly, I'm not sure that mirroring multiple copies of the data across several Google accounts is genuinely productive at all, since, as you said, Google has redundancy themselves and has no track record of losing data that is successfully uploaded to their service. It does, however, multiply the potential points of failure during the transfer process.

And, lastly, at the cost of a tremendous amount of bandwidth, you can enable upload verification in the drive integrity settings for your drive, which will download every chunk uploaded and ensure that the data is available on the provider before removing it from the local cache. This means that every byte uploaded will also be downloaded, but it also means that CloudDrive will never remove local data unless it can absolutely verify that the data can be retrieved from the provider.

All of this being said, I've been using CloudDrive for roughly three years now, and I try to visit the forums regularly to help others, and this is the first service outage like this that I can remember, and the first time I've seen data corruption caused by a service disruption. So, *knock on wood*, this will hopefully remain a problem not worth too much anxiety on a regular basis.



Your last answer is definitely a good starting point.

I understand: if Google's API said "Hey mate, I got that!", CloudDrive says "Alright mate, I'll delete that on my end!", and then it wasn't actually there. The data in the cloud was only in the cloud, three times though. The reason I have 3 different Google Drives is simple: one belongs to my company, one is my free student account, and one is my own (all legit). But who knows, maybe I leave the company one day and they delete it? Or my university decides (for whatever reason) to stop working with Google?

It was only 12 TB, but if I had used it for longer than a few weeks, it could have been 100, 300 or even more. I see your point about 60 TB volumes, but this comes back to what I explained above: what if the duplicate is only on my university account, which might be shut down suddenly? If I don't have an option to say "keep the file here (uni account) and here (work account)", I don't want to use multiple volumes and risk that the copy ends up on the same account (the university one, which might be terminated).

As for my failed drives: I have deleted them manually. I don't want to block Google's space, and I got pretty pissed. Since it's replaceable data, I simply said "fuck it, I don't want to waste energy on that bullshit".

Upload verification seems to be the solution. I don't care about bandwidth; I have full-duplex 1 Gbit at home, and the download limit on Google Drive is high enough anyway (it's the upload limit that blocks me). I will activate it, so I can be sure something like this doesn't happen again.

Thank you very much for the time and knowledge you are sharing here with me, I really appreciate it! By the way, I only found out about 2 outages: the one where I suffered a loss, and one other. Let's hope it stays at 2 :)


3 hours ago, Gandalf15 said:

The reason I have 3 different Google Drives is simple: one belongs to my company, one is my free student account, and one is my own (all legit). But who knows, maybe I leave the company one day and they delete it? Or my university decides (for whatever reason) to stop working with Google?

It's obviously entirely your decision, but I think it's safe to say that Google won't simply up and delete your legitimately paid-for data without notice. So I personally wouldn't even worry about the other two. Have you considered, instead, keeping a local copy mirrored to your CloudDrive storage? Again, just as a point of conversation. I'm only sharing my thoughts, not telling you what to do.

3 hours ago, Gandalf15 said:

I see your point about 60 TB volumes, but this comes back to what I explained above: what if the duplicate is only on my university account, which might be shut down suddenly? If I don't have an option to say "keep the file here (uni account) and here (work account)", I don't want to use multiple volumes and risk that the copy ends up on the same account (the university one, which might be terminated).

If you *don't* drop it to less than 60 TB, you might as well not bother. Any NTFS volume *will* inevitably need to be repaired; a simple power loss or system freeze could interrupt a file system modification. CloudDrive's file system is real, and it's subject to any source of corruption that can affect your physical drives. Anything over 60 TB could be ReFS, but that comes with its own set of potential problems. Until VSS is rewritten to support NTFS volumes larger than 60 TB, or another tool emerges to make repairs on volumes this large, there is no point in using them. Any corruption at all runs the risk of cascading to failure, and continuing to write to an unhealthy drive is data suicide. If you simply want to mirror multiple copies at a file level, without worrying about the health of a drive structure, rclone might actually be a better tool for you.
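
(As a rough sketch of that file-level approach, assuming two rclone remotes named gdrive-uni and gdrive-work that you have already configured with "rclone config"; the remote names and paths here are made up:)

    rem Mirror the same local folder, file by file, to two independent accounts.
    rem There is no drive structure involved, so there is no file system to corrupt.
    rclone sync D:\data gdrive-uni:mirror
    rclone sync D:\data gdrive-work:mirror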

Note, to be clear, that multiple *volumes* is not the same thing as multiple *drives*. You can split a single 1 PB CloudDrive drive into many volumes, and you can always use DrivePool to recombine those volumes into one pool.

3 hours ago, Gandalf15 said:

Upload verification seems to be the solution. I don't care about bandwidth; I have full-duplex 1 Gbit at home, and the download limit on Google Drive is high enough anyway (it's the upload limit that blocks me). I will activate it, so I can be sure something like this doesn't happen again.

Absolutely. Just note that this will likely affect drive throughput as well. If you have, say, 10 download threads, some percentage of those will be taken up by the verification process whenever you are uploading data. Just something to keep in mind.

3 hours ago, Gandalf15 said:

Thank you very much for the time and knowledge you are sharing here with me, I really appreciate it! By the way, I only found out about 2 outages: the one where I suffered a loss, and one other. Let's hope it stays at 2

Any time. And I'm really sorry that you lost a drive. Early in my CloudDrive experimentation, Windows itself corrupted a ReFS volume during a Windows Update; that feeling sucks. I also hope the damage is minimal. Good luck!



Maybe I did mix up drives and volumes. But it's DrivePool that takes care of duplication, and it uses volumes as pooled "drives", no? So if I add 2 volumes that sit on one drive to DrivePool and set duplication to 2x, it will duplicate onto the same drive, which is nonsense in my case (and obviously also with non-cloud drives). Or am I wrong with that assumption?


8 hours ago, Gandalf15 said:

Maybe I did mix up drives and volumes. But it's DrivePool that takes care of duplication, and it uses volumes as pooled "drives", no? So if I add 2 volumes that sit on one drive to DrivePool and set duplication to 2x, it will duplicate onto the same drive, which is nonsense in my case (and obviously also with non-cloud drives). Or am I wrong with that assumption?

Your assumption is right, but your presumed pool structure is simply wrong. You would, instead, use a hierarchical pool to ensure that duplication is handled correctly. 


9 hours ago, srcrist said:

Your assumption is right, but your presumed pool structure is simply wrong. You would, instead, use a hierarchical pool to ensure that duplication is handled correctly. 

How would one set up a hierarchical pool structure? I'm pretty new to this (I've been using the software for a bit over a month).


6 hours ago, Gandalf15 said:

How would one set up a hierarchical pool structure? I'm pretty new to this (I've been using the software for a bit over a month).

You pool each set of volumes together to create a pooled volume, and then you add the pooled volumes to a pool themselves and enable duplication on that pool. That will mirror each set of volumes with each other. 
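
(As a sketch, with made-up pool names and drive letters: each bottom-level pool contains only the volumes from one account, and duplication is enabled only on the top-level pool:)

    Pool C  (duplication 2x; this is the pool you actually use)
    |-- Pool A  (no duplication; volumes from account 1)
    |   |-- Volume D: (50 TB)
    |   `-- Volume E: (50 TB)
    `-- Pool B  (no duplication; volumes from account 2)
        |-- Volume F: (50 TB)
        `-- Volume G: (50 TB)

Because duplication operates only at the top level, the two copies of each file always land in different bottom-level pools, and therefore on different accounts.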


30 minutes ago, srcrist said:

You pool each set of volumes together to create a pooled volume, and then you add the pooled volumes to a pool themselves and enable duplication on that pool. That will mirror each set of volumes with each other. 

I didn't think of that, actually. Smart! But isn't there error potential in that "pool-ception"? That could totally work, now that I think about it.



Thank you very much. So it would be possible to shrink my new 256 TB volumes to 50 TB. Then, when my two 50 TB mirrors are full, I create two more volumes on each account, add the volumes on the same account to a pool, and add those pools into a pool again, and I can continue. Is that how it works?



That's the gist of it, yes. What you want to do is a combination of the processes I helped two other users with in other threads: create a multi-volume CloudDrive pool as described in the second thread, and set up a nested pool for duplication as described in the first. You can find those threads here:
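
(And for the shrinking step itself, a minimal diskpart sketch; the volume number and sizes below are hypothetical, so check the output of "list volume" on your own system before running anything. This assumes a 256 TB volume being reduced to roughly 50 TB:)

    rem Shrink the oversized volume. Values are in MB; 216006656 MB is about
    rem 206 TB, which reduces a 256 TB volume to roughly 50 TB.
    list volume
    select volume 3
    shrink desired=216006656
    rem Later, when you need the next 50 TB volume, create it in the freed space:
    create partition primary size=52428800
    format fs=ntfs quick
    assign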


On 3/16/2019 at 2:43 AM, srcrist said:

You pool each set of volumes together to create a pooled volume, and then you add the pooled volumes to a pool themselves and enable duplication on that pool. That will mirror each set of volumes with each other. 

Would doing this ensure that all data on the drives is backed up?


10 minutes ago, pedges said:

Would doing this ensure that all data on the drives is backed up?

Yes, with the caveat that it didn't prevent the Google corruption that happened last month, even for people using multiple accounts. The problem appears to be that Google rolled some files back to an older version. That's fine for the actual file data itself, since that doesn't really change, but the chunks containing the file system data DO change, and often. So everybody's file systems were corrupted. If you mirror the pool to another pool on another account and Google has a similar issue, both pools will be modified essentially simultaneously, and both would be corrupted by another rollback. It would actually be better to mirror to an entirely different provider, or to mirror locally.


2 minutes ago, srcrist said:

Yes, with the caveat that it didn't prevent the Google corruption that happened last month, even for people using multiple accounts. The problem appears to be that Google rolled some files back to an older version. That's fine for the actual file data itself, since that doesn't really change, but the chunks containing the file system data DO change, and often. So everybody's file systems were corrupted. If you mirror the pool to another pool on another account and Google has a similar issue, both pools will be modified essentially simultaneously, and both would be corrupted by another rollback. It would actually be better to mirror to an entirely different provider, or to mirror locally.

Got it.

Just a thought: I have the ability to restore the data in my Google Drive account to a certain point in time. Could I use that functionality to restore the data on my drive to a point where it can be used?


2 minutes ago, pedges said:

Got it.

Just a thought: I have the ability to restore the data in my Google Drive account to a certain point in time. Could I use that functionality to restore the data on my drive to a point where it can be used?

Now that's an interesting possibility. Maybe? Sure. Maybe. You'd probably want to detach the CloudDrive first. It might be worth a shot. The Google outage was March 13th, so a date before that would be your best bet. If this works, it would definitely help confirm that some sort of partial rollback is the cause of this issue.


1 minute ago, srcrist said:

Now that's an interesting possibility. Maybe? Sure. Maybe. You'd probably want to detach the CloudDrive first. It might be worth a shot. The Google outage was March 13th, so a date before that would be your best bet.

I have two CloudDrive drives saved to that Google account, and one is working. Is there any way to differentiate between the two drives and restore only the data for the drive that is experiencing issues?


3 minutes ago, pedges said:

Got it.

Just a thought: I have the ability to restore the data in my Google Drive account to a certain point in time. Could I use that functionality to restore the data on my drive to a point where it can be used?

You should consider, though, that if it DOESN'T work, it may render the entire drive unrecoverable, even with PhotoRec or Recuva. Once that data is missing from Google's servers, there's no getting it back.


4 minutes ago, pedges said:

I have two CloudDrive drives saved to that Google account, and one is working. Is there any way to differentiate between the two drives and restore only the data for the drive that is experiencing issues?

You'd have to figure out which drive is contained in which folder on your Google Drive. If you open the technical details in CloudDrive and then the drive details window, the "Uid" under the "Overview" heading will correspond to the folder name on your Google Drive. You'll have to restore EVERYTHING under that folder.
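
(Roughly, and with made-up Uids and placeholder folder names, since the one thing you can count on is that the folder name corresponds to the Uid, the Google Drive side looks something like this:)

    My Drive
    `-- <CloudDrive data folder>
        |-- <folder named with Uid 4f9c...>   (broken drive: restore everything here)
        `-- <folder named with Uid 77d0...>   (working drive: leave untouched)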
