
CloudDrive File System Damaged


Digmarx

Question

I've got a similar issue to a few others: my 16TB Google Drive CloudDrive is showing up in Scanner as "File System Damaged". I can still access files on the drive, so I'm moving everything over to an identical CloudDrive. Chkdsk reports the filesystem as RAW and fails out, same thing with Scanner (which I assume just calls chkdsk). Is there anything else I can do?


24 answers to this question


Hello, I have a problem similar to your damaged filesystem, and I've seen others with it too. It makes me wonder whether there is some common cause behind these problems; perhaps it would be worth investigating. I have now launched chkdsk /f /r, but my volume was still NTFS and not seen as RAW. I'm at 50% now, so it should finish in about a week. Hopefully for the better.

  • 1
9 hours ago, PizzaGuy said:

Can you expand on step 1, please?  I cannot find a menu in CloudDrive that says "Offline" in reference to the drive.  "In Disk Management" could mean two things to me: the diskmgmt.msc console from Run, or the "Manage Drive" menu in the CloudDrive UI, but neither has the "Take offline" verbiage.  I've attached screenshots, thanks!

disk management.png

manage drive.png

It's in diskmgmt.msc. Right-click in the red box area for your drive.

 

 image.png

  • 0

Actually, this would have been better to do:

 


 

  • Take the drive offline in disk management
  • Turn off all pinning in the CloudDrive UI
  • Clear the local cache, and wait until it's down to 0 bytes (literally empty)
  • Bring the cloud drive back online from disk management

You may need to run step #3 multiple times to get it down to 0 bytes. 
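For anyone who prefers the command line, steps 1 and 4 of the procedure above can be scripted; steps 2 and 3 still happen in the CloudDrive UI. A minimal sketch, assuming a Windows machine with diskpart and an elevated prompt; the disk number is a placeholder you would confirm against diskpart's "list disk" output first:

```python
# Sketch: scripting steps 1 and 4 (offline/online) by feeding diskpart a
# command script via stdin. The disk number is a placeholder -- confirm it
# against diskpart's "list disk" output before running anything.
import subprocess

def diskpart_script(disk_number: int, offline: bool) -> str:
    """Build the diskpart command script to take a disk offline or online."""
    state = "offline" if offline else "online"
    return f"select disk {disk_number}\n{state} disk\n"

def set_disk_state(disk_number: int, offline: bool) -> None:
    """Run diskpart with the generated script (Windows only, needs admin)."""
    subprocess.run(
        ["diskpart"],
        input=diskpart_script(disk_number, offline),
        text=True,
        check=True,
    )
```

Calling set_disk_state(2, True) before clearing the cache and set_disk_state(2, False) afterwards should be equivalent to the right-click Offline/Online actions in diskmgmt.msc.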

 

 

  • 0

Thanks for responding. I should have read more before posting. Has the cause been identified, and is there a way to avoid this happening in the future? I don't keep anything in my Google Drive CloudDrive that I can't lose, just backups and easily replaceable files, but it would be frustrating to have this happen again.

  • 0

Not explicitly.  The recent Google outage caused a rash of this, but we're not exactly sure why. If the data had been corrupted/modified, the checksum would catch that.  If a file simply wasn't available, it would error out and retry.  

So, it's a very weird issue, that we can't really reproduce now, so it makes it VERY difficult to identify what went wrong. 

 

  • 0
3 hours ago, Christopher (Drashna) said:

Not explicitly.  The recent Google outage caused a rash of this, but we're not exactly sure why. If the data had been corrupted/modified, the checksum would catch that.  If a file simply wasn't available, it would error out and retry.  

So, it's a very weird issue, that we can't really reproduce now, so it makes it VERY difficult to identify what went wrong. 

 

Christopher, I'm sure you've spoken with Alex about this issue. I'm wondering if there's been any discussion of infrastructural changes that might improve the reliability of the file system data. Could CloudDrive, for example, store a periodic local mirror of the file system data that could be restored in case of corruption? I don't know enough about NTFS and how the journal and such are stored on the drive to know whether that's feasible. It just seems to me that almost everyone (who had an issue) saw file system corruption, but not corruption of the actual data on the drive. That makes sense, because the file system data is frequently modified and is, as such, more vulnerable to inconsistencies on Google's part. So if that data could be given some sort of added redundancy...it might help to prevent future issues of this sort. 

Do you have any thoughts on that? Or maybe Alex could chime in? My basic thought is that I'd rather have corruption of file data for individual files, which can be replaced if necessary, than lose an entire multi-terabyte volume because the file system itself (which comprises a very small minority of the actual data on the drive) gets borked. I'd love some features to take extra care with that data. 

  • 0
On 4/8/2019 at 5:35 PM, Christopher (Drashna) said:

Actually, this would have been better to do:

 

 

Can you expand on step 1, please?  I cannot find a menu in CloudDrive that says "Offline" in reference to the drive.  "In Disk Management" could mean two things to me: the diskmgmt.msc console from Run, or the "Manage Drive" menu in the CloudDrive UI, but neither has the "Take offline" verbiage.  I've attached screenshots, thanks!

disk management.png

manage drive.png

  • 0
On 4/12/2019 at 8:55 PM, PizzaGuy said:

Can you expand on step 1 please?  I cannot find a menu in CloudDrive that states "Offline" in reference to the drive.  "In Disk Management" to me could mean two things, the diskmgmt.msc command from run, or the "Manage Drive" menu from the CloudDrive UI but neither has the "Take offline" verbage.  I've attached screenshots, thanks!

Check the bottom half of the window in Disk Management, and right-click on the "Disk #" part.  That should give you the option to set the disk "offline". 

  • 0
On 4/12/2019 at 3:22 PM, srcrist said:

Christopher, I'm sure you've spoken with Alex about this issue. I'm just wondering if there's been any discussion of infrastructural changes that might be able to improve the reliability of the file system data? I was wondering if, for example, CloudDrive could store a periodic local mirror of the file system data which could be restored in case of corruption

We've definitely talked about it.  

 

And to be honest, I'm not sure what we can do.  In theory, we already store the file system data if you have pinning enabled.  Though there are circumstances that can cause it to purge that info.

The other issue is that, by default, every block is checksummed, and that checksum is verified on download.  So if corrupted data were downloaded, you would get errors and a warning about it. 

However, that didn't happen here.  In that case, more than likely Google sent old/out-of-date data.  Which ... I'm not sure how we can handle in a way that isn't extremely complex. 

But again, this is something that is on our mind. 

  • 0
On 4/9/2019 at 12:35 AM, Christopher (Drashna) said:

Actually, this would have been better to do:

 

 

Chris, 

How long should the "take the drive offline" step take? I clicked it a good half hour ago, and the drive is currently showing as read-only while CloudDrive is "prefetching" non-stop.

Ideas?

  • 0
On 4/9/2019 at 1:35 AM, Christopher (Drashna) said:

 

  • Take the drive offline in disk management
  • Turn off all pinning in the CloudDrive UI
  • Clear the local cache, and wait until it's down to 0 bytes (literally empty)
  • Bring the cloud drive back online from disk management

You may need to run step #3 multiple times to get it down to 0 bytes. 

I have the same problem, but the solution above doesn't work.

even chkdsk doesn't work

PS C:\Users\Administrator> chkdsk f: /f /r
Der Typ des Dateisystems ist NTFS. (The type of the file system is NTFS.)
Version und Status des Volumes konnten nicht festgestellt werden. CHKDSK wurde abgebrochen. (The version and status of the volume could not be determined. CHKDSK was aborted.)

 

  • 0

So I had just managed to reupload the 10 or so TB of data to my Google Drive CloudDrive, and once again I'm getting the "file system damaged" warning from StableBit. 

Files are accessible, but chkdsk errors out and the unmount, unpin, clear cache, remount process has no effect.

  • 0
15 hours ago, Digmarx said:

So I had just managed to reupload the 10 or so TB of data to my google cloud drive, and once again I'm getting the file system damaged warning from stablebit. 

Files are accessible, but chkdsk errors out and the unmount, unpin, clear cache, remount process has no effect.

I can't even fix my drive with chkdsk, because I get an error no one seems to know how to solve. 12TB of data...

 

  • 0
On 5/23/2019 at 5:06 AM, srcrist said:

To be clear: did you re-upload data to the same drive? 

No. I created a new cloud drive. Then I created a new drivepool using only the new cloud drive (with a view to possibly adding it to a pool hierarchy later). The new drive did not show any errors in Scanner until after I had reuploaded around 10TB (so weeks after creating the new cloud drive).

After the "file system damaged" notification I removed the cloud drive from the drivepool just to eliminate a variable.

I'm not sure what specifically is damaged in the cloud drive file system; Scanner obviously thinks something is wrong, but chkdsk run from the Tools tab in drive properties returned no errors. This is obviously anecdotal, but I have not encountered any corrupted files yet.

  • 0

Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. This thread has become confusing because people are conflating that warning with the file system corruption that has been leading to data loss and RAW mounts, which is something different entirely. File system damage of the sort Scanner reports is a normal part of life with any NTFS file system and can be caused by something as simple as a power loss or a software crash. A drive showing up RAW indicates a much larger problem. Both are technically file system damage. 

As far as the drive showing up damaged in Scanner but not in chkdsk: I've only personally seen that get wonky with volumes that chkdsk does not properly support. How large is the volume you're trying to use? Is it 60TB or larger? chkdsk does not support volumes of that size. I've seen Scanner throw warnings simply because chkdsk cannot properly handle the volume size, and that would explain your errors when running chkdsk as well. 
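If you want to check where a volume stands relative to that limit, a quick sketch (the drive-letter path below is an example placeholder, not from this thread):

```python
# Sketch: report a volume's total size in TiB, to compare against the
# ~60TB ceiling where chkdsk stops working reliably.
import shutil

def volume_size_tib(path: str) -> float:
    """Total size of the volume containing `path`, in TiB."""
    return shutil.disk_usage(path).total / 1024**4

# On Windows you would pass a drive-letter path, e.g. volume_size_tib("F:\\")
```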

  • 0

The volume is 16TB, encrypted, Google Drive (GSuite). The cache disk is a 240GB SSD, FWIW.

So I've had some progress with chkdsk /offlinescanandfix. The only errors it found were free space marked as allocated in both the MFT and the volume bitmap, and then it errored out with "An unspecified error occurred (6e74667363686b2e 1475)".

A cursory search for the error code doesn't turn up anything of substance, but I'll look a bit harder after work.
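For what it's worth, the 16 hex digits in that error decode to plain ASCII naming the internal component that failed; checking is a one-liner. (What the trailing 1475 means is an assumption on my part; Microsoft doesn't document these codes.)

```python
# Decode the hex portion of chkdsk's "unspecified error" code as ASCII.
# The trailing "1475" is presumably an internal status or line number
# (an assumption -- these codes aren't documented).
code = "6e74667363686b2e"
print(bytes.fromhex(code).decode("ascii"))  # -> ntfschk.
```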

  • 0
3 hours ago, Digmarx said:

The volume is 16TB, encrypted, Google Drive (GSuite). The cache disk is a 240GB SSD, FWIW.

So I've had some progres with chkdsk /offlinescanandfix. The only errors it found were free space marked as allocated in both the MFT and the volume bitmap, and then I guess it errored out with "An unspecified error occurred (6e74667363686b2e 1475)"

A cursory search for the error code doesn't turn up anything of substance, but I'll look a bit harder after work.

In that case, I'm not sure. There might be a handle issue causing the problem, but chkdsk generally asks whether you want to close all open handles when it encounters one. You could try /X and see if that helps. I would definitely open an actual ticket with Covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on, especially considering some of the recent problems. 

  • 0

I did, the first time the problem occurred. From my understanding there is no solution from Stablebit's end. My options as I see them are a) abandon my use of Cloud Drive (since I am limited to using it with Google Drive for all intents and purposes, and the combination of CD and GD is proving unreliable) or b) continue using it. The prospect of (once again) downloading and reuploading 10TB at 750GB/day is also prohibitive at this time. So I'm basically at a point where it's either b1) find a way to fix the file system, or b2) continue using it knowing that it's probably going to fall over at some point. If this were my only set of backups I'd be terrified.
