
Files starting to go corrupt, Cloud Drive throws no errors


steffenmand

Question

Recently, pretty much all of the files on some of my drives have started to become corrupt.

 

At first I thought, OK, it's a beta, errors may happen. But now this is starting to happen on multiple drives.

 

Stablebit Cloud Drive seems to prefetch fine and throws no errors, but what arrives is just corrupt data.

 

The data in question has all been lying at rest, so we have not uploaded new, corrupt data on top of it.

 

This is the second drive this has happened to.

 

The funny part is, if I try to copy one of the corrupt files from the cloud drive to my own disk, it shows copy speeds of 500+ MB/s and finishes in a few seconds, as if it didn't even have to download anything first. In the technical overview I can see the prefetch showing "Transfer Completed Successfully" with 0 kbps and 0% completion (I've attached a snippet of one so you can see).

 

[Attached screenshot: lSh7nMx.jpg]

 

Something is seriously wrong here; it seems that it can somehow skip downloading the actual chunk and instead just fills in empty data.

 

I have verified that the chunks do exist on the Google Drive, and I am able to download them myself.

 

I've been talking to other users who have experienced the same thing!


Recommended Posts


Another folder full of files got corrupted. The drive has not been disconnected since the files were added.

And another folder corrupted. Now I'm running chkdsk /f. Let's see if that changes anything...

The annoying part is that you can't even remove the corrupted folder.

 

This is scary.

 

I just started using CloudDrive about a week ago. I'm still on the trial and will probably purchase.

 

I have transferred about 1.5 TB of data and discovered one folder that is completely corrupted and cannot be opened. I'm using Google as my provider and was transferring the data over from Amazon Cloud. I had to recopy the entire folder.

 

 

1. What's the best method or software to check if more of my existing data in Stablebit Cloud Drive is corrupted? 

 

2. Is it safer to create 4 x 1 TB drives for usage as opposed to having just one large 4TB drive?

 

3. Is data corruption less likely to occur if I use an unencrypted cloud drive instead of a fully encrypted one?

 

4. Is there any situation where the entire Cloud Drive can be corrupted and data completely irrecoverable?

 

 

I do not have the resources to keep separate copies of my data in different clouds or on physical HDDs.

 

Thanks !

 

1: Probably by computing an MD5 hash of each file before upload and then checking afterwards whether the copy on the cloud drive still matches the original (a quick way to do this is sketched below).

 

2: Yes, though most people here are probably using 100+ TB drives anyway.

 

3: I don't believe there is a difference.

 

4: Someone on the forum mentioned a few months back that their whole drive was corrupted.
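
For question 1, here is a minimal sketch of that hash-check idea in Python, assuming you still have the original (pre-upload) copies locally to compare against. The paths are placeholders; it just walks both trees, hashes every file, and reports mismatches.

    import hashlib
    import os
    import sys

    def md5_of(path, block=1024 * 1024):
        # Stream the file in 1 MB blocks so large files don't have to fit in RAM.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(block), b""):
                h.update(chunk)
        return h.hexdigest()

    def hash_tree(root):
        # Map each file's path (relative to root) to its MD5 hash.
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                hashes[os.path.relpath(full, root)] = md5_of(full)
        return hashes

    if __name__ == "__main__":
        # Usage: python check_hashes.py <local_original> <clouddrive_copy>
        local, cloud = hash_tree(sys.argv[1]), hash_tree(sys.argv[2])
        for rel, digest in local.items():
            if rel not in cloud:
                print("MISSING on cloud drive:", rel)
            elif cloud[rel] != digest:
                print("MISMATCH (possible corruption):", rel)

Note that hashing the copy on the cloud drive forces every byte to actually be read back from the provider, so it doubles as a test of the read path, but it will use a lot of bandwidth on a large drive.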


How do I resolve this? Thanks. I'll have to stop using CloudDrive if errors continue to mount.

 

 

https://i.imgur.com/gao1cis.jpg

 

 

What size drive are you using?

 

 

Specifically, it looks like Windows (NTFS, specifically) may have problems with drives larger than about 55 TB (the actual limit is 64 TB, but there seems to be some margin around that).

 

We would recommend smaller sized disks, or at least smaller volumes on those disks, to ensure you don't exceed this limit. Or ... use ReFS, which doesn't have this limitation.

 

 

Specifically, there seems to be a design flaw with the volume snapshot system for drives larger than 64TBs.  CHKDSK takes a snapshot of the disk, and then scans *that*.  Since ReFS doesn't use chkdsk, this isn't a consideration. 
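
If you want a quick way to see whether any of your current volumes are over (or getting close to) that limit, something like this works. It is just a generic size check using the Python standard library, nothing CloudDrive-specific; the 64 TB threshold is the one discussed above.

    import shutil
    import string

    TB = 1024 ** 4      # one terabyte (binary) in bytes
    LIMIT_TB = 64       # the NTFS/chkdsk trouble threshold discussed above

    # Walk the Windows drive letters and report any volume near or over the limit.
    for letter in string.ascii_uppercase:
        root = letter + ":\\"
        try:
            total, used, free = shutil.disk_usage(root)
        except OSError:
            continue  # drive letter not in use
        size_tb = total / TB
        status = "over 64 TB - consider splitting" if size_tb > LIMIT_TB else "ok"
        print(f"{root}  {size_tb:8.1f} TB  {status}")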

 

 

As for preventing corruption: if integrity checking is enabled when you create a disk (it is by default), the software will always validate the data it gets back from the cloud.
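
Conceptually, that kind of validation amounts to storing a checksum with each chunk at upload time and re-checking it on every read. The sketch below is only an illustration of the general idea, not CloudDrive's actual code or on-disk format; the InMemoryProvider and chunk names are made up for the example.

    import hashlib

    class ChecksumMismatch(Exception):
        pass

    class InMemoryProvider:
        # Stand-in for a cloud provider: just a dict of named blobs (illustrative only).
        def __init__(self):
            self.blobs = {}
        def put(self, name, data):
            self.blobs[name] = data
        def get(self, name):
            return self.blobs[name]

    def store_chunk(provider, chunk_id, data):
        # On upload, keep a checksum alongside the chunk.
        provider.put(chunk_id, data)
        provider.put(chunk_id + ".sha256", hashlib.sha256(data).hexdigest())

    def read_chunk(provider, chunk_id):
        # On download, recompute the checksum and refuse to hand back bad data.
        data = provider.get(chunk_id)
        expected = provider.get(chunk_id + ".sha256")
        if hashlib.sha256(data).hexdigest() != expected:
            raise ChecksumMismatch(f"chunk {chunk_id}: checksum mismatch")
        return data

    if __name__ == "__main__":
        p = InMemoryProvider()
        store_chunk(p, "chunk-0001", b"some chunk data")
        p.blobs["chunk-0001"] = b"silently corrupted"   # simulate corruption at rest
        try:
            read_chunk(p, "chunk-0001")
        except ChecksumMismatch as e:
            print("caught:", e)

The point of the sketch is that a corrupted chunk raises an error rather than quietly returning bad data.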

 

 

 

And yes, it is definitely safer to use more, smaller drives, and to pool them together.

 

 

 

As for encrypted vs. unencrypted, no, it makes no difference.

 

 

 

And the data should always be recoverable. Worst case, most data recovery tools will work on the drives.


I currently have a single 100 TB volume on the drive that I would like to shrink to 50 TB, then create a second 50 TB volume on the drive and pool them together to avoid any problems in the future.

 

When I try to resize the disk using CloudDrive, it tells me that I need to check the disk for errors before continuing. When I check the disk for errors, it says that errors are found and need to be corrected; however, chkdsk ultimately freezes at one point or another (for days and days at a time).

 

When I try to shrink the disk using disk manager, I get the same message.

 

Is there anything I can do to shrink this volume, or should I just start fresh with 2 new cloud drives?



 

 

 

In this case, then, unfortunately, yes, creating new drives may be simplest. (Alternatively, you can delete the volumes and create new ones, but it amounts to the same thing, i.e. it's destructive.)



 

Thanks - I have a good internet connection, so I'm not too worried about re-uploading; the biggest pain now is the API limit from Google Drive.


Well, glad to hear that.

And yeah, API/bandwidth limits are the pits.

 

 

And if you're using Google Drive specifically, the upload limit from Google is pretty low, unfortunately. A sustained 75 Mbps over 24 hours works out to roughly 800 GB. So... if you want to throttle the connection, that's a good ballpark to start from.
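
For reference, converting a throttle setting in megabits per second to data per day is straightforward; here is a quick sketch. The ~750 GB/day figure in the comment is the per-user upload cap commonly reported for Google Drive, not something specific to CloudDrive.

    # Convert an upload throttle in megabits per second to gigabytes per day.
    def gb_per_day(mbps):
        seconds_per_day = 24 * 60 * 60            # 86,400 seconds
        return mbps * seconds_per_day / 8 / 1000  # megabits -> megabytes -> GB (decimal)

    for throttle in (50, 70, 75, 100):
        print(f"{throttle} Mbps -> {gb_per_day(throttle):6.0f} GB/day")

    # 75 Mbps works out to about 810 GB/day, a bit above the ~750 GB/day upload cap
    # commonly reported for Google Drive; roughly 69 Mbps lands just under it.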


Sorry to revive the thread, but I've come across something related to what I want to do, and it may add up to useful information for the future.

In my case, is there a way to erase a folder that is corrupt, has read errors, and can't be deleted the normal way? How should I proceed to delete it permanently? My Google Drive is 250 TB, fully encrypted (NTFS).


Thanks Christopher, unfortunately it didn't work out; it threw this:

from admin cmd -->   sdelete64 -s I:UNT\oue

Error deleting I:UNT\oue:
The file or directory is damaged or unreadable.

any other suggestions?

Thanks for your time.

PS: by contrast, if I create a normal directory with files inside it on the Google Drive, it is able to delete that.

 
