Covecube Inc.
steffenmand

Files starting to go corrupt, Cloud Drive throws no errors

Question

Recently, pretty much all of the files on some of my drives have started to become corrupt.

 

At first I thought, OK, beta errors may happen, but now this is starting to happen on multiple drives.

 

StableBit CloudDrive seems to prefetch fine and throws no errors, but what arrives is just corrupt data.

 

The data in question has all been at rest, so no new corrupt data has been uploaded on top of it.

 

This is the second drive this has happened to.

 

The funny part is, if I try to copy one of the corrupt files from the cloud drive to my own disk, it shows copy speeds of 500+ MB/s and finishes in a few seconds, as if it didn't even have to download anything first. In the technical overview I can see a prefetch showing "Transfer Completed Successfully" with 0 kbps and 0% completion (I've attached a snippet of one so you can see).

 

[Screenshot: technical overview with a prefetch entry reading "Transfer Completed Successfully" at 0 kbps, 0% completion]

 

Something is seriously wrong here: it seems the software can somehow skip downloading the actual chunk and instead just fills in empty data.

 

I have verified that the chunks do exist on Google Drive, and I am able to download them myself.

 

I've been talking to other users who have experienced the same thing!


57 answers to this question

Recommended Posts


Another folder full of files got corrupted. The drive has not been disconnected since the files were added.

And another folder corrupted. Now I'm running chkdsk /f. Let's see if that changes anything...

What's annoying is that you can't even remove the corrupted folder.

 

This is scary.

 

I just started using CloudDrive about a week ago. Still under trial and will probably purchase. 

 

I have transferred about 1.5 TB of data and discovered one folder completely corrupted and unable to be opened. I'm using Google as my provider and was transferring data over from Amazon Cloud. I had to recopy the entire folder.

 

 

1. What's the best method or software to check whether more of my existing data on the StableBit CloudDrive is corrupted?

 

2. Is it safer to create 4 x 1 TB drives for usage as opposed to having just one large 4TB drive?

 

3. Is data corruption less likely to occur if I use an unencrypted cloud drive instead of a fully encrypted one?

 

4. Is there any situation where the entire Cloud Drive can be corrupted and data completely irrecoverable?

 

 

I do not have the resources to keep separate copies of my data in different clouds or on physical HDDs.

 

Thanks !

 

1: Probably by computing an MD5 hash before upload, and then afterward checking whether it still matches the original.

2: Yes, though most people here are probably using 100+ TB drives.

3: I don't believe there is a difference.

4: Someone on the forum mentioned a few months back that their whole drive was corrupted.


How do I resolve this? Thanks. I'll have to stop using CloudDrive if errors continue to mount.

 

 

https://i.imgur.com/gao1cis.jpg

 

 

What size drive are you using?

 

 

Specifically, it looks like Windows (NTFS in particular) may have problems with drives larger than about 55 TB (the actual limit is 64 TB, but there seems to be some window around it).

 

We would recommend smaller disks, or at least smaller volumes on those disks, to ensure you don't exceed this limit. Or use ReFS, which doesn't have this limitation.

 

 

Specifically, there seems to be a design flaw in the volume snapshot system for volumes larger than 64 TB: chkdsk takes a snapshot of the volume and then scans *that*. Since ReFS doesn't use chkdsk, this isn't a consideration there.
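As a quick sanity check against that ceiling, you can compare a volume's total size to the 64 TB snapshot limit. A hedged Python sketch (the 85% safety margin is my own arbitrary assumption, not an official threshold):

```python
# Sketch: warn when a volume's size approaches the 64 TB limit of NTFS
# volume snapshots (VSS), which chkdsk depends on. The safety margin
# below is an assumption for illustration, not a documented value.
import shutil

VSS_LIMIT = 64 * 1000**4   # 64 TB, decimal
SAFETY_MARGIN = 0.85       # warn well before the hard limit

def exceeds_margin(total_bytes):
    """True if a volume of this size is within ~15% of the snapshot limit."""
    return total_bytes >= VSS_LIMIT * SAFETY_MARGIN

def volume_too_large(mount_point):
    """Check a mounted volume, e.g. a drive-letter root on Windows."""
    return exceeds_margin(shutil.disk_usage(mount_point).total)
```

Keeping the size test separate from the disk query makes the threshold easy to verify without a 64 TB volume on hand.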

 

 

As for preventing corruption: if integrity checking was enabled when you created the disk (it is by default), CloudDrive will always validate the data it gets back from the cloud.

 

 

 

And yes, it is definitely safer to use more, smaller drives, and to pool them together.

 

 

 

As for encrypted vs. unencrypted, no, it makes no difference.

 

 

 

And the data should always be recoverable. Worst case, most data recovery tools will work on the drives.


I currently have a single 100 TB volume on the drive that I would like to shrink to 50 TB, then create a second 50 TB volume on the drive and pool them together to avoid any problems in the future.

 

When I try to resize the disk using CloudDrive, it tells me that I need to check the disk for errors before continuing. When I check the disk for errors, it says that errors are found and need to be corrected; however, chkdsk ultimately freezes at one point or another (for days at a time).

 

When I try to shrink the disk using Disk Management, I get the same message.

 

Is there anything I can do to shrink this volume, or should I just start fresh with 2 new cloud drives?


In this case, then, unfortunately, yes, creating new drives may be simplest. (Alternatively, you can delete the volumes and create new ones, but it's effectively the same thing, i.e. destructive.)



Thanks - I have a good internet connection, so I'm not too worried about re-uploading; the biggest pain now is the API limit from Google Drive.


Well, glad to hear that.

And yeah, API/bandwidth limits are the pits.

 

 

And if you're using Google Drive specifically, the bandwidth limit from Google is pretty low, unfortunately. 75 Mbps combined over 24 hours will hit about 700 GB. So if you want to throttle the connection, that's a good starting point.
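For reference, here is the raw arithmetic behind that figure (a worked check; the ~700 GB above is presumably the effective payload after protocol and API overhead, since a lossless 75 Mbit/s stream moves somewhat more):

```python
# Worked check: data moved by a sustained 75 Mbit/s link in 24 hours,
# before any protocol or API overhead.
def daily_transfer_bytes(mbps, hours=24):
    """Bytes transferred at a sustained rate of `mbps` megabits per second."""
    return mbps * 1_000_000 // 8 * hours * 3600

total = daily_transfer_bytes(75)
print(total / 1000**3)  # 810.0 -> about 810 GB decimal (~754 GiB)
```

Google's daily upload cap is commonly cited as 750 GB, so throttling slightly below 75 Mbps keeps a fully saturated connection under it.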

