
Files starting to go corrupt, Cloud Drive throws no errors


steffenmand

Question

Recently, pretty much all the files on some of my drives have started to go corrupt.

 

At first I thought, OK, beta errors happen, but now this is starting to happen on multiple drives.

 

StableBit CloudDrive seems to prefetch fine and throws no errors, but what arrives is just corrupt data.

 

The data in question has all been lying at rest, so we have not uploaded new corrupt data on top of it.

 

This is the second drive this has happened to.

 

The funny part is, if I try to copy one of the corrupt files from the cloud drive to my own disk, it shows copy speeds of 500+ MB/s and finishes in a few seconds, as if it didn't even have to download anything first. In the technical overview I can see the prefetch showing "Transfer Completed Successfully" with 0 kbps and 0% completion (I've attached a snippet of one so you can see).

 


 

Something is seriously wrong here; it seems it can somehow skip downloading the actual chunk and instead fills in empty data.

 

I have verified that the chunks do exist on Google Drive, and I am able to download them myself.

 

I've been talking to other users who have experienced the same thing!


Recommended Posts


Deleting the newest chunk wherever we had multiple chunks with the same key worked. The drive now mounts properly! Before trying this, make sure to back up the chunks so you can restore them if needed!

 

If you have already indexed, restart the indexing procedure by resetting the settings and the database under Troubleshooting -> Reset Settings, as your index will currently be pointing to the wrong chunk wherever we had duplicates.

 

I'm sorry, I think I'm being slow... Which chunks are you deleting?

 

My drive is still RAW; the CONTENT folder is full of chunks, and the ChunkIdStorage is empty. I'm not an admin on my Google Drive, so I can't restore anything that was deleted, but it looks like I can roll back individual files a day or two. Is there any hope?



Go to the technical overview and look at the service log.

 

You will get a warning about multiple chunks with the same ID, along with the chunk number. Search for that number on the drive and you should find two chunks. Download the newest one (as a backup) and then delete it. Do this for all the chunks that have duplicates.

 

After you finish, go to Troubleshooting -> Reset Settings and include the databases.

 

Reboot and start :-) It should mount fine now! If not, re-upload all the chunks you downloaded and open a ticket with StableBit.
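
If you have a local copy or mirror of the provider's chunk folder to inspect, a small script can list the duplicates for you. This is only a minimal sketch under assumptions: the folder path is hypothetical, and it treats the newest copy of each name as the one to back up and delete, per the steps above.

    import os
    from collections import defaultdict

    # Hypothetical path to a local copy of the provider's chunk folder;
    # adjust to wherever you can actually see the chunk files.
    CHUNK_DIR = r"D:\CloudDriveChunks"

    # Group every file path under its filename.
    by_name = defaultdict(list)
    for root, _dirs, files in os.walk(CHUNK_DIR):
        for name in files:
            path = os.path.join(root, name)
            by_name[name].append((os.path.getmtime(path), path))

    # For each duplicated name, keep the oldest copy and report the newer ones.
    for name, copies in by_name.items():
        if len(copies) > 1:
            copies.sort()  # oldest first
            for _mtime, path in copies[1:]:
                print("newer duplicate chunk (back up, then delete):", path)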



I think using the original chunk IDs can fix the drive, but I would suggest waiting for the developer to help you, as modifying files could potentially start a chain reaction that wipes everything. I am probably going to remake my drive, since I'm not sure how corrupt this one is, and a fresh one would be better because I'd know it's clean.



Is there a way to have the application automatically discard the duplicate chunks that were made after a certain date? Or automatically keep/use the oldest file if it finds duplicates? That may fix my problem; then I could just restore my Google Drive to, say, the 28th of January, and voila?


Apparently the new beta has the ability to combine duplicate chunks into one. Did that fix it for you?


 

I think my drive is too far gone. I tried it, but it remained RAW.

 

I was partway through a backup to another cloud drive. That one recovered fine, but I'm now missing a load of stuff that wasn't backed up.

 

I wonder if it's a good idea to create a full backup of your drives and detach them, then every once in a while connect and sync them. Like a backup, although it would be on the same provider... But it would mitigate these issues.


I guess you could use another utility to copy the chunks to a backup folder once a month. It won't use any client-side bandwidth, as Google Drive supports server-side copying. One utility that can be used together with DrivePool is rclone, since it supports server-side operations.
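
For example, something like this (a sketch: "gdrive" is an assumed rclone remote name, and both folder names are assumptions you'd adjust to your setup; a copy within the same Google Drive remote is performed server-side):

    rclone copy "gdrive:StableBit CloudDrive" "gdrive:CloudDrive-Backup" --progress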


 

Ah, that's a good idea. I will check that out :D


Heh, it still stays RAW... I guess I'd better dust off the old local disks and plug in the array controller :\ and see what I can recover.

 

I'm just thinking of a way to prevent this in the future without having local disks around... Any suggestions other than the rclone idea from Akame?


Well, if you're using the 1.0.0.830+ build, this should not happen in the future. The issue that allowed this to happen has been patched.

 

Otherwise, the best advice I can give you is to always, ALWAYS have multiple copies of any important files. Even if you stop using StableBit CloudDrive, you should absolutely do this, because you never know when "something" might happen to one of those sets of data. 

 

https://www.veeam.com/blog/how-to-follow-the-3-2-1-backup-rule-with-veeam-backup-replication.html


Hey Chris,

 

Yeah, I haven't lost anything important; I have quite a number of backups of my important stuff on multiple services and also on multiple physical drives in separate locations.

 

I was just hopeful about this one, since I'd just sorted out my TV show library; it took a while to get it organized and working the way I wanted, so I was hoping I wouldn't have to do it all over again, heh.

 

Thanks for the help, though. Loving the product so far!


Now I have this kind of error: files randomly get corrupted. So far I have found ~50 files. I'm using the 1.0.1.877 beta.

 

What provider? File system? Integrity checking enabled? Upload verification enabled?

 

Have you run a memory test on the system? 

Have you tried checking the disk used for the cache?
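
For reference, built-in Windows tools cover both checks (the drive letter below is an assumption):

    mdsched.exe      (Windows Memory Diagnostic; schedules a RAM test at the next boot)
    chkdsk D: /f     (checks the cache disk; replace D: with your cache drive's letter)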

 

 

The reason I ask is that the software adds a checksum block to each storage chunk for integrity checking. If this information is intact, then it's not the software that is causing the corruption; something is corrupting the data as it's being written.
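
Conceptually, the check works like the sketch below. This is only an illustration; CloudDrive's actual checksum algorithm and chunk layout are not shown here, so SHA-256 and the trailing-digest layout are assumptions.

    import hashlib

    def seal_chunk(data: bytes) -> bytes:
        # Append a checksum block to the chunk before it is uploaded.
        return data + hashlib.sha256(data).digest()

    def open_chunk(stored: bytes) -> bytes:
        # Split payload and checksum, then verify after download.
        data, digest = stored[:-32], stored[-32:]
        if hashlib.sha256(data).digest() != digest:
            raise IOError("chunk failed integrity check (corrupted in transit or at rest)")
        return data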


* Gdrive

* NTFS, 100 TB disk

* Can't find the integrity check option in the latest version

* Upload verification off

 

 

Memtest done; it's OK. Cache SSD is OK.

 

Some corrupted files are dated a few months back, some a few days or a week. Hundreds of files copied in between are OK; just some of them are broken. The annoying part is that I can't even delete them.


Not sure if this is related to your issue, but I had drive corruption as well with my 256 TB cloud drive. Chkdsk would detect the errors but could never repair them. I then found an article in Microsoft's knowledge base stating that chkdsk repairs require the Volume Shadow Copy service, and that service does not work on partitions over 64 TB. I couldn't shrink my partition in CloudDrive, but I could shrink it in Windows Disk Management to 50 TB, and then chkdsk successfully repaired the volume.
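
For anyone else in this spot, the shrink-then-repair sequence looks roughly like this (the volume number, drive letter, and shrink amount are assumptions to adjust; diskpart's shrink amount is given in megabytes):

    diskpart
      list volume
      select volume 3          (the CloudDrive volume)
      shrink desired=52428800  (amount to shrink by, in MB)
      exit
    chkdsk X: /f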


 

Integrity checking is enabled at drive creation, and it's on by default.

 

You can check the status by mousing over the disk info in the UI (towards the top).

 

 

 


 

Hmm.

 

In this case, the "scan" option may fix it, as it forces an online scan, though it does skip some checks.
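
If you want to run the equivalent check by hand, NTFS's online scan is chkdsk's /scan switch (the drive letter below is an assumption):

    chkdsk X: /scan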


Another folder full of files got corrupted. The drive has not been disconnected since the files were added.


This is scary.

 

I just started using CloudDrive about a week ago. I'm still on the trial and will probably purchase.

 

I have transferred about 1.5 TB of data and discovered one folder that was completely corrupted and could not be opened. I'm using Google as my provider and was transferring data over from Amazon Cloud. I had to recopy the entire folder.

 

 

1. What's the best method or software to check whether more of my existing data in StableBit CloudDrive is corrupted?

 

2. Is it safer to create 4 x 1 TB drives as opposed to having just one large 4 TB drive?

 

3. Is data corruption less likely to occur if I use an unencrypted cloud drive instead of a fully encrypted one?

 

4. Is there any situation where the entire Cloud Drive can be corrupted and data completely irrecoverable?

 

 

I do not have the resources to keep separate copies of my data in different clouds or on physical HDDs.

 

Thanks !
