Yup, it's me again, on the latest version. I completely trashed the cloud drives I was working with before and made a new one a couple of months ago. It was working fine until today, when I scanned through my files and found that random files in random directories all over the drive have gone missing again. CloudDrive is version 1.1.0.1051, DrivePool is 2.2.2.934, and the server runs Windows Server 2012. The cloud drive is a 256TB drive split into 8 NTFS partitions (I ditched ReFS in case that was causing the earlier issues; apparently not), each mounted to an NTFS directory. Multiple levels of DrivePool then pool all 8 partitions into one drive, and that drive is pooled (on its own for now, but structured this way for future expansion) into another drive. All content is accessed through that top-level drive. Everything seemed fine for months until I looked through the files today and found that a good 5-10% of them have gone missing entirely. I really need to get this figured out as soon as possible; I've wasted so much time on this issue since I first reported it almost a year ago. What information do you need?
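Something like this rough sketch is what I mean by "scanning through all my files" for missing ones (the paths are placeholders, not my actual layout): walk the top-level pool drive, save the file listing, and diff it against the previous run.

```python
import json, os

POOL_ROOT = r"D:\Pool"                    # placeholder: top-level pooled drive
MANIFEST = r"C:\logs\pool-manifest.json"  # placeholder: where the last listing is kept

def scan(root):
    """Collect relative paths of every file currently visible under root."""
    found = set()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            found.add(os.path.relpath(full, root))
    return found

current = scan(POOL_ROOT)

if os.path.exists(MANIFEST):
    with open(MANIFEST) as f:
        previous = set(json.load(f))
    missing = sorted(previous - current)
    print(f"{len(missing)} of {len(previous)} files missing since the last scan")
    for path in missing[:50]:             # show a sample
        print("  MISSING:", path)

# save the current listing for the next comparison
with open(MANIFEST, "w") as f:
    json.dump(sorted(current), f)
```

Nothing fancy, but it at least pins down exactly which paths vanished between scans.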
Edit:
No errors in either piece of software. I even bought two SSDs for a RAID1 array to use as the cache drive, so I've ruled out bandwidth issues.
Edit again:
Just tried remounting the drive, and DrivePool shows "Error: (0xC0000102) - {Corrupt File}", which caused it to fail to measure a particular path within one of the 8 partitions. How does this happen? Is there a way to restore it?
Another note:
The thing is, I don't think this is just a DrivePool issue, because the first time I reported it I was using a single large CloudDrive without DrivePool at all.
More:
I went into that partition's file system directly and found that one of the folders inside one of DrivePool's directories is corrupt and can't be accessed. I'm running chkdsk on the entire partition now and will update once it's done.
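While chkdsk grinds away, a rough sketch along these lines (the mount folder names are placeholders) could walk all 8 partition mount points and flag any other folders that error out when enumerated, to see whether the corruption is confined to that one directory:

```python
import os

# placeholders: the NTFS directories the 8 partitions are mounted to
MOUNT_POINTS = [rf"C:\CloudMounts\part{i}" for i in range(1, 9)]

bad_dirs = []

def note_error(err):
    """os.walk calls this for any directory it cannot read."""
    bad_dirs.append((err.filename, err))

for mount in MOUNT_POINTS:
    for _dirpath, _dirnames, _filenames in os.walk(mount, onerror=note_error):
        pass  # we only care about the directories that fail to enumerate

for path, err in bad_dirs:
    print(f"UNREADABLE: {path} -> {err}")
print(f"{len(bad_dirs)} unreadable directories found")
```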
Sigh...:
If it ever completes, that is. It's hanging on the second "recovering orphaned file" item it reached. I'll leave it running for however long seems reasonable...
A question:
While I'm waiting on this: is there a better way to set up a large cloud drive (on the order of hundreds of terabytes) without breaking things like the ability to grow it in the future or the ability to run chkdsk (which fails on very large volumes)? I'm at my wit's end here and not sure how to move forward. Is anyone here storing large amounts of data on Google Drive without running into this file deletion/corruption issue? Is it my server? I've had almost 10TB of local storage sitting here just fine without ever losing any data, and from searching this forum almost no one else seems to be hitting the issues I've been plagued with. Where do I go from here? I'm almost ready to just ship my server to you guys for monitoring and debugging (insert forced laughter as my mind slowly devolves into an insane mess here).