Covecube Inc.

srcrist

Members · Content Count: 466 · Days Won: 34

Everything posted by srcrist

  1. srcrist

    Google Corruption

    Your file data is probably largely still intact. What people appear to have lost is the NTFS file system data itself, which is why so much is missing. The file data is still on the disk, but the file system metadata required to access it was corrupted by Google's outage. It isn't a problem with an easy fix, unfortunately. You'll have to try data recovery software to see if it can restore the files.
  2. srcrist

    Data corrupted..?

    The volume is repaired and healthy now, though? It's normal for chkdsk to discard data in order to repair corruption if it has to. The next step would be data recovery software to try to recover the lost files. Look into Recuva. It's a free tool that can do exactly that. It will take time to scan the data on the disk, recreate the files, and place them into the now-healthy file system.
  3. srcrist

    Google Corruption

    Chkdsk can take many hours to run repairs if there are problems with the file system. Are you sure that it's hanging? You'll really need to let it do its thing, if it can.
  4. If everything is still working at all, I don't think you're losing data. I couldn't tell you what's going on, as I've never seen that. But if it was *actually* dropping all of the data from the cache spontaneously, it would be catastrophic for the drive, so whatever is going on isn't that. That's good, at least. I'd say just keep an eye on it for a bit and see if it keeps happening. How long has this been going on?
  5. To be clear: the drive is still functioning when this happens, correct? You're not having any trouble accessing the data?
  6. The drive size doesn't really matter. What is important is the volume, or partition, size. You want volumes no larger than 60TB, because Windows cannot repair NTFS volumes larger than 60TB. A single large CloudDrive drive can be divided up into multiple volumes smaller than 60TB, as in the sketch below. I am not aware, however, of multiple drives or larger drives leading to system instability. I use a very large drive myself, along with DrivePool, and have no such issue. So there may be something else going on with your server. And sure! DrivePool works great to expand the drive and add additional volumes.
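     As a purely illustrative sketch of splitting one large CloudDrive disk into multiple sub-60TB NTFS volumes, a diskpart session might look something like this (the disk number, sizes, labels, and drive letters here are hypothetical; adjust them for your own drive):

         rem Hypothetical: carve a 120TB CloudDrive disk into two 55TB NTFS volumes.
         rem Run inside diskpart; sizes are in MB (55TB = 55 * 1024 * 1024 MB).
         select disk 2
         create partition primary size=57671680
         format fs=ntfs quick label=Cloud1
         assign letter=X
         create partition primary size=57671680
         format fs=ntfs quick label=Cloud2
         assign letter=Y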
  7. That's the gist of it, yes. What you want to do is a combination of the process I helped two other users with in other threads. You'll want to create a multi-volume CloudDrive pool as described in the second thread, and set up a nested pool for duplication as described in the first thread. You can find those threads here:
  8. What sort of cache are you using, and what are you doing with the drive when this happens? Based on your description, I am sort of wondering if what you're seeing is the expandable cache growing to accommodate newly written data, and then simply uploading that data to the cloud. That being said, the short answer is simply "no." That isn't normal behavior for the cache. It shouldn't even be possible, based on how the CloudDrive system operates. If your cache simply disappeared, you should have severe drive corruption. So it's likely that something else is going on here if your drive is...
  9. I will note that I really do hope that Alex and Christopher can figure out what happened with the folks that lost data, and make a post to help the rest of us understand. It does worry me somewhat, even though I didn't suffer any data loss myself.
  10. I understand that it's frustrating to lose data, but I also don't think there is really any indication that this incident was CloudDrive's fault. CloudDrive, like any other cloud-based storage solution, is at the mercy of the provider. If Google has issues, and they admitted that they did, CloudDrive can't compensate for them short of storing all of your content locally, which defeats the purpose. It very well may be the case that rClone users did not have problems, since rClone interacts with the API far less frequently than CloudDrive does. It also might simply be the case that rClone users...
  11. No, DrivePool is designed to work with nested pools.
  12. You pool each set of volumes together to create a pooled volume, and then you add the pooled volumes to a pool themselves and enable duplication on that pool. That will mirror each set of volumes with the other, roughly as sketched below.
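      As a purely illustrative sketch of that hierarchy (the pool names and drive letters are hypothetical):

          Pool Z (duplication x2 enabled)
          ├── Pool X ← CloudDrive #1 volumes (D: + E:) pooled together
          └── Pool Y ← CloudDrive #2 volumes (F: + G:) pooled together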
  13. Your assumption is right, but your presumed pool structure is simply wrong. You would, instead, use a hierarchical pool to ensure that duplication is handled correctly.
  14. srcrist

    Data corrupted..?

    Just use chkdsk /f on the drive letter or volume. If it needs to take the drive offline, it will ask you. You can use the /X switch if you know for sure that it needs to dismount.
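    For example, assuming the volume is mounted as X: (a hypothetical letter), typical invocations from an elevated command prompt would be:

        chkdsk X: /f
        chkdsk X: /X

    The /X switch forces the volume to dismount first and implies /f.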
  15. srcrist

    Data corrupted..?

    Have you run chkdsk on the drive yet?
  16. It's obviously entirely your decision, but I think it's safe to say that Google won't simply up and delete your legitimately paid-for data without notice. So I personally wouldn't even worry about the other two. Have you considered, instead, a local copy mirrored to your CloudDrive storage? Again, just as a point of conversation. I'm only sharing my thoughts, not, obviously, telling you what to do. If you *don't* drop it to less than 60TB, you might as well not even bother. Any NTFS volume *will* inevitably need to be repaired. A simple power loss or system freeze/crash could interrupt...
  17. I think it's important to understand that Google may not even be aware of the data loss. If, for example, the issue was with a database server, or simply with their API, Google's servers may have simply been reporting to CloudDrive that its transfers were successful when they had, in fact, failed. In that case, CloudDrive would operate as if the data had been successfully uploaded and remove the local copy from the cache, but Google would not actually have a copy of the data on their servers. Because of your post and a few others here on the forums, I took my volumes offline last night and...
  18. srcrist

    Data corrupted..?

    Based on this and the other post from this afternoon, it's looking like Google may have actually lost some data during their service disruption last night. If that is, in fact, the case, there really isn't anything that can be done. CloudDrive is, of course, built on top of the provider's service, and if the provider loses data, or erroneously reports success when there was actually failure, there isn't really anything that anyone can do about it. I would submit a ticket and a troubleshooter though, in any case.
  19. For specifically this reason, you should try to avoid using NTFS volumes larger than 60TB: they cannot be repaired in case of corruption. Volume Shadow Copy, which chkdsk relies on to make repairs, does not support NTFS volumes larger than 60TB. So that might be why you're seeing the chkdsk error. You can always use multiple volumes partitioned from the same CloudDrive drive, though. I'm not sure why the 16TB volume is also giving you problems, however. I'm not sure what you mean by creating new volumes, exactly, but as long as you don't delete the old volumes to make a new one, the data...
  20. A file system showing up as RAW in Disk Management indicates a corrupted file system. It could be some strange issue with CloudDrive, but it could also be file system data that was lost on Google's end during the service disruption. That's probably not the answer you wanted to hear, but it might still be salvageable. How large are the volumes, and what file system are they? The chkdsk space error makes me wonder if you're using NTFS volumes larger than 60TB, which is larger than chkdsk can handle. Regardless, you need to open an actual ticket here. I don't want to suggest anything that may further...
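      If you want to confirm what file system and size Windows actually sees, something like the following can help (X: is a hypothetical mount point):

          fsutil fsinfo volumeinfo X:
          fsutil fsinfo ntfsinfo X:

      volumeinfo reports the file system name (it will fail or show RAW on a corrupted volume), and ntfsinfo reports the sector and cluster totals you can use to check the volume size.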
  21. It was intended to be somewhat abrupt. There are quite literally 3 other threads answering your questions on the front page right now, all of which I have spent my time responding to. This is not accurate. Nothing in the existing threads would suggest this to be true if you had read through them. With the exception of the prefetcher settings, the variance of which is also covered, nothing else really needs to change based on connection speed. Most of your questions weren't even related to this. I would presume because these are not the Plex forums, and Covecube do not want to e...
  22. All of it. Do that for the entire pool. If you have trouble changing the permissions by simply opening permissions on the entire pool volume, do it on each individual drive that makes up the pool. If you have additional data on any of the drives, doing it on the poolpart folders is probably fine.
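      If you do end up resetting permissions by hand, one hedged example from an elevated command prompt (with P: standing in for your pool drive; this is an illustration, not the official procedure from the linked article) would be:

          takeown /F P:\ /R /D Y
          icacls P:\ /reset /T /C

      takeown retakes ownership of everything recursively, and icacls /reset reapplies inherited permissions down the tree; /C continues past per-file errors.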
  23. I try to be helpful, but this question is simply getting exhausting. Not to be rude, but please use the search function to find previous answers to your questions about recommended settings for Plex. This question is asked almost every week. Beyond that: stop doing this. CloudDrive does not have a mode that is not completely dependent on the cache. Because of how CloudDrive works, it must read and write data through the cache. You're only slowing down the entire process by constantly clearing it. The speed at which the data is being pulled from your provider is clearly shown in the UI, and isn't...
  24. Sounds like a permissions issue. Try this first: http://wiki.covecube.com/StableBit_DrivePool_Q5510455
  25. There is a performance setting to adjust your upload threshold, but turning off background IO in the performance settings will help this too. The default IO priority will prioritize writes over reads.