Everything posted by srcrist

  1. srcrist

    Google Corruption

    Your file data is probably largely still intact. What people appear to have lost is the NTFS file system metadata itself--which is why so much looks like it's missing. The data is still on the disk, but the file system information required to access it was corrupted by Google's outage. It isn't a problem with an easy fix, unfortunately. You'll have to try data recovery software to see if it can restore the files.
  2. srcrist

    Data corrupted..?

    The volume is repaired and healthy now, though? It's normal for chkdsk to discard data while repairing corruption if it has to. The next step would be data recovery software to try to recover the lost files. Look into Recuva. It's a free tool that can do exactly that. It will take some time to scan the data on the disk, rebuild the files, and place them into the now-healthy file system.
  3. srcrist

    Google Corruption

    Chkdsk can take many hours to make repairs if there are problems with the file system. Are you sure that it's hanging? You'll really need to let it do its thing, if it can.
  4. If everything is still working at all, I don't think you're losing data. I couldn't tell you what's going on, as I've never seen that. But if it were *actually* dropping all of the data from the cache spontaneously, it would be catastrophic for the drive. So whatever is going on isn't that, which is good. I'd say just keep an eye on it for a bit and see if it keeps happening. How long has this been going on?
  5. To be clear: the drive is still functioning when this happens, correct? You're not having any trouble accessing the data?
  6. The drive size doesn't really matter. What is important is the volume, or partition, size. You want volumes no larger than 60TB, because Windows cannot repair NTFS volumes larger than 60TB. A single large CloudDrive drive can be divided up into multiple volumes smaller than 60TB (see the sketch below for one way to lay that out). I am not aware, however, of multiple drives or larger drives leading to system instability. I use a very large drive myself, along with DrivePool, and have no such issue. So there may be something else going on with your server. Sure! DrivePool works great to expand the drive and add additional volumes.
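     For what it's worth, here is a rough sketch of how you might carve a large CloudDrive disk into sub-60TB NTFS volumes with a diskpart script. The disk number, sizes, labels, and drive letters are placeholders for illustration only--check them against your own system before running anything.

         rem ==== diskpart script (run with: diskpart /s volumes.txt) ====
         rem Disk number, sizes, labels, and letters below are placeholders.
         select disk 5
         rem Roughly 55TB per volume (size is in MB), comfortably under the 60TB limit:
         create partition primary size=57671680
         format fs=ntfs label=CloudVol1 quick
         assign letter=Q
         create partition primary size=57671680
         format fs=ntfs label=CloudVol2 quick
         assign letter=R
         rem ...repeat for the remaining space, then add the volumes to a DrivePool pool.

     Once the volumes exist, DrivePool just sees them as ordinary disks to pool back together into one drive letter.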
  7. That's the gist of it, yes. What you want to do is a combination of the processes I helped two other users with in other threads. You'll want to create a multi-volume CloudDrive pool as described in the second thread, and set up a nested pool for duplication as described in the first thread. You can find those threads here:
  8. What sort of cache are you using, and what are you doing with the drive when this happens? Based on your description, I am sort of wondering if what you're seeing is the expandable cache growing to accommodate newly written data, and then simply uploading that data to the cloud. That being said, the short answer is simply "no." That isn't normal behavior for the cache. It shouldn't even be possible based on how the CloudDrive system operates. If your cache simply disappeared, you would have severe drive corruption. So it's likely that something else is going on here if your drive is still operational.
  9. I will note that I really do hope that Alex and Christopher can figure out what happened with the folks that lost data, and make a post to help the rest of us understand. It does worry me somewhat, even though I didn't suffer any data loss myself.
  10. I understand that it's frustrating to lose data, but I also don't think there is really any indication that this incident was CloudDrive's fault. CloudDrive, like any other cloud-based storage solution, is at the mercy of the provider. If Google has issues, and they admitted that they did, CloudDrive can't compensate for them short of storing all of your content locally--which defeats the purpose. It very well may be the case that rClone users did not have problems, since rClone interacts with the API far less frequently than CloudDrive does. It also might simply be the case that rClone users had problems and haven't noticed, because rClone doesn't create an actual file system that can be corrupted. It's a lot easier to notice a problem when Windows simply stops reading your data than when you're missing chunks of data in particular files and simply haven't tried to read them. It should also be noted that CloudDrive already has upload verification that can be turned on to prevent API and provider issues from causing problems--but most people do not use it, including myself, because Google tends to be so reliable. Maybe that's changing and we should all turn it on.

      In any case, though, you've been a little less than responsive with respect to diagnosing your problem. The last time you posted, I asked if you had tried chkdsk and explained how to try it, and that was the last thing I heard. Here you say it did nothing. What does that mean? Chkdsk would do something when you tried to use it. Did it give you an error? Did it say that it didn't find any problems? Did it try to correct the problems on the drive? Chkdsk isn't magic. It was a suggestion to see if your files were simply orphaned and could have their file system entries repaired. But, honestly, I don't even know if that's the problem, because the amount of detail you've provided has been minimal. Another user had his file system itself corrupt, and his disk was showing up as RAW in Disk Management. Is that the case for you as well? It sounds to me like your case is different, and you simply lost files--which suggests that chkdsk should, theoretically, be of some use here.

      I had also suggested submitting a ticket and running the troubleshooter to submit the data to StableBit for analysis. Did you do those things as well? Ultimately, though, if you're only talking about 200 files, your best option might simply be data recovery software like Recuva, which can scan the raw data on the disk and rebuild lost files--as long as you haven't continued to write to the drive in an unhealthy state. Have you considered or tried any of those options? Unless Google simply up and deleted your data at random, there is no reason that older files should simply disappear. All of that data should still be on your drive. I'd be more than happy to try and help you, but you'll have to be responsive and detailed.

      But it also might be true that CloudDrive simply isn't the tool for you. Cloud storage has risks, and those are valid considerations for what you're trying to accomplish. CloudDrive has advantages over rClone with respect to drive performance and the ability to make changes in the cloud, but those advantages come with the risks associated with storing an actual drive image in the cloud, as opposed to simply uploading one file at a time. Nobody, in the end, can decide what your risk tolerance is other than you.

      I, personally, would not store anything on any cloud solution that is important and irreplaceable without a local mirror--rClone, CloudDrive, or otherwise. I think it works wonderfully for a media library and other similarly impersonal and replaceable files, but I would never use it for, say, family photographs or important documents. I just don't have the risk tolerance for that.
  11. No, DrivePool is designed to work with nested pools.
  12. You pool each set of volumes together to create a pooled volume, and then you add the pooled volumes to a pool themselves and enable duplication on that pool. That will mirror each set of volumes with each other.
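      As a rough sketch--the drive and pool names here are just placeholders, not anything DrivePool generates for you--the nested layout looks something like this:

          Pool C  (enable 2x duplication here)
           |- Pool A  <- CloudDrive #1: volumes A1 + A2 + A3 (no duplication)
           |- Pool B  <- CloudDrive #2: volumes B1 + B2 + B3 (no duplication)

      Every file placed in Pool C then gets one copy somewhere in Pool A and one copy somewhere in Pool B, so the two sets of volumes mirror each other.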
  13. Your assumption is right, but your presumed pool structure is simply wrong. You would, instead, use a hierarchical pool to ensure that duplication is handled correctly.
  14. srcrist

    Data corrupted..?

    Just use chkdsk /f on the drive letter or volume. If it needs to take the drive offline it will ask you. You can use the /X switch if you know that it needs to dismount for sure.
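    As a concrete example--the drive letter here is just a placeholder--from an elevated command prompt:

        rem Read-only scan first; reports problems without changing anything:
        chkdsk D:
        rem Repair file system errors (prompts to dismount or schedule if the volume is in use):
        chkdsk D: /f
        rem Force the volume to dismount first, invalidating any open handles:
        chkdsk D: /f /x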
  15. srcrist

    Data corrupted..?

    Have you run chkdsk on the drive yet?
  16. It's obviously entirely your decision, but I think it's safe to say that Google won't simply up and delete your legitimately paid-for data without notice. So I personally wouldn't even worry about the other two. Have you considered, instead, a local copy mirrored to your CloudDrive storage? Again, just as a point of conversation. I'm only sharing my thoughts, not, obviously, telling you what to do.

      If you *don't* drop it to less than 60TB, you might as well not even bother. Any NTFS volume *will* inevitably need to be repaired. A simple power loss or system freeze/crash could interrupt a file system modification. CloudDrive's file system is real, and it's subject to the same sources of corruption as your physical drives. Anything over 60TB could be ReFS, but that comes with its own set of potential problems. Until VSS is rewritten to support NTFS volumes larger than 60TB, or another tool emerges to make repairs on volumes that large, there is no point in using such volumes. Any corruption at all runs the risk of cascading to failure, and continuing to write to an unhealthy drive is data suicide. If you simply want to mirror multiple copies at a file level, and not have to worry about the health of the drive structure, rClone might actually be a better tool for you.

      Note, to be clear, that multiple *volumes* are not the same as multiple *drives*, though. You can split a single 1PB CloudDrive drive up into many volumes, and you can always use DrivePool to recombine those volumes into one pool.

      Absolutely. Just note that this will likely affect drive throughput as well. If you have, say, 10 download threads, some percentage of those will be taken up by the verification process any time you are uploading data. Just something to note.

      Any time. And I'm really sorry that you lost a drive. Early on in my CloudDrive experimentation, Windows itself corrupted a ReFS volume during a Windows Update. That feeling sucks. I also hope the damage is minimal. Good luck!
  17. I think it's important to understand that Google may not even be aware of the data loss. If, for example, the issue was with a database server, or simply with their API, Google's servers may have simply been reporting to CloudDrive that its transfers were successful when they had, in fact, failed. In that case CloudDrive would operate as if the data had been successfully uploaded and remove the local copy from the cache, but Google would not actually have a copy of the data on their servers.

      Because of your post and a few others here on the forums, I took my volumes offline last night and ran some tests, and I did have some corruption--but only on the volume that was actively being written during the disruption, and, thankfully, not to the extent that the file system data itself was corrupt. Just some orphaned data that chkdsk had to repair. Since it was only on the volume being modified, it suggests that the problem resulted from the upload API and CloudDrive's response to it. The other five volumes were all fine, and I don't see any missing data.

      Ultimately, if you didn't have any data on the CloudDrive volumes that you didn't already have in your local storage, there isn't any reason to do data recovery. It would be a lengthy process, and all it would accomplish would be to download and create a new local copy of the files from the corrupted file system--which is what you already have. And as long as you created new drives, the existing data on your old CloudDrive volumes will not be touched, in case it can simply be fixed.

      I suspect that Christopher and Alex are going to come back and say that there isn't any way that CloudDrive can be modified to prevent this issue from ever happening--because CloudDrive is simply dependent on the provider's service and API, and that's where the issue arose in this instance. But it's a good idea, in any case, to work through support with them so that they can at least look at the troubleshooting data and perhaps give us an idea what happened.

      To prevent this from happening again, there are a few options, though. Firstly, as I said above, do not use NTFS volumes larger than 60TB. Some degree of file system corruption is as inevitable with CloudDrive as with any other NTFS volume, and depriving yourself of chkdsk to repair that corruption is basically a death pact. Secondly, you can make sure that only one volume is ever being modified at a time, so that only one volume runs the risk of developing corruption if Google's service begins processing erroneous data. Honestly, I'm not sure that mirroring multiple copies of the data across several Google accounts is genuinely productive at all--since, as you said, Google has redundancy themselves and has no track record of losing data that is successfully uploaded to their service. It does, however, multiply the potential points of failure during the transfer process. And, lastly, at the cost of a tremendous amount of bandwidth, you can enable upload verification in the drive integrity settings for your drive--which will download every chunk uploaded and ensure that the data is available on the provider before removing it from the local cache. This means that every byte uploaded will also be downloaded, but it also means that CloudDrive will never remove local data unless it can absolutely verify that the data can be retrieved from the provider.

      All of this being said, though, I've been using CloudDrive for roughly three years now, and I try to visit the forums regularly to help others, and this is the first time that I can remember a service outage like this, and the first time that I've seen data corruption based on a service disruption. So, *knock on wood*, this will hopefully remain a problem not worth too much anxiety on a regular basis.
  18. srcrist

    Data corrupted..?

    Based on this and the other post from this afternoon, it's looking like Google may have actually lost some data during their service disruption last night. If that is, in fact, the case, there really isn't anything that can be done. CloudDrive is, of course, built on top of the provider's service, and if the provider loses data, or erroneously reports success when there was actually a failure, there isn't really anything that anyone can do about it. I would submit a ticket and run the troubleshooter, though, in any case.
  19. For exactly this reason, you should try to avoid using NTFS volumes larger than 60TB: they cannot be repaired in case of corruption. Volume Shadow Copy, which chkdsk relies on to make repairs, does not support NTFS volumes larger than 60TB. So that might be why you're seeing the chkdsk error. You can always use multiple volumes partitioned from the same CloudDrive drive, though. I'm not sure why the 16TB volume is also giving you problems, however. I'm not sure what you mean by creating new volumes, exactly, but as long as you don't delete the old volumes to make a new one, the data will still be there if it can be salvaged. I want to wait to see what Christopher and Alex want you to do, but with three copies of the data, software like Recuva can likely retrieve your data--even if the file system is genuinely corrupted. But wait until they respond before trying it. Hope you can find a solution soon.
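      If you want to confirm whether a given volume is actually over that 60TB line, something like the following works from an elevated command prompt; the drive letter is just a placeholder:

          rem Reports bytes per cluster, total clusters, and other NTFS details for the volume:
          fsutil fsinfo ntfsinfo D:

      Multiplying the total clusters by the bytes per cluster gives you the volume size.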
  20. A file system showing up as RAW in Disk Management indicates a corrupted file system. It could be some strange issue with CloudDrive, but it could also be file system data that was lost on Google's end during the service disruption. That's probably not the answer you wanted to hear, but it might still be salvageable. How large are the volumes, and what file system are they? The chkdsk space error makes me wonder if you're using NTFS volumes larger than 60TB, which is larger than chkdsk can handle.

      Regardless, you need to open an actual ticket here. I don't want to suggest anything that may further corrupt the drive. Head to the support section of the website and submit a ticket. Go ahead and run the troubleshooter and attach the ticket number once you do that. Find the troubleshooter here: http://wiki.covecube.com/StableBit_Troubleshooter

      For what it's worth, my drives had a service interruption last night and came back as normal. So hopefully it's just a software quirk. Remember, also, that data recovery software can be used on CloudDrive volumes if it comes to that.
  21. It was intended to be somewhat abrupt. There are quite literally three other threads answering your questions on the front page right now, all of which I have spent my time responding to.

      This is not accurate. Nothing in the existing threads would suggest this to be true if you had read through them. With the exception of the prefetcher settings, the variance of which is also covered, nothing else really needs to change based on connection speed. Most of your questions weren't even related to this.

      I would presume because these are not the Plex forums, and Covecube do not want to endorse this particular use of the product? Or simply because nobody has gotten around to it. To my knowledge, Plex also does not have a sticky about how to use CloudDrive. You could ask them the same question. I cannot, in any case, sticky threads. I can only answer them.

      Okay. You do that. I certainly won't be providing you with any more.
  22. All of it. Do that for the entire pool. If you have trouble changing the permissions by simply opening permissions on the entire pool volume, do it on each individual drive that makes up the pool. If you have additional data on any of the drives, doing it on the PoolPart folders is probably fine.
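      If you'd rather do that from the command line than from the Explorer security dialog, the usual pattern looks something like this. The drive letter and PoolPart folder name are placeholders, and this is a generic NTFS ownership/permissions reset rather than anything specific to DrivePool:

          rem Take ownership of the hidden PoolPart folder and everything inside it:
          takeown /F "D:\PoolPart.xxxxxxxx" /R /D Y
          rem Reset the ACLs back to inherited defaults, recursively, continuing past errors:
          icacls "D:\PoolPart.xxxxxxxx" /reset /T /C

      Run that per pooled drive if resetting permissions on the pool volume itself doesn't take.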
  23. I try to be helpful, but this question is simply getting exhausting. Not to be rude, but please use the search function to find previous answers to your questions about recommended settings for Plex. This question is asked almost every week.

      Beyond that: stop doing this. CloudDrive does not have a mode that is not completely dependent on the cache. Because of how CloudDrive works, it must read and write data from the cache. You're only slowing down the entire process by constantly clearing it. The speed at which data is being pulled from your provider is clearly shown in the UI, and isn't really meaningful with respect to how quickly the drive will operate in actual practice. CloudDrive is more than capable of saturating a 60mbps connection when pulling data from Google, if it needs to.

      Torrents are fine. There is no file corruption from using CloudDrive, for torrents or otherwise. The data is written to the cache drive the same way. Your biggest limitation will simply be your 60mbps connection. For the sake of avoiding constant saturation, I would probably avoid pointing a torrent client directly at the drive. Streaming video will already be using 10-25mbps, and that's not including any overhead or other disk functions. If you plan on doing anything else with your internet connection, you probably don't want a torrent client constantly pulling random chunks off of the CloudDrive volume.
  24. Sounds like a permissions issue. Try this first: http://wiki.covecube.com/StableBit_DrivePool_Q5510455
  25. There is a performance setting to adjust your upload threshold, but turning off background IO in the performance settings will help this too. The default IO priority will prioritize writes over reads.