Everything posted by srcrist

  1. I mean, this assumption is faulty, and important data should never be trusted exclusively to a single backup solution--cloud or otherwise. There is no such thing as the perfect backup solution. There is a reason that the 3-2-1 backup rule exists: 3 copies, 2 different types of media, at least 1 offsite location. If you're relying exclusively on CloudDrive or any other storage solution as your only backup option, you're going to have a bad time. Google rolled back data and corrupted many CloudDrive volumes (we think), and they may do the same in the future. Google Drive is a consumer solution…
  2. Again, other providers *can* still use larger chunks. Please see the changelog; this was because of issue 24914, documented here. Again, this isn't really correct. The problem, as documented above, is that larger chunks result in more retrieval calls to particular chunks, thus triggering Google's download quota limitations. That is the problem that I could not remember. It was not because of concerns about speed, and it was not a general problem with all providers. EDIT: It looks like the issue with Google Drive might be resolved with an increase in the p…
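    A toy model of the quota problem described above (all numbers are hypothetical, chosen just to make the effect visible): reads that cluster in a "hot" region, like filesystem metadata, land on far fewer distinct chunks when the chunks are larger, so each of those chunks is fetched many more times and hits Google's per-file download quota sooner.

        import random

        READS = 10_000
        HOT_REGION = 200 * 1024**2  # reads clustered in a 200 MB hot spot (assumption)
        offsets = [random.randrange(HOT_REGION) for _ in range(READS)]

        for chunk_mb in (20, 100):
            chunk_size = chunk_mb * 1024**2
            hits = {}
            for off in offsets:
                idx = off // chunk_size          # which chunk this read touches
                hits[idx] = hits.get(idx, 0) + 1
            print(f"{chunk_mb:>3} MB chunks: {len(hits)} chunks touched, "
                  f"~{max(hits.values())} fetches on the hottest chunk")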
  3. Christopher, I'm sure you've spoken with Alex about this issue. I'm just wondering if there's been any discussion of infrastructural changes that might be able to improve the reliability of the file system data? I was wondering if, for example, CloudDrive could store a periodic local mirror of the file system data which could be restored in case of corruption? I don't know enough about NTFS and how the journal and such are stored on the drive to know if this is feasible or not. It just seems to me that almost everyone (who had an issue) saw file system corruption, but not corruption of the actual…
  4. The maximum chunk size is actually a per-provider limitation. Some providers *can* use chunks larger than 20MB. During beta, Google could use chunks as large as 100MB, if I remember right, but that caused some sort of issue with Google's service and API limitations, the specifics of which escape me. So this isn't really a matter of CloudDrive's features, but of those supported by the provider you're using.
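    As a sketch of what "per-provider limitation" means in practice (only the 20 MB Google cap is confirmed above; the other entry and the function are hypothetical illustrations, not CloudDrive's actual code):

        # Hypothetical provider table -- only the Google figure is confirmed above.
        MAX_CHUNK_MB = {
            "google_drive": 20,
            "local_disk": 100,   # made-up example value
        }

        def clamp_chunk_size(provider: str, requested_mb: int) -> int:
            """Clamp a requested chunk size to the provider's maximum."""
            return min(requested_mb, MAX_CHUNK_MB.get(provider, 20))

        print(clamp_chunk_size("google_drive", 100))  # -> 20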
  5. You'd have to figure out which CloudDrive drive is contained in which folder on your Google Drive. If you open the technical details in CloudDrive and look at the drive details window, the "Uid" under the "Overview" heading will correspond to the folder name on your Google Drive. You'll have to restore EVERYTHING under that folder.
  6. You should consider that if it DOESN'T work, though, it may render the entire drive unrecoverable even using photorec or recuva. Once that data is missing from Google's servers, there's no getting it back.
  7. Now that's an interesting possibility. Maybe? Sure. You'd want to detach the CloudDrive first, probably. It might be worth a shot. The Google outage was March 13th, so a date before that would be your best shot. If this works, it would definitely help to confirm that some sort of partial rollback is the cause of this issue.
  8. Well, that's odd. There are other options, like photorec. Try that instead.
  9. Yes...with the caveat that it didn't prevent the Google corruption that happened last month even if people used multiple accounts. The problem appears to be that Google rolled back data to an older version of some of the files. This is obviously fine for the actual file data itself, since that doesn't really change. But the chunks containing the filesystem data DO change. Often. So everybody's file systems were corrupted. If you mirror the pool to another pool that is on another account, and Google has a similar issue, both pools will be modified basically simultaneously, and both p…
  10. You can just go ahead and use recuva. It's going to scan the drive sector by sector, so it doesn't matter if the file system is screwed.
  11. I'm afraid I don't have good news for you... I did all of the research I could, and, as far as I could tell, that just means the drive is borked. That error would usually indicate a failing hard drive, but that's obviously not the case here. It's just unrecoverable corruption. The data on the drive is probably recoverable with recuva. I could recover mine that way, at least. Ultimately, though, I didn't have anything irreplaceable on my drive, so I just opted to wipe it and start over rather than go through and rename everything. Any files recovered will have arbitrary names. That d…
  12. Nobody in the community is 100% positive why this is happening (including Alex and Christopher). Christopher has said that the only way this *should* be able to happen is if the chunks were modified in the cloud. Google had a pretty significant service outage on March 13th, and we started seeing reports of these corruption issues on the forum immediately after that. My best personal theory is that whatever Google's issue was, they did a rollback and restored older versions of some of the chunks in the cloud. This would obviously corrupt a CloudDrive drive. The above post covers the only…
  13. To be clear: None of those show how much of the drive is actually used by data in the same way that the system sees it. CloudDrive simply cannot tell you that. For some providers, like the local disk provider, all of the chunks are created when the drive is created--so "Cloud Unused" isn't a relevant piece of data. It creates the chunks to represent the entire drive all at once, so the amount of space you specify is also always the amount of space used on your storage device--in this case, a local drive. For some providers, like Google Drive, CloudDrive does not create all of the chunks…
  14. Again, CloudDrive itself has no way to know that. You specify a size because that is the amount of space that CloudDrive creates in order to store things, but whether or not space is available for USE is simply not something that the drive infrastructure is aware of. The file system, at the OS level, handles that information. You can always, of course, simply open up Windows Explorer to see how much space is available on your drive. But at the level at which CloudDrive operates, that information simply is not available. Furthermore, the drive can contain multiple volumes--so it can't…
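    Since the OS, not CloudDrive, tracks free space, querying it is an OS-level call. A minimal sketch (the drive letter is an assumption; substitute your own mount point):

        import shutil

        # Ask the OS for the filesystem-level usage of the mounted volume.
        total, used, free = shutil.disk_usage("D:\\")
        print(f"total {total / 1024**3:.1f} GiB, "
              f"used {used / 1024**3:.1f} GiB, "
              f"free {free / 1024**3:.1f} GiB")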
  15. I think there might be a fundamental misunderstanding of how CloudDrive operates here. Christopher can correct me if I'm wrong, but my understanding is that CloudDrive, as an application, is simply not aware of the filesystem usage on the drive. Think of the CloudDrive software as analogous to the firmware that operates a hard drive. It might be able to tell you if a particular portion of the drive has been written to at least once, but it can't tell you how much space is available on the drive because it simply doesn't operate at that level. In order for CloudDrive, or your HDD's firmware, to…
  16. To my knowledge, Google does not throttle bandwidth at all, no. But they do have the upload limit of 750GB/day, which means that a large number of upload threads is relatively pointless if you're constantly uploading large amounts of data. It's pretty easy to hit 75 Mbps or so with only 2 or 3 upload threads, and anything more than that will exceed Google's upload limit anyway. If you *know* that you're uploading less than 750GB that day anyway, though, you could theoretically get several hundred Mbps of performance out of 10 threads. So it's sort of situational. Many of us do use servers with…
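    The arithmetic behind that: 750 GB/day works out to only about 69 Mbps sustained, so 2-3 threads at ~75 Mbps already saturate the cap.

        # Unit conversion only -- the 750 GB/day cap comes from the post above.
        cap_bytes = 750 * 1000**3
        seconds_per_day = 24 * 60 * 60
        mbps = cap_bytes * 8 / seconds_per_day / 1000**2
        print(f"{mbps:.1f} Mbps sustained")   # ~69.4 Mbps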
  17. Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that, since I see throttling around 12-15 threads in one direction, the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  18. Glad to see an official response on this. Christopher, are you able to provide a quick explanation of *why* that process would help? What exactly is going on with these RAW errors, and can they be prevented in case of a Google outage in the future? Would turning on file upload verification help?
  19. What result did chkdsk give you? Does it report that the volume is fine? Or is it giving you some other error? Also open an actual support ticket here: https://stablebit.com/Contact And run the troubleshooter and attach your support ticket number after you submit that request. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter This is probably a result of Google's issues a few weeks ago, but different people are experiencing different levels of corruption from that. So we'll need to figure out your specific situation to get a solution--if one…
  20. It won't really limit your ability to upload larger amounts of data, it just throttles writes to the drive when the cache drive fills up. So if you have 150GB of local disk space on the cache drive, but you copy 200GB of data to it, the first roughly 145GB of data will copy at essentially full speed, as if you're just copying from one local drive to another, and then it will throttle the drive writes so that the last 55GB of data will slowly copy to the CloudDrive drive as chunks are uploaded from your local cache to the cloud provider. Long story short: it isn't a problem unless high speed…
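    A rough model of that copy (the 150 GB cache and 200 GB copy are from the post; the ~5 GB of cache headroom is an assumption to match the "roughly 145GB" figure):

        cache_gb, copy_gb, reserve_gb = 150, 200, 5
        fast_gb = min(copy_gb, cache_gb - reserve_gb)   # written at local-disk speed
        slow_gb = copy_gb - fast_gb                     # throttled to upload speed
        print(fast_gb, slow_gb)                         # 145 55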
  21. SSD. Disk usage for the cache, particularly with a larger drive, can be heavy. I always suggest an SSD cache drive. You'll definitely notice a significant impact. Aside from upload space, most drives don't need or generally benefit from a cache larger than 50-100GB or so. You'll definitely get diminishing returns with anything larger than that. So speed is far more important than size.
  22. I'm not sure why it would need to reindex all of the files on the drive like that, but if it does, indeed, need to search everything once, you could probably use an application like WinDirStat to do it all in one go. It'll touch every file on the drive in a few minutes.
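    If you'd rather script it than install WinDirStat, here is a minimal stand-in that stats every file once, so the drive's metadata is read in a single pass (the path is an assumption; substitute your own):

        import os

        count = 0
        for root, dirs, files in os.walk("D:\\"):
            for name in files:
                try:
                    os.stat(os.path.join(root, name))  # touch the file's metadata
                    count += 1
                except OSError:
                    pass                               # skip files that vanish mid-scan
        print(f"touched {count} files")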
  23. No, you can only throttle on a per-drive basis.
  24. That's just a warning. Your thread count is a bit too high, and you're probably getting throttled. Google only allows around 15 simultaneous threads. Try dropping your upload threads to 5 and keeping your download threads where they are. That warning will probably go away. Ultimately, though, even temporary network hiccups can occasionally cause those warnings. So it might also be nothing. It's only something to worry about if it happens regularly and frequently.
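    To make the thread-count advice concrete, a minimal sketch of capping concurrency at 5 uploads (the semaphore pattern is illustrative, not how CloudDrive is actually implemented; upload_chunk is a hypothetical stand-in for the real network call):

        import threading
        import time

        UPLOAD_SLOTS = threading.Semaphore(5)   # at most 5 uploads in flight

        def upload_chunk(chunk_id: int) -> None:
            with UPLOAD_SLOTS:                  # blocks while 5 uploads are running
                time.sleep(0.1)                 # stand-in for the actual upload
                print(f"chunk {chunk_id} uploaded")

        threads = [threading.Thread(target=upload_chunk, args=(i,)) for i in range(20)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()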
  25. Data corrupted..?

    Right. That is how chkdsk works. It repairs the corrupted volume information and will discard entries if it needs to. Now you have a healthy volume, but you need to recover the files if you can. That is a separate process. It's important to understand how your file system works if you're going to be managing terabytes of data in the cloud. The alternative would have been operating with an unhealthy volume and continuing to corrupt data every time you wrote to the drive. Here is some additional information that may help you: https://www.minitool.com/data-recovery/recover-data-