srcrist

Members

  • Content Count: 272
  • Joined
  • Last visited
  • Days Won: 16

srcrist last won the day on April 15

srcrist had the most liked content!

About srcrist

  • Rank: Advanced Member

Recent Profile Visitors

392 profile views
  1. Unclear. You need to submit a proper ticket to support for this. Do so here: https://stablebit.com/Contact It's possible that a memory error fundamentally corrupted something; it's also possible that it can be fixed by simply deleting the data on the PC. Let official support walk you through that, to be sure. This is true, but remember that this limitation is per volume, not per drive. His volumes could be smaller than 60TB on a 256TB drive--five volumes of roughly 51TB each, for example, would keep every volume under the limit.
  2. In that case, I'm not sure. There might be a handle issue causing a problem, but chkdsk generally asks whether you want to close all open handles if it encounters one. You could try /X and see if that helps (see the example below). I would definitely open an actual ticket with Covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on, especially considering some of the recent problems.
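     For reference, a typical invocation from an elevated PowerShell or command prompt looks like the sketch below, where G: is just an example letter standing in for your CloudDrive volume:

         chkdsk G: /f /x

     The /f flag tells chkdsk to fix any errors it finds, and /x forces the volume to dismount first, closing any open handles before the scan runs.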
  3. Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. That's getting quite confusing in this thread, because people are conflating it with the file system corruption that has been leading to data loss and RAW mounts--which is something different entirely. "File system damage" in Scanner is a normal part of life with any NTFS file system and can be caused by something as simple as power loss or a software crash. A drive showing up RAW indicates a much larger problem. Both are technically file system damage. As far as the drive showing up damaged in Scanner but not in chkdsk, I've only personally seen that get wonky with volumes that chkdsk does not properly support. How large is the drive you're trying to use? Is it larger than 60TB total? chkdsk does not support volumes 60TB or larger. I've seen Scanner throw warnings simply because chkdsk cannot properly handle the volume size, and that would explain your errors when running chkdsk as well. (You can verify your volume sizes with the quick check below.)
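     If you want to double-check your volume sizes before going further, here's a quick sketch using standard PowerShell (nothing CloudDrive-specific is assumed):

         Get-Volume | Select-Object DriveLetter, FileSystemType, @{Name='SizeTB'; Expression={[math]::Round($_.Size / 1TB, 2)}}

     Any volume reporting 60 or more in the SizeTB column is outside what chkdsk supports, so Scanner warnings on it can't be verified with chkdsk.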
  4. When was the drive created? After March 13th? My drive did not exhibit corruption for weeks afterward, but it was still corrupted by the same outage. The answer to your question remains, in any case, that a drive becomes corrupted even when the data was verified because the data is modified in place on Google's servers after the fact. That's how it happens; we just don't know exactly why. It obviously shouldn't happen, but I'm really not sure what can prevent it.
  5. To be clear: did you re-upload data to the same drive?
  6. Because on March 13th Google had some sort of service disruption, and whatever their solution to that problem was appears to have modified some of the data on their service. So any drive that existed before March 13th appears to have some risk of having been corrupted. Nobody knows why, and Google obviously isn't talking about what they did. My best guess is that they rolled some amount of data back to a prior state, which would corrupt the drive if some chunks were a version from mid-February while others were a newer version from March--assuming the data was modified during that time. But ultimately nobody can answer this question. Nobody seems to know exactly what happened--except for Google. And they aren't talking about it. No amount of detaching and reattaching a drive, or changing the settings in CloudDrive, is going to change how the data exists on Google's servers or protect you from anything that Google does that might corrupt that data in the future. It's simply one of the risks of using cloud storage. All anyone here can do is tell you what steps can be taken to try and recover whatever data still exists. And "RAW" is just Windows' way of telling you that the file system is corrupt. RAW just means "I can't read this file system data correctly." RAW isn't actually a specific file system type; it's simply telling you that the file system is broken. Why it's broken is something we don't seem to have a clear answer for. We just know that nothing was broken before March 13th, and people have had problems since. Drives that I have created since March 13th have not had issues either.
  7. Chkdsk can never do more harm than good. If it recovered your files to a Lost.dir or the data is corrupted, then the data was already unrecoverable. Chkdsk restores your file system to stability, even at the cost of discarding corrupt data. Nothing can magically restore corrupt data to an uncorrupt state. The alternative--not using chkdsk--just means you would continue to corrupt additional data by working with an unhealthy volume. Chkdsk may restore files depending on what is wrong with the volume, but that's never guaranteed. No tool can work miracles. If the volume is damaged enough, nothing can repair it from RAW back to NTFS. You'll have to use file recovery tools or start over.
  8. This might actually be on the provider's side. I once deleted about 100TB from Google Drive and it took months for the data to actually be removed from Google's servers. It didn't matter, since I didn't have a limit, but you might be running into a similar situation here. Contact your administrator and see if there is something they can do.
  9. Those are generally network errors, not I/O errors in the traditional sense. Something is probably interfering with your connection to Google's servers. You'll need to troubleshoot the network connection. Logs might help to pin down exactly what is happening.
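     A quick first check, as a sketch using built-in PowerShell (www.googleapis.com is just an example Google endpoint here, not necessarily the exact host CloudDrive talks to):

         Test-NetConnection www.googleapis.com -Port 443

     If the TCP test fails or only succeeds intermittently, the problem is somewhere in your network path rather than in CloudDrive itself.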
  10. That's a good question, and one probably better asked in the DrivePool forum, because it's really a DrivePool question. I'm not sure if there is any way to tell DrivePool to prefer reads from a specific drive. I know that it's smart enough to prefer local drives over cloud drives if the same data exists on both, but I'm not aware, off the top of my head, of a way to tell it to prefer a specific drive that falls within the same tier of the built-in hierarchy. That doesn't mean it doesn't exist, though. One workaround that I can think of would be to simply point your applications at the hidden folder on the drive, rather than at the pool itself (sketched below). That would work.
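     To sketch that workaround: DrivePool keeps each member drive's portion of the pool in a hidden folder at the root of that drive, named something like PoolPart.<guid> (the GUID varies per drive). Assuming D: is a hypothetical pool member, you can reveal it with PowerShell:

         Get-ChildItem D:\ -Hidden

     Pointing an application directly at D:\PoolPart.<guid>\YourFolder (a hypothetical path) reads from that specific drive, bypassing DrivePool's own read selection. Just don't move or rename anything inside that DrivePool expects to manage.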
  11. Sure thing. And don't be afraid to test the DNS servers provided by your ISP as well. They're likely to be the most local servers for you, and the most likely to provide you with efficient routing to Google--despite the other flaws of ISP-provided DNS. No, I think that's fine. I used DrivePool to consolidate when I recognized this problem myself. You can always click the little arrows next to the status bar in the DrivePool UI to give the transfer a higher priority if you're doing other things on the drives. That may help speed up the process somewhat. Just make sure that they aren't all caching on the same physical drive. Move the other two to another cache drive, even if you're only using them for duplication. That's fine. Any time.
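     One quick way to compare resolvers, as a sketch using built-in PowerShell (8.8.8.8 is just an example third-party server; substitute your ISP's resolver address in the second command):

         Resolve-DnsName www.googleapis.com -Server 8.8.8.8
         Resolve-DnsName www.googleapis.com -Server <your ISP's DNS server>

     Different resolvers can hand back different Google frontend addresses, which can change how efficiently you're routed to Google's servers.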
  12. So, to be clear, there isn't anything that CloudDrive can do that should ever really cause pixelation. That just isn't how digital video works. Buffering, maybe, but not pixelation. That would be a Plex thing, and a result of how Plex is encoding or decoding the video. The drive throughput is either going to work or not. It's going to be fast enough to keep up, or not. It won't pixelate the video; it just won't work. I suppose Plex, as a client, might display pixelation if it isn't able to get data fast enough, as a sort of stop-gap to keep the stream going--so that might be why you're seeing that. The settings I gave you are the ones I use, and I am able to stream 4K video at original quality with them. So that's effectively all the settings we can tweak within CloudDrive--at least in the short term. There are some tweaks we might be able to make to optimize for you later, but those should be working, and they're a good starting point for your connection speed. At this point, it's probably best to start looking at the network, if you've ruled out the possibility that there is some issue with the file. Play with your DNS servers to see if you're being sent to a non-optimal Google server, and check your network settings across the board. You should easily be able to hit 400Mbps or so with the CloudDrive settings I gave you. If you aren't, there is something wrong. As far as the maximum chunk size goes, there isn't any way to change that without recreating the drive. Though you may want to consider that anyway, as you cannot use chkdsk on 100TB volumes, and it's a lot less optimal to use 3 separate CloudDrive drives than 1 drive partitioned into multiple volumes. In fact, if all three of those drives are caching to the same SSD, that might be the cause of all of your issues here as well. The cache can be very disk-usage heavy, and I don't generally recommend more than a single virtual drive caching to a single physical drive. I had problems with that myself. Note that a single 1PB CloudDrive drive can be divided up into many volumes, and all of them can share a single cache. It's probably better to make these changes now, while there is still relatively little on the drives, than later, when it could take months. Each volume should be smaller than 60TB to work with Volume Shadow Copy and chkdsk (see the sketch below).
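     If you do rebuild as a single large CloudDrive split into multiple sub-60TB volumes, the partitioning step might look like the following sketch, using standard PowerShell storage cmdlets (the disk number 2, the 50TB size, and the label are example values, and this assumes the CloudDrive disk is already initialized):

         New-Partition -DiskNumber 2 -Size 50TB -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "CloudVol1"

     Repeat with a new label for each volume until the drive is fully allocated; at 50TB apiece, every volume stays comfortably under the 60TB chkdsk/VSS limit.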
  13. I mostly meant the chunk size, but drive structure would be things like chunk size, cluster size, sector size, file system type. The things you chose when you created the drive. But really only the chunk size is relevant here.
  14. Some of this may seem counter-intuitive, but try this: drop your download threads to ten, drop your prefetch forward window to 150-175MB, raise your prefetch trigger to 15MB, and drop the prefetch time window to 10 seconds. Try that, test it, and report back. Also, what is your drive structure? Most importantly, what is the chunk size that you specified at drive creation? Also, to rule out one more factor, have you tried playing the files off of the CloudDrive with a local media player like VLC?
  15. That simply isn't true. Are you sure that you aren't running into some sort of I/O issue? CloudDrive will upload without restriction as soon as your upload threshold is met in the performance settings. Leave "Background I/O" enabled to ensure that writes are prioritized over reads in Windows' I/O, and see if that fixes your problem. Or try disabling it if it's already enabled and see if other processes are simply getting in the way. I know this, btw, because I transferred 90TB from one drive to another, and my cache was full for months. So I know from experience that a full cache does not throttle upstream performance.