Covecube Inc.

srcrist

Members
  • Content Count

    466
  • Joined

  • Last visited

  • Days Won

    34

Everything posted by srcrist

  1. It's at the very bottom of the main UI window. Just below the drive status indicator bar.
  2. As steffanmand said, that was a known bug in the 1165 release. It's fixed in the latest BETAs. Just use the link he provided. It'll be in the next stable version.
  3. Haven't seen it give that error before. Looks like a problem with the uninstaller cache. Submit an official ticket at https://stablebit.com/Contact. You might just be able to use the installer to repair the install and then uninstall.
  4. There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain. As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force m
  5. You can find 1145 here: http://dl.covecube.com/CloudDriveWindows/beta/download/
  6. You should definitely run the troubleshooter and submit an official ticket though, before you downgrade, if you haven't already done so. You can find the troubleshooter here: http://wiki.covecube.com/StableBit_Troubleshooter
  7. Uninstall, then use the .1145 installer. The other way may work, but uninstall/reinstall is always cleaner.
  8. So the short answer is that a checksum mismatch error ostensibly indicates unrecoverable data corruption on the provider. That being said, 1165 was giving me checksum errors on one drive, and downgrading to .1145 no longer gave the errors, even though it has the same checksum verification system, which was added in .1114. So if you're on 1165, it might actually be related to all of the other problems that people are seeing with 1165 right now. Try downgrading to .1145 and seeing if that makes them go away. If they persist, though, there is no recovery. That error simply means that the da
  9. I know that they are actively working on the problems with .1165. I spoke with Christopher about it over the weekend/Monday. For now, it is safe to simply downgrade to .1145 to resolve the issues. If you are getting these errors, submit a troubleshooter report (BEFORE DOWNGRADING TO .1145) with full details while the issue is happening. It should help them to narrow down the cause. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter If you need the installer for .1145, you can find it here: http://dl.covecube.com/CloudDriveWindows/beta/download/
  10. From what I'm reading here, no. That isn't right. So, again, the process moving forward will depend on what, exactly, you are trying to accomplish, which is something that's still a bit unclear. If you simply want some data redundancy, that can be accomplished as easily as creating a pool with DrivePool, adding enough space to duplicate all of your data, and then enabling 2x duplication. That will place duplicate copies of your data across all drives, but it will do so algorithmically. That means that the two copies of file 1 may be on drives A and B, the two copies of file 2 may be
  11. Sort of depends on what, exactly, you want to accomplish. You could simply add new drives to DrivePool and enable duplication if you don't care where the duplicates are placed, and it'll handle the duplication in the background. If you wanted to create a copy of your existing pool you'd have to nest the pools. You can find instructions for that here (sort by date to make it easier to read):
  12. Gotcha. Well, DrivePool can do this relatively easily. So that's one option. But any sort of drive or folder sync tool should be able to do the trick. If you throttle CloudDrive to around 70mbps, you'll never hit Google's usage cap. It's generally a good idea to just do that no matter what.
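    As a rough sanity check on that throttle figure, here is a sketch of the arithmetic. The 750 GB/day figure is Google's commonly cited per-user daily upload limit; it is an assumption here, not something stated above:

    ```python
    # Rough arithmetic: sustained upload at a given throttle vs. an
    # assumed daily upload cap (Google's commonly cited 750 GB/day).

    MBPS = 70                    # throttle in megabits per second
    CAP_GB_PER_DAY = 750         # assumed daily upload cap (decimal GB)

    bytes_per_second = MBPS * 1_000_000 / 8          # 8 bits per byte
    gb_per_day = bytes_per_second * 86_400 / 1e9     # 86,400 seconds in a day

    print(f"{MBPS} Mbps sustained = {gb_per_day:.0f} GB/day "
          f"(assumed cap = {CAP_GB_PER_DAY} GB/day)")
    ```

    A 24/7 fully saturated 70 Mbps link works out to roughly 756 GB/day, right at that ceiling; in practice uploads rarely saturate the link around the clock, which is why ~70 Mbps is a comfortable throttle.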
  13. Is this content on CloudDrive drives, or are you trying to copy existing uploaded native content from one google drive to another? I'm a little unclear on the goal here.
  14. I'm afraid that I'm not familiar with the error you're seeing, so I don't have a lot of advice to give about how to solve it or work around it. You should, I think, be able to uninstall and reinstall CloudDrive without impacting the cache space, or uninstall and roll back to a previously working version if you know of one. But I would generally just wait to hear back from Christopher via the ticket system before proceeding.
  15. I would suggest that both of you submit an official ticket here: https://stablebit.com/Contact -- then run the troubleshooter located here and attach the ticket number for review: http://wiki.covecube.com/StableBit_Troubleshooter
  16. Because of the way DrivePool works, you could save yourself a SMALL headache by placing a single volume in a pool and pointing Plex and your other applications at the pool right from the start. But the process to convert a standalone drive to a pool isn't terribly difficult later, either. So it's up to you. If you create a pool from a single volume right from the start, adding additional volumes and drives is as easy as creating them and clicking the add button in DrivePool later. If you don't create the pool until later, you'll simply have to move all of the existing data on the drive to the
  17. The important thing is that each volume (read: partition) should be less than 60TB, so that Volume Shadow Copy and Chkdsk can operate on the volume to fix problems. In light of some of the Google Drive problems in March, some of us switched to even smaller volumes, on the theory that the smaller the volume, the less data an outage could corrupt. But the changes in the recent beta, like the file system data redundancy, should, ideally, make this a non-issue today. Just keep each volume under 60TB. There is not, in any case, any significant performance difference between using 25TB vo
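    To make that volume-sizing guideline concrete, a small sketch of the arithmetic. The 256TB total is just an example drive size, not a figure from the post; only the 60TB-per-volume ceiling comes from the advice above:

    ```python
    import math

    # Split a large CloudDrive drive into volumes small enough for
    # Volume Shadow Copy and chkdsk to operate on (guideline: < 60 TB each).

    DRIVE_TB = 256        # example total drive size (hypothetical)
    MAX_VOLUME_TB = 60    # keep each volume under this ceiling

    volumes = math.ceil(DRIVE_TB / MAX_VOLUME_TB)   # minimum volume count
    volume_size = DRIVE_TB / volumes                # size of an even split

    print(f"{DRIVE_TB} TB drive -> {volumes} volumes "
          f"of {volume_size:.1f} TB each")
    ```

    So an example 256TB drive would need at least five volumes of about 51.2TB each to stay safely under the 60TB ceiling.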
  18. Note, again, that these errors generally indicate a connection issue. So if you're having trouble with forced unmounts and I/O errors, the first thing you want to look at is the stability of the network connection between your PC and Google's servers.
  19. Unclear. You need to submit a proper ticket for this to support. Do so here: https://stablebit.com/Contact. It's possible that a memory error might have fundamentally corrupted something; it's also possible that it might be fixed by simply deleting the data on the PC. Let official support walk you through that, to be sure. This is true, but remember that this limitation is per volume, not per drive. His volumes could be smaller than 60TB on a 256TB drive.
  20. In that case, I'm not sure. There might be a handle issue causing a problem, but chkdsk generally asks you if you want to close all handles if it encounters one. You could try /X and see if it helps. I would definitely open an actual ticket with covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on. Especially considering some of the recent problems.
  21. Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. This thread is getting quite confusing because people are conflating that warning with the file system corruption that has been leading to data loss and RAW mounts--which is something different entirely. A drive with "file system damage" in Scanner is a normal part of any NTFS file system a
  22. When was the drive created? After March 13th? My drive did not exhibit corruption for weeks afterwards, but it was still corrupted by the same outage. The answer to your question remains, in any case, that a drive with verified data becomes corrupted because the data is modified in place on Google's servers after the fact. That's how it happens; we just don't know exactly why. It obviously shouldn't happen. But I'm really not sure what can prevent it.
  23. To be clear: did you re-upload data to the same drive?
  24. Because on March 13th Google had some sort of service disruption, and whatever their solution was to that problem appears to have modified some of the data on their service. So any drive that existed before March 13th appears to have some risk of having been corrupted. Nobody knows why, and Google obviously isn't talking about what they did. My best guess is that they rolled some amount of data back to a prior state, which would corrupt the drive if some chunks were a version from mid-February while others were a newer version from March--assuming the data was modified during that time.