Everything posted by srcrist

  1. It's at the very bottom of the main UI window. Just below the drive status indicator bar.
  2. As steffanmand said, that was a known bug in the 1165 release. It's fixed in the latest BETAs. Just use the link he provided. It'll be in the next stable version.
  3. Haven't seen it give that error before. Looks like a problem with the uninstaller cache. Submit an official ticket at https://stablebit.com/Contact. You might just be able to use the installer to repair the install and then uninstall.
  4. There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider, and it is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain. As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design; other tools like rclone would be required to accomplish that. My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  5. You can find 1145 here: http://dl.covecube.com/CloudDriveWindows/beta/download/
  6. You should definitely run the troubleshooter and submit an official ticket though, before you downgrade, if you haven't already done so. You can find the troubleshooter here: http://wiki.covecube.com/StableBit_Troubleshooter
  7. Uninstall, then use the .1145 installer. The other way may work, but uninstall/reinstall is always cleaner.
  8. So the short answer is that a checksum mismatch error ostensibly indicates unrecoverable data corruption on the provider. That being said, 1165 was giving me checksum errors on one drive, and downgrading to .1145 no longer gave the errors, even though it has the same checksum verification system, which was added in .1114. So if you're on 1165, it might actually be related to all of the other problems that people are seeing with 1165 right now. Try downgrading to .1145 and seeing if that makes them go away. If they persist, though, there is no recovery. That error simply means that the data was modified on/by your provider. If the data has genuinely been changed and corrupted, it cannot be fixed. It would be advisable to start copying the data off of that volume and onto a new one. Note that all of the other genuine corruptions were accompanied by a service problem at Google, and I have not heard of one since July. So there is cause to be optimistic that you're being given false errors.
  9. I know that they are actively working on the problems with .1165. I spoke with Christopher about it over the weekend/Monday. For now, it is safe to simply downgrade to .1145 to resolve the issues. If you are getting these errors, submit a troubleshooter report (BEFORE DOWNGRADING TO .1145) with full details while the issue is happening. It should help them to narrow down the cause. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter If you need the installer for .1145, you can find it here: http://dl.covecube.com/CloudDriveWindows/beta/download/
  10. From what I'm reading here, no. That isn't right. So, again, the process moving forward will depend on what, exactly, you are trying to accomplish, which is something that's still a bit unclear. If you simply want some data redundancy, that can be accomplished as easily as creating a pool with DrivePool, adding enough space to duplicate all of your data, and then enabling 2x duplication. That will place duplicate copies of your data across all drives, but it will do so algorithmically. That means that the two copies of file 1 may be on drives A and B, the two copies of file 2 may be on drives B and C, and the two copies of file 3 may be on A and C. The files will all be duplicated, but it will be somewhat random where the duplication is placed. If your needs require more control over where the data is placed, and you need, for example, a pool of drives A, B, and C that is identically mirrored by a pool containing drives D, E, and F (because, for example, drives A, B, and C are on one cloud account, while drives D, E, and F are on another, and you want a copy of the data on each cloud account), then you'll need to create a nested pool setup wherein pool ABC and pool DEF are inside of another pool, Z. Then you enable duplication on pool Z, which will duplicate pool ABC with pool DEF. Note: this process is complicated, and may not be necessary for you. If all you want is some data redundancy, you can just create a pool and enable duplication. You will, yes, have to move all of the existing data to the PoolPart folder that will be created when the drives are added to a pool, or else the data will not appear within the pool. That's simply how DrivePool works. All of the pool data is stored within a hidden PoolPart directory on the drive. There is no circumstance where you would need to remove a drive from the pool, and certainly no need to delete any of the drives, as a part of this process. You just need to create the pool structure that meets your needs, move the data to where it needs to be in order to be seen within the pool, and enable duplication. If you are still unclear about how DrivePool operates, and what it actually does, I would strongly encourage you to read the user manual here: https://stablebit.com/Support/DrivePool/2.X/Manual
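To make the nested-pool layout above a little more concrete, here is a minimal illustrative sketch in Python. It is purely a diagram of the structure described in that post; the drive letters A-F and the pool names ABC, DEF, and Z are the hypothetical examples from the post, not anything DrivePool itself exposes:

```python
# Illustration only: the nested-pool layout described above.
# Pools ABC and DEF (each made of three drives) are themselves the
# only two members of a top-level pool Z, and 2x duplication is
# enabled on Z -- so one copy of every file lands in each sub-pool.
pools = {
    "Z": {                        # top-level pool; duplication enabled here
        "ABC": ["A", "B", "C"],   # e.g. drives on cloud account #1
        "DEF": ["D", "E", "F"],   # e.g. drives on cloud account #2
    }
}

def duplicate_targets(top_level_pool: str) -> list[str]:
    """With 2x duplication on the top-level pool, each sub-pool holds one copy."""
    return list(pools[top_level_pool].keys())

print(duplicate_targets("Z"))     # => ['ABC', 'DEF']
```

The point is simply that because Z's only members are the two sub-pools, enabling duplication on Z forces one complete copy of the data into each of them.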
  11. Sort of depends on what, exactly, you want to accomplish. You could simply add new drives to DrivePool and enable duplication if you don't care where the duplicates are placed, and it'll handle the duplication in the background. If you wanted to create a copy of your existing pool you'd have to nest the pools. You can find instructions for that here (sort by date to make it easier to read):
  12. Gotcha. Well, DrivePool can do this relatively easily. So that's one option. But any sort of drive or folder sync tool should be able to do the trick. If you throttle CloudDrive's upload to around 70 Mbps, you'll never hit Google's daily upload cap. It's generally a good idea to just do that no matter what.
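To put rough numbers on that 70 Mbps figure: it falls out of Google's per-user daily upload limit. A quick sketch of the arithmetic in Python, assuming the commonly cited cap of roughly 750 GB per day (check Google's current documentation for the exact number):

```python
# Rough arithmetic behind the ~70 Mbps upload throttle suggestion.
# Assumes a per-user upload cap of roughly 750 GB/day, the figure
# commonly cited for Google Drive when these posts were written.

DAILY_CAP_BYTES = 750 * 10**9    # 750 GB/day, decimal gigabytes
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_second = DAILY_CAP_BYTES / SECONDS_PER_DAY
megabits_per_second = bytes_per_second * 8 / 10**6

print(f"~{megabits_per_second:.1f} Mbps sustained reaches the cap")
# => ~69.4 Mbps, which is why ~70 Mbps is the usual ballpark throttle
```

Since uploads are rarely perfectly continuous, a throttle in that neighborhood keeps you at or just under the daily limit.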
  13. Is this content on CloudDrive drives, or are you trying to copy existing uploaded native content from one google drive to another? I'm a little unclear on the goal here.
  14. I'm afraid that I'm not familiar with the error you're seeing, so I don't have a lot of advice to give about how to solve it or work around it. You should, I think, be able to uninstall and reinstall CloudDrive without impacting the cache space, or uninstall and roll back to a previously working version if you know of one. But I would generally just wait to hear back from Christopher via the ticket system before proceeding.
  15. I would suggest that both of you submit an official ticket here: https://stablebit.com/Contact Then run the troubleshooter located here, and attach the ticket number for review: http://wiki.covecube.com/StableBit_Troubleshooter
  16. Because of the way DrivePool works, you could save yourself a SMALL headache by placing a single volume in a pool and pointing Plex and your other applications at the pool right from the start. But the process to convert a standalone drive to a pool isn't terribly difficult later, either. So it's up to you. If you create a pool from a single volume right from the start, adding additional volumes and drives is as easy as creating them and clicking the add button in DrivePool later. If you don't create the pool until later, you'll simply have to move all of the existing data on the drive to the hidden pool folder when you want to use a pool.
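To illustrate the conversion step described above: when a drive is added to a pool, DrivePool creates a hidden PoolPart.* folder on it, and only data inside that folder is visible in the pool. A minimal sketch of the move, assuming the drive is D: and already has exactly one PoolPart folder (the drive letter is a placeholder; verify the real folder name and close any open handles to the files before moving anything):

```python
# Sketch: move a drive's existing data into its hidden PoolPart folder
# so that it shows up inside the DrivePool pool. The drive letter is a
# placeholder, and the PoolPart.* folder name is found at runtime.
import shutil
from pathlib import Path

drive = Path("D:/")                          # hypothetical drive already added to the pool
poolpart = next(drive.glob("PoolPart.*"))    # hidden folder DrivePool created on the drive

for item in list(drive.iterdir()):
    # Skip the PoolPart folder itself and Windows system folders.
    if item == poolpart or item.name in ("System Volume Information", "$RECYCLE.BIN"):
        continue
    shutil.move(str(item), str(poolpart / item.name))
    print(f"moved {item.name} into {poolpart.name}")
```

Because the move stays on the same volume it is just a rename, so it completes almost instantly regardless of how much data is involved.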
  17. The important thing is that each volume (read: partition) should be less than 60TB, so that Volume Shadow Copy and chkdsk can operate on the volume to fix problems. In light of some of the Google Drive problems in March, some of us switched to using even smaller volumes on the theory that the smaller the volume, the less data might be corrupted by an outage. But the changes in the recent betas, like the file system data redundancy, should ideally make this a non-issue today. Just keep each volume under 60TB. There is not, in any case, any significant performance difference between using 25TB volumes or 50TB volumes combined with DrivePool. As for the cache drive, it depends on what you mean by important. CloudDrive can be quite I/O intensive, and you'll notice a significant difference between the cache performance of an SSD and a spinning rust drive. This will be particularly noticeable if you will be both writing and reading to and from the drive simultaneously. Will it work on an HDD? Probably. Will an SSD be markedly better? Absolutely. SSDs are cheap these days; I would suggest picking one up. It doesn't need to be a fancy Samsung 960 EVO or Pro or anything. As long as you're using a processor with AES-NI, the resource impact of the StableBit software should be negligible. DrivePool simply forwards I/O requests to the underlying drives, so its impact is effectively non-existent, and, setting aside the obvious I/O needs of the cache, CloudDrive's actual resource requirements are all around encryption and decryption--all of which should be offloaded to AES-NI, as long as you have it. I think that using CloudDrive on the NUC is wise. There should be no major issue sharing CloudDrive volumes or DrivePool pools via SMB to the NAS.
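If you want a quick way to audit existing volumes against that 60TB guideline, here is a small sketch. The drive letters scanned are just examples, and volumes mounted to folders rather than letters won't be picked up:

```python
# Sketch: flag any mounted volume at or above the ~60 TB size where
# Volume Shadow Copy / chkdsk stop being practical to use.
import shutil
import string

LIMIT_BYTES = 60 * 10**12    # 60 TB, decimal terabytes

for letter in string.ascii_uppercase:
    root = f"{letter}:\\"
    try:
        usage = shutil.disk_usage(root)
    except OSError:
        continue             # drive letter not in use
    status = "over the 60TB guideline" if usage.total >= LIMIT_BYTES else "ok"
    print(f"{root} {usage.total / 10**12:.1f} TB - {status}")
```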
  18. Note, again, that these errors generally indicate a connection issue. So if you're having trouble with forced unmounts and I/O errors, the first thing you want to look at is the stability of the network connection between your PC and Google's servers.
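If you want a crude way to watch for intermittent drops while troubleshooting, here is a minimal sketch that just tests TCP reachability of a Google endpoint (www.googleapis.com is only an example host here, and this says nothing about sustained throughput or the health of the API itself):

```python
# Sketch: very rough connectivity check toward Google, repeated a few
# times to catch intermittent failures. Only tests that a TCP
# connection can be opened; it is not a measure of bandwidth.
import socket
import time

HOST, PORT = "www.googleapis.com", 443    # example endpoint

for attempt in range(1, 6):
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            elapsed_ms = (time.monotonic() - start) * 1000
            print(f"attempt {attempt}: ok ({elapsed_ms:.0f} ms)")
    except OSError as exc:
        print(f"attempt {attempt}: failed ({exc})")
    time.sleep(2)
```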
  19. Unclear. You need to submit a proper ticket for this to support. Do so here: https://stablebit.com/Contact It's possible that a memory error might have fundamentally corrupted something; it's also possible that it might be fixed by simply deleting the data on the PC. Let official support walk you through that, to be sure. This is true, but remember that this limitation is per volume, not per drive. His volumes could be smaller than 60TB on a 256TB drive.
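As a quick worked example of that per-volume vs. per-drive distinction, using the 256TB drive mentioned above:

```python
# Sketch: how many sub-60TB volumes a 256TB CloudDrive would need.
# The 256TB figure is just the example drive size from the post above.
import math

drive_tb = 256
per_volume_limit_tb = 60

volumes_needed = math.ceil(drive_tb / per_volume_limit_tb)
size_each_tb = drive_tb / volumes_needed

print(f"{volumes_needed} volumes of ~{size_each_tb:.0f} TB each")
# => 5 volumes of ~51 TB each, all comfortably under the 60TB guideline
```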
  20. In that case, I'm not sure. There might be a handle issue causing a problem, but chkdsk generally asks you if you want to close all handles if it encounters one. You could try /X and see if it helps. I would definitely open an actual ticket with Covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on. Especially considering some of the recent problems.
  21. Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. That's getting muddled in this thread because people are conflating it with the file system corruption that has been leading to data loss and RAW mounts--which is something different entirely. File system damage of the kind Scanner flags is a normal occurrence on any NTFS file system and can be caused by a simple power loss or a software crash. A drive showing up RAW indicates a much larger problem. Both are technically file system damage. As far as the drive showing up damaged in Scanner but not chkdsk, I've only personally seen that get wonky with volumes that chkdsk does not properly support. How large is the drive you're trying to use? Is it larger than 60TB total? chkdsk does not support volumes 60TB or larger. I've seen Scanner throw warnings simply because chkdsk cannot properly manage the volume size, and that would explain your errors when running chkdsk as well.
  22. When was the drive created? After March 13th? My drive did not exhibit corruption for weeks afterwards, but it was still corrupted by the same outage. The answer to your question remains, in any case, that the way a drive becomes corrupted when the data is verified is because the data is modified in place on Google's servers after the fact. That's how it happens, we just don't know exactly why. It obviously shouldn't happen. But I'm really not sure what can prevent it.
  23. To be clear: did you re-upload data to the same drive?
  24. Because on March 13th Google had some sort of service disruption, and whatever their solution was to that problem appears to have modified some of the data on their service. So any drive that existed before March 13th appears to have some risk of having been corrupted. Nobody knows why, and Google obviously isn't talking about what they did. My best guess is that they rolled some amount of data back to a prior state, which would corrupt the drive if some chunks were a version from mid-February while others were a newer version from March--assuming the data was modified during that time. But ultimately nobody can answer this question. Nobody seems to know exactly what happened--except for Google. And they aren't talking about it. No amount of detaching and reattaching a drive, or changing the settings in CloudDrive, is going to change how the data exists on Google's servers or protect you from anything that Google does that might corrupt that data in the future. It's simply one of the risks of using cloud storage. All anyone here can do is tell you what steps can be taken to try to recover whatever data still exists. And "RAW" is just Windows' way of telling you that the file system is corrupt. RAW just means "I can't read this file system data correctly." RAW isn't actually a specific file system type. It's simply telling you that it's broken. Why it's broken is something we don't seem to have a clear answer for. We just know that nothing was broken before March 13th, and people have had problems since. Drives that I have created since March 13th have also not had issues.