Posts posted by srcrist

  1. There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain.

    As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that. 

    My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it. 

  2. So the short answer is that a checksum mismatch error ostensibly indicates unrecoverable data corruption on the provider. That being said, .1165 was giving me checksum errors on one drive, and downgrading to .1145 no longer gave the errors, even though it has the same checksum verification system, which was added in .1114. So if you're on .1165, it might actually be related to all of the other problems that people are seeing with .1165 right now. Try downgrading to .1145 and see if that makes the errors go away. 

    If they persist, though, there is no recovery. That error simply means that the data was modified on/by your provider. If the data has genuinely been changed and corrupted, it cannot be fixed. It would be advisable to start copying the data off of that volume and onto a new one. 

    Note that all of the other genuine corruptions were accompanied by a service problem at Google, and I have not heard of one since July. So there is cause to be optimistic that you're being given false errors. 

  3. I know that they are actively working on the problems with .1165. I spoke with Christopher about it over the weekend/Monday. For now, it is safe to simply downgrade to .1145 to resolve the issues. If you are getting these errors, submit a troubleshooter report (BEFORE DOWNGRADING TO .1145) with full details while the issue is happening. It should help them to narrow down the cause. The troubleshooter is located here: http://wiki.covecube.com/StableBit_Troubleshooter

    If you need the installer for .1145, you can find it here: http://dl.covecube.com/CloudDriveWindows/beta/download/

  4. From what I'm reading here, no. That isn't right.

    So, again, the process moving forward will depend on what, exactly, you are trying to accomplish, which is something that's still a bit unclear.

    If you simply want some data redundancy, that can be accomplished as easily as creating a pool with DrivePool, adding enough space to duplicate all of your data, and then enabling 2x duplication. That will place duplicate copies of your data across all drives, but it will do so algorithmically. That means that the two copies of file 1 may be on drives A and B, the two copies of file 2 may be on drives B and C, and the two copies of file 3 may be on drives A and C. The files will all be duplicated, but it will be somewhat random where the duplication is placed.

    If your needs require more control over where the data is placed, and you need, for example, a pool of drives A, B, and C that is identically mirrored by a pool containing drives D, E, and F (because, for example, drives A, B, and C are on one cloud account, while drives D, E, and F are on another, and you want a copy of the data on each cloud account), then you'll need to create a nested pool setup wherein pool ABC and pool DEF are inside of another pool, Z. Then you enable duplication on pool Z, which will duplicate pool ABC with pool DEF. Note: this process is complicated, and may not be necessary for you.

    If all you want is some data redundancy, you can just create a pool and enable duplication. You will, yes, have to move all of the existing data into the PoolPart folder that will be created when the drives are added to a pool, or else the data will not appear within the pool. That's simply how DrivePool works. All of the pool data is stored within a hidden PoolPart directory on each drive.

    There is no circumstance where you would need to remove a drive from the pool, and certainly no need to delete any of the drives, as a part of this process. You just need to create the pool structure that meets your needs, move the data to where it needs to be in order to be seen within the pool, and enable duplication.
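
    The hidden folder's actual name includes a unique identifier (PoolPart.<id>), so it's best to locate it rather than hard-code it. Here's a minimal sketch of the move in Python, under the assumptions that D: is a hypothetical pooled drive and that the PoolPart.* folder already exists because the drive was added to a pool:

        # Minimal sketch: move existing data on a pooled drive into its hidden
        # PoolPart.* folder so DrivePool can see it. D: is hypothetical; run
        # from an elevated prompt so hidden/system folders are accessible.
        import glob
        import os
        import shutil

        drive_root = "D:\\"
        poolparts = glob.glob(os.path.join(drive_root, "PoolPart.*"))
        if not poolparts:
            raise SystemExit("No PoolPart folder found - is this drive in a pool?")
        poolpart = poolparts[0]

        for entry in os.listdir(drive_root):
            src = os.path.join(drive_root, entry)
            if os.path.abspath(src) == os.path.abspath(poolpart):
                continue  # don't move the pool folder into itself
            if entry.lower() in ("system volume information", "$recycle.bin"):
                continue  # skip Windows system folders
            shutil.move(src, os.path.join(poolpart, entry))

    After the move, have DrivePool re-measure the pool so it picks up the new contents.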

    If you are still unclear about how DrivePool operates, and what it actually does, I would strongly encourage you to read the user manual here: https://stablebit.com/Support/DrivePool/2.X/Manual

  5. Sort of depends on what, exactly, you want to accomplish. You could simply add new drives to DrivePool and enable duplication if you don't care where the duplicates are placed, and it'll handle the duplication in the background. If you wanted to create a copy of your existing pool you'd have to nest the pools. You can find instructions for that here (sort by date to make it easier to read): 

     

  6. 1 hour ago, Farmand said:

    Sorry.

    My content is already on CloudDrive but would need to be copied to a new CloudDrive with duplication enabled to make the data more redundant and secure 

    Gotcha. Well, DrivePool can do this relatively easily. So that's one option. But any sort of drive or folder sync tool should be able to do the trick. If you throttle CloudDrive to around 70 Mbps, you'll never hit Google's daily upload cap. It's generally a good idea to just do that no matter what. 
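
    For what it's worth, here is the rough arithmetic behind that throttle figure, assuming the commonly cited 750 GB/day upload quota on Google's side (an assumption about Google's limit, not something CloudDrive itself reports):

        # Back-of-the-envelope math behind the ~70 Mbps rule of thumb, assuming
        # a 750 GB/day upload quota (decimal GB). This quota is an assumption,
        # not a value read from CloudDrive.
        SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 s
        QUOTA_BYTES = 750e9              # 750 GB per day

        max_rate_mbps = QUOTA_BYTES * 8 / SECONDS_PER_DAY / 1e6
        print(f"Sustained rate that just reaches the quota: {max_rate_mbps:.1f} Mbps")
        # Prints roughly 69.4 Mbps. Since uploads are rarely sustained 24/7,
        # throttling to ~70 Mbps keeps you at or under the daily cap in practice.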

  7. I'm afraid that I'm not familiar with the error you're seeing, so I don't have a lot of advice to give about how to solve it or work around it. You should, I think, be able to uninstall and reinstall CloudDrive without impacting the cache space, or uninstall and roll back to a previously working version if you know of one. But I would generally just wait to hear back from Christopher via the ticket system before proceeding. 

  8. 1 hour ago, davidkain said:

    One final question as a follow-up to your answer here: Would it be reasonable to begin with a single 50TB cloud drive, and then expand to a DrivePool managed multi-drive setup if necessary? Or, would I save myself a lot of headache by setting up a pool of 50TB cloud drives from the get-go?

     

    Because of the way DrivePool works, you could save yourself a SMALL headache by placing a single volume in a pool and pointing Plex and your other applications at the pool right from the start. But the process to convert a standalone drive to a pool isn't terribly difficult later, either. So it's up to you. If you create a pool from a single volume right from the start, adding additional volumes and drives is as easy as creating them and clicking the add button in DrivePool later. If you don't create the pool until later, you'll simply have to move all of the existing data on the drive to the hidden pool folder when you want to use a pool. 

  9. 6 hours ago, davidkain said:

    I've seen some differing opinions on how to setup drives in the cloud. Ideally I can keep my existing folder structure, but I've seen some folk say that it's better to have several smaller drives (partitions?) rather than a few big ones (i.e. better to have 10 10TB partitions than a single 100TB drive). If that's correct, I'm assuming I'd need to add StableBit's DrivePool to the mix, right?

    The important thing is that each volume (read: partition) should be less than 60TB, so that Volume Shadow Copy and chkdsk can operate on the volume to fix problems. In light of some of the Google Drive problems in March, some of us switched to even smaller volumes, on the theory that the smaller the volume, the less data might be corrupted by an outage. But the changes in the recent betas, like the file system data redundancy, should ideally make this a non-issue today. Just keep each volume under 60TB. There is not, in any case, any significant performance difference between using 25TB volumes or 50TB volumes combined with DrivePool.

    6 hours ago, davidkain said:
    • A few threads have mentioned using a local SSD for "caching." I don't have a spare SSD, and I'm wondering how important this option is to performance and/or longevity.

    It depends on what you mean by important. CloudDrive can be quite I/O intensive, and you'll notice a significant difference between the cache performance of an SSD vs a spinning rust drive. This will be particularly noticeable if you will be both writing and reading to and from the drive simultaneously. Will it work on an HDD? Probably. Will an SSD be markedly better? Absolutely. SSDs are cheap these days. I would suggest picking one up. It doesn't need to be a fancy Samsung EVO 960 Pro or anything.

    6 hours ago, davidkain said:

    What's the best way to leverage my existing hardware in this new configuration? Which device is best suited to run the StableBit software(s)? Should my NAS continue to run Sonarr/NZBGet? Right now I'm leaning toward running the Stablebit suite on the Intel NUC, but I wasn't sure if that could overtax the system already running Plex Server.

    As long as you're using a processor with AES-NI, the resource impact of the StableBit software should be negligible. DrivePool simply forwards I/O requests to the underlying drives, so its impact is effectively non-existent, and, setting aside the obvious I/O needs of the cache, CloudDrive's actual resource requirements are all around the encryption and decryption--all of which should be offloaded to AES-NI, as long as you have it. I think that using CloudDrive on the NUC is wise. There should be no major issue sharing CloudDrive volumes or DrivePool pools via SMB to the NAS. 
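
    If you want to double-check AES-NI support on the NUC before committing, here's a minimal sketch using the third-party py-cpuinfo package (my choice of tool, not anything the StableBit software requires):

        # Minimal AES-NI check using py-cpuinfo (pip install py-cpuinfo).
        # Any tool that lists CPU feature flags would work just as well.
        import cpuinfo

        flags = cpuinfo.get_cpu_info().get("flags", [])
        if "aes" in flags:
            print("AES-NI present: encryption overhead should be negligible.")
        else:
            print("No AES-NI flag: expect a noticeable CPU cost for encryption.")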

  10. On 5/28/2019 at 3:36 AM, Neaoxas said:
    Is there any way to recover this drive? (Please say yes!)

     

    Unclear. You need to submit a proper ticket for this to support. Do so here: https://stablebit.com/Contact

    It's possible that a memory error might have fundamentally corrupted something, but it's also possible that it might be fixed by simply deleting the data on the PC. Let official support walk you through that, to be sure. 

     

    On 5/28/2019 at 6:39 AM, steffenmand said:

    You should never go above 60 TB for a drive as chkdsk won't work :(

    This is true, but remember that this limitation is per volume, not per drive. His volumes could be smaller than 60TB on a 256TB drive. 

  11. 3 hours ago, Digmarx said:

    The volume is 16TB, encrypted, Google Drive (GSuite). The cache disk is a 240GB SSD, FWIW.

    So I've had some progress with chkdsk /offlinescanandfix. The only errors it found were free space marked as allocated in both the MFT and the volume bitmap, and then I guess it errored out with "An unspecified error occurred (6e74667363686b2e 1475)"

    A cursory search for the error code doesn't turn up anything of substance, but I'll look a bit harder after work.

    In that case, I'm not sure. There might be a handle issue causing a problem, but chkdsk generally asks you if you want to close all handles if it encounters one. You could try /X and see if that helps. I would definitely open an actual ticket with Covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on, especially considering some of the recent problems. 
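
    For reference, here's a minimal sketch of kicking that off from a script, with a hypothetical drive letter; /x forces the volume to dismount first, so run it from an elevated prompt and close anything using the drive:

        # Sketch: run chkdsk with /f /x against a CloudDrive volume.
        # E: is a hypothetical drive letter; requires an elevated prompt.
        import subprocess

        volume = "E:"
        result = subprocess.run(
            ["chkdsk", volume, "/f", "/x"],
            capture_output=True,
            text=True,
        )
        print(result.stdout)
        if result.returncode != 0:
            print(f"chkdsk exited with code {result.returncode}")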

  12. Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. This thread is getting quite confusing because people are conflating that warning with the file system corruption that has been leading to data loss and RAW mounts--which is something else entirely. The sort of "file system damage" Scanner flags is a normal part of life with any NTFS file system and can be caused by simple power loss or a software crash. A drive showing up RAW indicates a much larger problem. Both are technically file system damage. 

    As far as the drive showing up damaged in Scanner but not chkdsk, I've only personally seen that get wonky with volumes that chkdsk does not properly support. How large is the volume you're trying to check? Is it 60TB or larger? chkdsk does not support volumes of 60TB or larger. I've seen Scanner throw warnings simply because chkdsk cannot properly handle the volume size, and that would explain your errors when running chkdsk as well. 
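
    If you're not sure of the volume size, here is a quick way to check it against that limit (drive letter hypothetical):

        # Report a volume's total size and compare it to the ~60TB chkdsk limit.
        # E: is a hypothetical drive letter.
        import shutil

        volume = "E:\\"
        total_tb = shutil.disk_usage(volume).total / 1e12  # decimal TB
        print(f"{volume} is {total_tb:.1f} TB")
        if total_tb >= 60:
            print("At or above 60TB: chkdsk (and VSS) won't handle this volume.")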

  13. 9 hours ago, Bowsa said:

    This happened this month, not in March

    When was the drive created? After March 13th?

    My drive did not exhibit corruption for weeks afterwards, but it was still corrupted by the same outage.

    The answer to your question remains, in any case, that a drive becomes corrupted, even with upload verification, because the data is modified in place on Google's servers after the fact. That's how it happens; we just don't know exactly why. It obviously shouldn't happen, but I'm really not sure what can prevent it.

  14. On 5/21/2019 at 11:04 AM, Bowsa said:

    How does a Virtual Volume even get damaged? It makes no sense to me. I had the drive running normally, no power outages, and upload verification was on.

    So why does StableBit connecting 200 times (user-rate limit exceeded) cause a drive to even become RAW in the first place, what's the point of Upload Verification? In some software it can still detect the drive as NTFS, but it still makes no sense. It's a virtual drive reliant on Chunks (that should be safe due to upload verification). I detached the drive, and mounted it again, and the Drive was still RAW.

    How do chunks even become RAW in the first place, it's like mounting the drive on another computer and it being raw too.

    Because on March 13th Google had some sort of service disruption, and whatever their solution was to that problem appears to have modified some of the data on their service. So any drive that existed before March 13th appears to have some risk of having been corrupted. Nobody knows why, and Google obviously isn't talking about what they did. My best guess is that they rolled some amount of data back to a prior state, which would corrupt the drive if some chunks were a version from mid-February while others were a newer version from March--assuming the data was modified during that time. 

    But ultimately nobody can answer this question. Nobody seems to know exactly what happened--except for Google. And they aren't talking about it. 

    No amount of detaching and reattaching a drive, or changing the settings in CloudDrive, is going to change how the data exists on Google's servers or protect you from anything that Google does that might corrupt that data in the future. It's simply one of the risks of using cloud storage. All anyone here can do is tell you what steps can be taken to try and recover whatever data still exists. 

    And "RAW" is just Windows' way of telling you that the file system is corrupt. RAW just means "I can't read this file system data correctly." RAW isn't actually a specific file system type. It's simply telling you that it's broken. Why it's broken is something we don't seem to have a clear answer for. We just know that nothing was broken before March 13th, and people have had problems since. Drives that I have created since March 13th have also not had issues. 
