
srcrist

Members
  • Posts: 466
  • Joined
  • Last visited
  • Days Won: 36

Reputation Activity

  1. Like
    srcrist got a reaction from Leonardo in Cannot reattach drive   
    When it's attached via USB, it would use the local disk provider--which I believe is a separate format from the FTP provider. I'm not sure if there is a way to do a quick conversion between the two. If you can share your drive from the router as a network drive instead of as an FTP share, both setups could use the local disk provider. I haven't seen Christopher on the forums lately, but if you submit a ticket he can let you know if there is an easier way to convert the data without just creating a new local disk drive and copying it over. 
  2. Like
    srcrist got a reaction from Hurricane Hunia in Google Drive Existing Files   
    Unfortunately, because of the way CloudDrive operates, you'll have to download the data and reupload it again to use CloudDrive. 
    CloudDrive is a block-based solution that creates an actual drive image, chops it up into chunks, and stores those chunks on your cloud provider. CloudDrive's data is not accessible directly from the provider--by design. The reverse of this is that CloudDrive also cannot access data that you already have on your provider, because it isn't stored in the format that CloudDrive requires. 
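    To make the block-based idea a bit more concrete, here's a rough sketch of what chunking a drive image looks like (a toy example with a made-up chunk size and naming scheme, not CloudDrive's actual format):

```python
import os

CHUNK_SIZE = 20 * 1024 * 1024  # made-up chunk size, purely for illustration

def split_drive_image(image_path, out_dir):
    """Split a raw drive image into fixed-size chunks, the way a
    block-based tool stores opaque pieces rather than whole files."""
    os.makedirs(out_dir, exist_ok=True)
    with open(image_path, "rb") as img:
        index = 0
        while True:
            chunk = img.read(CHUNK_SIZE)
            if not chunk:
                break
            # The provider only ever sees opaque blobs like "chunk-000042";
            # nothing about your files or filenames is visible in them.
            with open(os.path.join(out_dir, f"chunk-{index:06d}"), "wb") as out:
                out.write(chunk)
            index += 1
```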
    There are other solutions, including Google's own Google File Stream application, that can mount your cloud storage and make it directly accessible as a drive on your PC. Other similar tools are rClone, ocaml-fuse, NetDrive, etc. There are pros and cons to both approaches. I'll list some below to help you make an informed decision: 
    Block-based Pros:
    • A block-based solution creates a *real* drive (as far as Windows is concerned). It can be partitioned like a physical drive, you can use file-system tools like chkdsk to preserve data integrity, and literally any program that can access any other drive in your PC works natively without any hiccups. You can even use tools like DrivePool or Storage Spaces to combine multiple CloudDrive drives or volumes into one larger pool.
    • A block-based solution enables end to end encryption. An encrypted drive is completely obfuscated both from your provider and from anyone who might access your data by hacking your provider's services. Not even the number of files, let alone the file names, is visible unless the drive is mounted. CloudDrive has built-in encryption that encrypts the data before it is even written to your local disk.
    • A block-based solution also enables more sophisticated kinds of data manipulation, such as the ability to access parts of files without first downloading the entire file. The ability to cache sections of data locally also falls under this category, which can greatly reduce API calls to your provider.
    Block-based Cons:
    • Data is obfuscated even if unencrypted, and cannot be accessed directly from the provider. We already discussed this above, but it's definitely one of the negatives, depending on your use case. The only thing you'll see on your provider is thousands of chunks, each a few dozen megabytes in size. The drive is inaccessible in any way unless mounted by the drivers that decrypt the data and provide it to the operating system.
    • You'll be tethered to CloudDrive for as long as you keep the data on the cloud. Moving the data outside of that ecosystem would require it to again be downloaded and reuploaded in its native format.
    Hope that helps. 
  3. Like
    srcrist got a reaction from Christopher (Drashna) in Sub Folders w/ Radarr + Stablebit   
    CloudDrive essentially cannot hit the API ban because of its implementation of exponential back-off. rClone used to hit the bans because it did not respect the back-off requests Google's servers were sending, though I think they've solved that problem at this point. In any case, don't worry about that with CloudDrive. The only "ban" you'll need to worry about is the ~750GB/day upload threshold. 
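    For the curious, exponential back-off just means waiting progressively longer between retries when the provider throttles you. A generic sketch of the pattern (not CloudDrive's actual code):

```python
import random
import time

class TransientProviderError(Exception):
    """Stand-in for a throttling or server-error response from the provider."""

def call_with_backoff(request, max_retries=8):
    # Double the wait after each throttled attempt, plus a little jitter,
    # which is the back-off behavior Google's servers ask clients for.
    for attempt in range(max_retries):
        try:
            return request()
        except TransientProviderError:
            time.sleep(min(2 ** attempt, 64) + random.random())
    raise RuntimeError("gave up after repeated throttling responses")
```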
  4. Like
    srcrist got a reaction from ssmith1936 in Best practices to shut down before rebooting?   
    Detaching the drives before a reboot will prevent the reindexing process, yes. Though, if you're not on the latest betas, I've noticed that the drives have been far better at recovering from an unclean shutdown than they used to be. 
  5. Like
    srcrist got a reaction from Simon83 in I delete files on the Cloud drive, but the changes are not reflected in the software   
    The "cloud used" amount refers to the space used on your provider, not the amount of data stored on the drive. When you delete something on your CloudDrive drive, it doesn't remove the drive architecture that has already been uploaded from from the provider. So, if it's a 500GB drive and you've already created and uploaded 500GB worth of chunks to the provider, those will remain, just like a real drive, to be used (overwritten) later. This is why you can use recovery software to recover data on your CloudDrive drive just like a physical hard drive. If you want to remove the additional space, you'll need to shrink the drive, which you can do from the "manage drive" options. 
  6. Like
    srcrist got a reaction from Christopher (Drashna) in Drive with Plex updating libraries very slowly   
    Chupa, my library is many thousands of titles and only takes a minute or two to update. Are you, maybe, referring to the initial adding of new media from a directory to your library? It does an analysis scan on the files during that addition that can take some time, but a normal scan where it simply looks for changes and additions should be a matter of minutes, no matter how large your library, even if you don't have the "run partial scan" option enabled. 
  7. Like
    srcrist got a reaction from Christopher (Drashna) in I delete files on the Cloud drive, but the changes are not reflected in the software   
    The "cloud used" amount refers to the space used on your provider, not the amount of data stored on the drive. When you delete something on your CloudDrive drive, it doesn't remove the drive architecture that has already been uploaded from from the provider. So, if it's a 500GB drive and you've already created and uploaded 500GB worth of chunks to the provider, those will remain, just like a real drive, to be used (overwritten) later. This is why you can use recovery software to recover data on your CloudDrive drive just like a physical hard drive. If you want to remove the additional space, you'll need to shrink the drive, which you can do from the "manage drive" options. 
  8. Like
    srcrist got a reaction from Christopher (Drashna) in GSuite, Cloud Drive, Plex, Gigabit, Prefetch Settings   
    The window is the period within which the trigger amount of data must be requested before the prefetch kicks in. For Plex, I would leave your trigger at 1MB and the window at something like 30 secs. 150MB forward prefetch should be sufficient for even the highest bitrate video files. 
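    If it helps to picture how the trigger, window, and forward amount interact, here is a toy sketch of that rule (my own illustration, not CloudDrive's actual logic):

```python
import time

TRIGGER_BYTES = 1 * 1024 * 1024      # 1 MB trigger
WINDOW_SECONDS = 30                  # 30 second window
FORWARD_BYTES = 150 * 1024 * 1024    # 150 MB prefetched ahead of the reader

class PrefetchDecider:
    """Prefetch kicks in once the trigger amount has been read within the window."""

    def __init__(self):
        self.reads = []  # list of (timestamp, bytes_read)

    def on_read(self, nbytes):
        now = time.monotonic()
        self.reads.append((now, nbytes))
        # Only reads inside the time window count toward the trigger.
        self.reads = [(t, n) for t, n in self.reads if now - t <= WINDOW_SECONDS]
        if sum(n for _, n in self.reads) >= TRIGGER_BYTES:
            return FORWARD_BYTES  # the caller would queue this much data for download
        return 0
```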
     
    You also should definitely lower that Plex transcoder throttle time. That's 10 minutes of CPU time, not 10 minutes of normal old clock time. That's *waaaaaay* too much. Leave it at like 60 secs to 120 secs. 
  9. Like
    srcrist got a reaction from Christopher (Drashna) in Files deleting themselves..   
    You might be able to use something like security auditing to see how and when the file is deleted: http://www.monitorware.com/common/en/articles/audit_file_deletion.php
  10. Like
    srcrist got a reaction from Christopher (Drashna) in Reindexing Google Internal Server Errors   
    The new BETA (930) seems to have solved the enumeration problem on reboot. Just FYI, if anyone else was having that problem. 
  11. Like
    srcrist got a reaction from Christopher (Drashna) in GSuite Upload Limit   
    It will start uploading again. The retries will not lengthen the API ban. It will just start working again at whatever time your quota resets. Generally, you likely won't even notice, as long as your cache drive isn't full. 
  12. Like
    srcrist got a reaction from Christopher (Drashna) in Reindexing Google Internal Server Errors   
    That's great, Christopher. Honestly, though, the efficiency changes have also done wonders to make sure that it doesn't have to enumerate every time there is an unclean shutdown. So I actually haven't had to work around this issue since those changes several weeks ago. Good news, in any case. 
  13. Like
    srcrist got a reaction from Christopher (Drashna) in [.901] Google Drive - server side disconnects/throttling   
    To be clear, Christopher: that actually is not the case here. When you hit the threshold it will begin to give you the 403 errors (User rate limit exceeded), but ONLY on uploads. It will still permit downloaded data to function as normal. It isn't actually an API limit, wholesale. They're just using the API to enforce an upload quota. Once you exceed the quota, all of your upload API calls fail but your download calls will succeed (unless you hit the 10TB/day quota on those as well). 
     
     
    Right. For the people who think they aren't hitting the 750GB/day limit, remember that it's a total limit across *every* application that is accessing that drive. So your aggregate cannot be more than that. But enough people have confirmed that the limit is somewhere right around 750GB/day, at this point, that I think we can call that settled. 
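    If you script against the Drive API yourself, you can see the distinction: once the upload quota is exhausted, upload calls come back as 403 "User rate limit exceeded" while download calls keep working. A rough sketch of checking for that case, assuming Google's standard JSON error envelope:

```python
def is_upload_quota_error(status_code, payload):
    """Return True when a Drive API error response looks like the daily
    upload quota rather than a general failure."""
    if status_code != 403:
        return False
    # Google's error envelope: {"error": {"errors": [{"reason": ...}, ...]}}
    errors = payload.get("error", {}).get("errors", [])
    return any(e.get("reason") == "userRateLimitExceeded" for e in errors)
```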
  14. Like
    srcrist got a reaction from Christopher (Drashna) in Best practices to shut down before rebooting?   
    Detaching the drives before a reboot will prevent the reindexing process, yes. Though, if you're not on the latest betas, I've noticed that the drives have been far better at recovering from an unclean shutdown than they used to be. 
  15. Like
    srcrist got a reaction from Cobalt503 in Automatically re-attach after drive error   
    I don't believe that's how it works at the moment. That might be a feature request. I think it just dismounts and waits for you to tell it to retry.
  16. Like
    srcrist got a reaction from achok16 in Best way to transfer files from Cloudrive to rclone?   
    That would be the only way to do it. CloudDrive is a block-based solution, while rClone is a file-based solution. The data, as stored on the cloud provider, is incompatible between them. You'll have to download and reupload the data. 
  17. Like
    srcrist got a reaction from Antoineki in Reindexing Google Internal Server Errors   
    So my service crashed last night. I opened a ticket and sent you the logs to take a look at, so we can set that aside. But while it was reindexing it got one of the Internal Server Error responses from Google Drive. Just one. Then it started reindexing the entire drive again starting at chunk 4,300,000 or so. Does it really have to do that? This wouldn't have been a big deal when this drive was small...but this process takes about 8 hours now every time it has to reindex the drive, and it happened at around the halfway mark. Four hours of lost time is frustrating. Does it HAVE to start over? Can it not just retry at the point where it got the error? Am I missing something?
     
    Just wanted to see what the thought was on this. 
  18. Like
    srcrist got a reaction from KiaraEvirm in Reindexing Google Internal Server Errors   
    So my service crashed last night. I opened a ticket and sent you the logs to take a look at, so we can set that aside. But while it was reindexing it got one of the Internal Server Error responses from Google Drive. Just one. Then it started reindexing the entire drive again starting at chunk 4,300,000 or so. Does it really have to do that? This wouldn't have been a big deal when this drive was small...but this process takes about 8 hours now every time it has to reindex the drive, and it happened at around the halfway mark. Four hours of lost time is frustrating. Does it HAVE to start over? Can it not just retry at the point where it got the error? Am I missing something?
     
    Just wanted to see what the thought was on this. 
  19. Like
    srcrist got a reaction from pants in How to choose what drive CloudDrive uses for copying/storing local data?   
    The drive needs to be detached and reattached to the system. This is only an option you can change when creating a drive or reattaching one. 
  20. Like
    srcrist reacted to Firerouge in How do I get a larger than 10tb drive?   
    When a drive is first created, the last advanced setting, cluster size, dictates maximum drive size.
     
    Any size over 10TB has to be typed into the cloud drive size setting box.
     
    If you want the maximum size, simply type 256TB.
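    The reason cluster size caps the drive size is that NTFS can only address roughly 2^32 clusters per volume, so the maximum volume size is approximately the cluster size times 2^32. A quick back-of-the-envelope calculation:

```python
# NTFS addresses roughly 2**32 clusters per volume, so the cluster size
# chosen at drive creation caps the maximum volume size (approximate figures).
MAX_CLUSTERS = 2 ** 32

for cluster_kib in (4, 8, 16, 32, 64):
    max_bytes = cluster_kib * 1024 * MAX_CLUSTERS
    print(f"{cluster_kib:>2} KB clusters -> ~{max_bytes / 2**40:.0f} TB maximum volume")
```

    With 64 KB clusters that works out to the 256TB figure above.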
  21. Like
    srcrist got a reaction from Christopher (Drashna) in Plex playback issues   
    On the Plex forums, there seem to be a number of complaints about the 1.6.X PMS versions. I would stick with 1.5.X for now until they work out those kinks. Even people who are not using CloudDrive are reporting connectivity and responsiveness issues. 
  22. Like
    srcrist got a reaction from Christopher (Drashna) in Looking to tweak some settings   
    If you hover over the drive size at the top of the UI, it will tell you what your minimum download size is set to. You can change it by detaching the drive and reattaching it.
     
    I would drop your prefetch time window to somewhere around 30-150secs.
     
    I generally suggest a cache at least as large as your largest media file.
     
    There is a good guide here: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/
     
    It was written with Plex in mind, but should work well for any media-based drive. 
  23. Like
    srcrist got a reaction from jaynew in Files Accidentally Deleted - Space Still Used?   
    So CloudDrive creates a real filesystem on a real (though not physical) drive structure. That means that NTFS on your CloudDrive will behave just like NTFS on a physical hard drive. So, just like a physical drive, when you delete data, NTFS simply marks that data as deleted and the drive space as available for future use. The data remains on the drive structure until overwritten by something else. So, to directly answer your questions:
     
    1) Sure. It will "go away" once it is overwritten by new data. If some sort of information security is important to you (beyond that provided by end to end drive encryption) you'd want to use one of the many tools available to overwrite hard drive data with zeros or random binary.
    2) Yes. It can. Just like any physical drive, you can use recovery tools to recover "deleted" or "lost" data off of your mounted CloudDrive. I think, on the balance, this is a huge plus for CloudDrive as a storage solution.
    3) You've already reclaimed the space. At least as far as the operating system and filesystem are concerned. Windows will freely write to any drive space that NTFS has marked as available.
     
    What's probably confusing you a little is that unlike a physical drive, where all of the sectors and data space are available from the day you purchase the drive by virtue of the fact that they are stored on a literal, physical, platter; CloudDrive only uploads the blocks once something has written to them at least the first time. This is default behavior for all online storage providers for fairly obvious reasons. You wouldn't want to have to upload, say, an entire 256TB drive structure to Google Drive BEFORE you could start using it.
     
    Nevertheless, when you created your CloudDrive the software DID actually create the full filesystem and make it accessible to your OS. So your OS will treat it as if all of that space already exists--even if it only exists conceptually until CloudDrive uploads the data.
     
    If you used a local disk space provider to create a drive, btw, you would see that it creates all of the blocks at drive creation--since local storage doesn't have the same concerns as online providers. 
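    A toy sketch of those two behaviors, using a made-up block map that has nothing to do with CloudDrive's internal structures: deleting only flips an "allocated" flag, and a block only needs uploading once something has actually been written to it:

```python
class ToyBlockDevice:
    """Deleted data stays in the blocks until overwritten, and blocks only
    exist (i.e. would only be uploaded) once they've been written."""

    def __init__(self):
        self.blocks = {}        # block index -> data, only for blocks written at least once
        self.allocated = set()  # which blocks the filesystem considers in use

    def write(self, index, data):
        self.blocks[index] = data   # the first write is what would trigger an upload
        self.allocated.add(index)

    def delete(self, index):
        self.allocated.discard(index)  # the data itself is untouched, which is
                                       # why recovery tools can still find it

    def recoverable(self, index):
        return index in self.blocks and index not in self.allocated
```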
  24. Like
    srcrist got a reaction from bnbg in Windows explorer shows wrong drive size?   
    EDIT: This was wrong. And old. 