
srcrist

Members
  • Posts: 466
  • Joined
  • Last visited
  • Days Won: 36

Community Answers

  1. srcrist's post in Is deleting files functional? How is it handled? was marked as the answer   
    Note that deleting files off of the drive and deleting the drive itself are not the same thing. Simply removing files from the drive's file system will not remove the drive data from your provider, because a drive of X capacity still exists, just as deleting files from a physical drive does not cause it to physically shrink. If you want to remove the storage files located on the provider, you have to either shrink the drive or destroy it.
    Also note that, with CloudDrive, the "files" stored on the provider are not representative of the files stored on the drive in the first place. They are chunks of data that comprise the drive structure that CloudDrive uses to store information. Once they exist on the provider, there is no need to delete them unless the drive structure itself changes. CloudDrive will simply overwrite and modify them as data is added and removed. 
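    To make the chunk model concrete, here's a rough conceptual sketch in Python. This is purely illustrative; the chunk size and structure are placeholders, not CloudDrive's actual on-provider format:

        # Conceptual model of a block-based cloud drive: the provider stores
        # fixed-size chunks of the drive image, not your individual files.
        CHUNK_SIZE = 20 * 1024 * 1024  # 20MB chunks (illustrative)

        provider_chunks = {}  # chunk index -> chunk bytes, as stored on the provider

        def write_drive_bytes(offset, data):
            """Writing a file updates whichever chunks its bytes land in."""
            while data:
                index, within = divmod(offset, CHUNK_SIZE)
                chunk = bytearray(provider_chunks.get(index, b"\0" * CHUNK_SIZE))
                take = min(len(data), CHUNK_SIZE - within)
                chunk[within:within + take] = data[:take]
                provider_chunks[index] = bytes(chunk)  # overwritten in place, never deleted
                offset, data = offset + take, data[take:]

        # "Deleting a file" only flips file system metadata inside the drive
        # image; no provider chunk goes away, so provider usage doesn't shrink.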
  2. srcrist's post in Custom OAuth not being accepted was marked as the answer   
    You're missing the quotation marks around the client ID and secret. That's why it's being treated as a number. Notice that the example and the screencap on the instructional page you linked both have quotes.
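    For anyone hitting the same issue, here's a minimal sketch of the difference (the key names are placeholders, not CloudDrive's exact settings schema; the point is just that JSON strings need quotes):

        import json

        # WRONG: without quotes, JSON parses the client ID as a number.
        bad = '{"ClientId": 123456789, "ClientSecret": 987654321}'

        # RIGHT: quoted values parse as strings, which is what OAuth expects.
        good = '{"ClientId": "123456789-abc.apps.example.com", "ClientSecret": "s3cret"}'

        for label, raw in (("bad", bad), ("good", good)):
            settings = json.loads(raw)
            print(label, type(settings["ClientId"]).__name__)  # bad -> int, good -> str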
  3. srcrist's post in (pre-sales) Spesific info on what services SB CloudDrive can interface with? was marked as the answer   
    I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.
    The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can just install it and look at the list that way, if you'd like.
    CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.
    The primary tradeoff versus a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different from CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends, for specific use cases. rClone's entire goal is to make it easier to access your data from a variety of locations and contexts; CloudDrive's goal is to make your cloud storage function as much like a physical drive as possible.
    I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it.
    There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.
    Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short or even medium term agenda.
    Hope some of that helps.
  4. srcrist's post in Move data to a partition was marked as the answer   
    Volumes each have their own file system. Moving data between volumes will require the data to be reuploaded. Only moves within the same file system can be made without reuploading the data, because only the file system data needs to be modified to make such a change.
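    You can see the same distinction with ordinary file operations. A quick illustration in Python (the paths are placeholders):

        import os
        import shutil

        # Within one volume/file system: a rename only rewrites file system
        # metadata, so it completes instantly regardless of file size.
        os.rename(r"D:\media\movie.mkv", r"D:\archive\movie.mkv")

        # Across volumes there is no shared file system to update, so the data
        # itself must be copied (and, on a CloudDrive, reuploaded). os.rename
        # fails across volumes; shutil.move falls back to copy-then-delete.
        shutil.move(r"D:\archive\movie.mkv", r"E:\archive\movie.mkv")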
  5. srcrist's post in Cloud Drive + G Suite = Backup disk was marked as the answer   
    Change your minimum download to your chunk size (probably 20MB, if you used the largest chunk possible). If you're just using this for backup, you really only need to engage the prefetcher when you're making large copies off the drive, so set it to 10MB in 10 seconds and have it grab maybe 100MB at a time. You can probably even disable it, if you want to. Keeping it enabled will basically only help smooth out network hiccups and help copies move more smoothly when you're copying data off of the drive. Other than that, you look good.
    Glad to help. Hope everything works out well for you. 
  6. srcrist's post in Longevity Concerns was marked as the answer   
    I think those are fine concerns. One thing that Alex and Christopher have said before is that 1) Covecube isn't in any danger of shutting down any time soon, and 2) if it ever were, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you had access to retrieve the individual chunks from your storage, you'd be able to convert it. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions.
  7. srcrist's post in CloudDrive drops my Internet connection was marked as the answer   
    OK. In that case, I think my follow-up question is for some more detail about what you mean when you say that the router is "dropping" your internet connection. Is it simply not responding? Are you losing your DHCP lease from your modem? What, specifically, is it doing when it drops?
    I'm trying to think of things that *any* application on your PC could be doing that would cause a problem with the connection between your router and your WAN without there being an underlying router issue, but I'm having trouble thinking of anything. Some more detail might help.
     
     
    You're welcome. Some of them are probably out of date compared to what I'm using today. I had written up a little tutorial on reddit to help people, but I ended up deleting that account and wiping all of the posts. 
    Nothing will influence your upload more than the number of upload threads. Personally, I throttle my upload to 70 Mbps so that I never exceed Google's 750GB/day limit, but my current settings easily saturate that. My settings are as follows, and I highly recommend them:
      • 10 download threads
      • 5 upload threads
      • No background I/O (I disable this because I want to actually prioritize the reads over the writes for the Plex server)
      • Uploads throttled to 70 Mbps
      • Upload threshold: 1MB or 5 minutes
      • 10MB minimum download size (seems to provide a relatively responsive drive while fetching more than enough data for the server in most cases)
      • 20MB prefetch trigger
      • 500MB prefetch forward
      • 10 second prefetch window
    I actually calculated the prefetch settings based on the typical remux-quality video file on my server. 20MB in 10 seconds is roughly equivalent to a 16 Mbps video file, if I remember the standard that I used correctly. That's actually a very low bitrate for a remuxed Blu-ray, which is typically closer to 35 Mbps. The 10MB minimum download seems to handle any lower quality files just fine.
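    The arithmetic behind the throttle and the trigger is simple enough to check yourself. A quick sketch in Python (using decimal megabytes as a rough approximation):

        SECONDS_PER_DAY = 86_400

        # Google's 750GB/day cap works out to roughly 69 Mbps sustained,
        # which is why a 70 Mbps throttle sits right at the limit.
        max_sustained_mbps = 750 * 1000 * 8 / SECONDS_PER_DAY
        print(f"750GB/day ~ {max_sustained_mbps:.1f} Mbps sustained")  # ~69.4

        # Prefetch trigger: reading 20MB within a 10 second window is what a
        # stream at about 16 Mbps does, so anything at or above that bitrate
        # trips the prefetcher.
        trigger_mb, window_s = 20, 10
        print(f"{trigger_mb}MB / {window_s}s ~ {trigger_mb * 8 / window_s:.0f} Mbps")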
    I still play with the settings once in a while, but this setup successfully serves up to 6 or 7 very high quality video streams at a time. I've been extremely happy with it. I'm using one large drive (275TB) divided up into five 55TB volumes (so I can use chkdsk), and it has about 195TB of content on it. I combine all of the volumes using DrivePool and point Plex at the DrivePool drive. Works wonderfully.
    The drive structure is 20MB chunks with a 100MB chunk cache, running with 50GB expandable cache on an SSD. 
     
  8. srcrist's post in Clouddrive, Gsuite unlimited, and Plex was marked as the answer   
    It says Gsuite in the title. I'm assuming that means Google Drive. Correct me if all of the following is wrong, Middge. 
     
    Hey Middge,
    I use a CloudDrive for Plex with Google Drive myself. I can frequently do 5-6 remux quality streams at once. I haven't noticed a drop in capacity aside from Google's relatively new upload limits. 
    Yes. But this is going to require that you remake the drive. My server also has a fast pipe, and I've raised the minimum download to 20MB as well. I really haven't noticed any slowdown in responsiveness because the connection is so fast, and it keeps the overall throughput up.
    This is fine. You can play with it a bit. Some people like higher numbers like 5 or 10 MB triggers, but I've tried those and I keep going back to 1 MB as well, because I've just found it to be the most consistent, performance-wise, and I really want it to grab a chunk of *all* streaming media immediately.
    This is way too low for a lot of media. I would raise this to somewhere between 150MB and 300MB. Think of the prefetch as a rolling buffer: it will continue to fill as the data in it is used. The higher the number, then, the more tolerant your stream will be to periodic network hiccups. The only real danger is that if you make it too large (and I'm talking more than 500MB here) it will basically always be prefetching, and you'll congest your connection if you hit 4 or 5 streams.
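    To put numbers on the buffer analogy, here's the same back-of-the-envelope math in Python (35 Mbps is a typical remux bitrate, not an exact figure):

        def buffer_seconds(prefetch_forward_mb, stream_mbps):
            """Seconds of playback a full prefetch buffer covers at a given bitrate."""
            return prefetch_forward_mb * 8 / stream_mbps

        for size_mb in (150, 300, 500):
            print(f"{size_mb}MB ~ {buffer_seconds(size_mb, 35):.0f}s of 35 Mbps video")
        # 150MB ~ 34s, 300MB ~ 69s, 500MB ~ 114s of network-hiccup tolerance.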
    I would drop this to 30, whether you go with a 1MB, 5MB, or 10MB trigger. 240 seconds almost makes the trigger amount pointless anyway; you're going to hit all of those benchmarks in 4 minutes if you're streaming most modern media files. A 30 second window should be fine.
    WAAAAY too many. You're almost certainly throttling yourself with this, particularly with the smaller-than-maximum chunk size, since it already has to make more requests than if you were using 20MB chunks. I use three CloudDrives in a pool (a legacy arrangement from before I understood things better; don't do it, just expand a single CloudDrive with additional volumes), and I keep them all at 5 upload and 5 download threads. Even if I had a single drive, I'd probably not exceed 5 upload, 10 download. 20 and 20 is *way* too high and entirely unnecessary with a 1 Gbps connection.
    These are all fine. If you can afford a larger cache, bigger is *always* better. But it isn't necessary. The server I use only has 3x 140GB SSDs, so my caches are even smaller than yours and still function great. The fast connection goes a long way toward making up for a small cache size...but if I could have a 500GB cache I definitely would. 
  9. srcrist's post in GSuite Upload Limit was marked as the answer   
    It will start uploading again. The retries will not lengthen the API ban. It will just start working again at whatever time your reset is. Generally, you likely won't even notice, as long as your cache drive isn't full.
  10. srcrist's post in Best way to transfer files from Cloudrive to rclone? was marked as the answer   
    That would be the only way to do it. CloudDrive is a block based solution, while rClone is a file based solution. The data, as stored on the cloud provider, is incompatible between them. You'll have to download and reupload the data. 
  11. srcrist's post in Files Accidentally Deleted - Space Still Used? was marked as the answer   
    So CloudDrive creates a real filesystem on a real (though not physical) drive structure. That means that NTFS on your CloudDrive will behave just like NTFS on a physical hard drive. So, just like a physical drive, when you delete data, NTFS simply marks that data as deleted and the drive space as available for future use. The data remains on the drive structure until overwritten by something else. So, to directly answer your questions:
     
    1) Sure. It will "go away" once it is overwritten by new data. If some sort of information security is important to you (beyond that provided by end-to-end drive encryption), you'd want to use one of the many tools available to overwrite hard drive data with zeros or random binary (see the sketch after this list).
    2) Yes. It can. Just like any physical drive, you can use recovery tools to recover "deleted" or "lost" data off of your mounted CloudDrive. I think, on the balance, this is a huge plus for CloudDrive as a storage solution.
    3) You've already reclaimed the space. At least as far as the operating system and filesystem are concerned. Windows will freely write to any drive space that NTFS has marked as available.
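    For point 1, the simplest version of such a tool just fills the volume's free space with zeros and then deletes the filler file. A bare-bones sketch in Python (the drive letter is a placeholder; dedicated wiping tools are the better choice in practice):

        import os

        def wipe_free_space(path=r"X:\wipe.tmp", block=64 * 1024 * 1024):
            """Overwrite a volume's free space by filling it with zeros."""
            zeros = b"\0" * block
            try:
                with open(path, "wb") as f:
                    while True:
                        f.write(zeros)  # keep writing until the volume is full
            except OSError:
                pass  # disk full: every previously free block is now zeroed
            finally:
                os.remove(path)  # delete the filler to release the space again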
     
    What's probably confusing you a little is that, unlike a physical drive, where all of the sectors and data space are available from the day you purchase the drive by virtue of being stored on a literal, physical platter, CloudDrive only uploads a block once something has written to it at least once. This is default behavior for all online storage providers, for fairly obvious reasons. You wouldn't want to have to upload, say, an entire 256TB drive structure to Google Drive BEFORE you could start using it.
     
    Nevertheless, when you created your CloudDrive the software DID actually create the full filesystem and make it accessible to your OS. So your OS will treat it as if all of that space already exists--even if it only exists conceptually until CloudDrive uploads the data.
     
    If you used a local disk space provider to create a drive, btw, you would see that it creates all of the blocks at drive creation--since local storage doesn't have the same concerns as online providers. 