srcrist

Reputation Activity

  1. Like
    srcrist got a reaction from Edward in CloudDrive: Pool into DP or separate?   
    Once you've created the nested pool, you'll need to move all of the existing data into the hidden PoolPart folder that the outer pool creates on the subpool drive before it will be accessible from the pool. It's the same process you'd follow if you added a drive that already had data on it to a non-nested pool: for the data to be accessible within the pool, it has to live inside the pool structure. Right now the drives in your subpool should each have a hidden PoolPart folder plus all of the other data sitting outside it. You just need to move all of that other data into the hidden folder. See this older thread for a similar situation: https://community.covecube.com/index.php?/topic/4040-data-now-showing-in-hierarchical-pool/&sortby=date
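    If you'd rather script the move than drag things around in Explorer, something along these lines works. It's only a rough sketch: it assumes the drive in question is mounted at D: and that the hidden PoolPart.* folder already exists at its root, so check the actual folder name and adjust the paths for your own layout.
    # Find the hidden PoolPart folder at the root of D: (path is an assumption -- adjust)
    $poolPart = Get-ChildItem -Path 'D:\' -Directory -Hidden -Filter 'PoolPart.*' | Select-Object -First 1

    # Move everything else on the drive into it, skipping system folders
    Get-ChildItem -Path 'D:\' -Force |
        Where-Object { $_.Name -notlike 'PoolPart.*' -and
                       $_.Name -ne 'System Volume Information' -and
                       $_.Name -ne '$RECYCLE.BIN' } |
        Move-Item -Destination $poolPart.FullName -Verbose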
     
  2. Thanks
    srcrist got a reaction from msq in Google Drive is suddenly marked as experimental provider?   
    An API key is for an application: it's what the app uses to contact Google and make requests of their service. Your new API key will be used by the application to request access to all of your drives, regardless of the Google account they are on. The API key isn't the authorization to use a given account; it's just the key the application uses to request access to the data on whatever account you sign in with.
    As an obvious example, StableBit's default key for CloudDrive was created on their Google account, but you were still using it to access your drives before you switched to your own key just now.
    When you set it up, you'll see that you still have to sign in and approve your app. It will even give you a warning, since, unlike with the official CloudDrive key, Google can't vouch for the app requesting access with your key.
    This just isn't how an API key works. Are you sure you're logging in with the correct account for each drive once you've added the new key? You don't log in with the account you used to create the key. You still log in with whatever credentials you used to create each drive. 
  3. Like
    srcrist got a reaction from CokeZero in Optimal settings for Plex   
    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB Expandable cache
    10 download threads
    5 upload threads, turn off background i/o
    upload threshold 1MB or 5 minutes
    minimum download size 20MB
    20MB Prefetch trigger
    175MB Prefetch forward
    10 second Prefetch time window
  4. Like
    srcrist got a reaction from buddyboy101 in Pooling/Duplication Question (Plex Scenario)   
    This is correct. 
     
    It isn't so much that you shouldn't, it's that you can't. Google has a server-side hard limit of 750GB per day. You can avoid hitting the cap by throttling the upload in CloudDrive to around 70mbps. As long as it's throttled, you won't have to worry about it. Just let CloudDrive and DrivePool do their thing. CloudDrive will upload at the pace it can, and DrivePool will duplicate data as it's able.
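    For what it's worth, the ~70mbps figure is just the daily cap spread across 24 hours (assuming decimal gigabytes); a quick back-of-the-envelope check:
    # 750 GB/day expressed as a sustained bitrate (decimal units assumed)
    $dailyCapGB = 750
    [math]::Round(($dailyCapGB * 8000) / 86400, 1)   # 8,000 megabits per GB / 86,400 seconds per day = ~69.4 mbps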
    Yes. DrivePool simply passes the calls to the underlying file systems in the pool. It should happen effectively simultaneously. 
     
    This is all configurable in the balancer settings. You can choose how it handles drive failure, and when. DrivePool can also work in conjunction with Scanner to move data off of drives as soon as SMART indicates a problem, if you configure it to do so. 
     
    DrivePool can differentiate between these situations, but if YOU inadvertently issue a delete command, the file will be deleted from both locations if your balancer and file placement settings are configured to do so. DrivePool will pass the deletion on to the underlying file system on all relevant drives. If a file went "missing" because of some sort of error, though, DrivePool would reduplicate it on the next duplication pass. Files mysteriously disappearing is, of course, a worrying sign worthy of further investigation and attention.
     
    It matters in the sense that your available write cache will influence the speed of data flow to the drive if you're writing data. Once the cache fills up, additional writes to the drive will be throttled. But this isn't really relevant immediately, since you'll be copying more than enough data to fill the cache no matter how large it is. If you're only using the drive for redundancy, I'd probably suggest going with a proportional mode cache set to something like 75% write, 25% read. 
    Note that DrivePool will also read stripe off of the CloudDrive if you let it, so you'll have some reads when the data is accessed. So you'll want some read cache available. 
    This isn't really relevant for your use case. With files the size you're considering for storage, a larger cluster size won't cost you any meaningful amount of space. Use the cluster size you need for the volume size you require.
    Note that volumes over 60TB cannot be addressed by Volume Shadow Copy and, thus, Chkdsk. So you'll want to keep it below that. Relatedly, note that you can partition a single CloudDrive into multiple sub 60TB volumes as your collection grows, and each of those volumes can be addressed by VSC. Just some future-proofing advice. I use 25TB volumes, personally, and expand my CloudDrive and add a new volume to DrivePool as necessary. 
  5. Like
    srcrist got a reaction from LogicDaemon in How does the encryption work when not selected   
    There is no encryption if you did not choose to enable it. The data is simply obfuscated by the storage format that CloudDrive uses to store the data on your provider. It is theoretically possible to analyze the chunks of storage data on your provider to view the data they contain.
    As far as reinstalling Windows or changing to a different computer, you'll want to detach the drive from your current installation and reattach it to the new installation or new machine. CloudDrive can make sense of the data on your provider. In the case of some sort of system failure, you would have to force mount the drive, and CloudDrive will read the data, but you may lose any data that was sitting in your cache waiting to be uploaded during the failure. Note that CloudDrive does not upload user-accessible data to your provider by design. Other tools like rClone would be required to accomplish that. 
    My general advice, in any case, would be to enable encryption. There is effectively no added overhead from using it, and the peace of mind is well worth it.
  6. Like
    srcrist got a reaction from otravers in Optimal settings for Plex   
    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:
    20MB chunk size
    50+ GB Expandable cache
    10 download threads
    5 upload threads, turn off background i/o
    upload threshold 1MB or 5 minutes
    minimum download size 20MB
    20MB Prefetch trigger
    175MB Prefetch forward
    10 second Prefetch time window
  7. Like
    srcrist reacted to Christopher (Drashna) in CloudDrive File System Damaged   
    We've definitely talked about it.  
     
    And to be honest, I'm not sure what we can do. We do already store the file system data if you have pinning enabled, in theory. Though there are circumstances that can cause it to purge that info.
    The other issue is that, by default, every block is checksummed, and that is checked on download. So if corrupted data is downloaded, you would get errors and a warning about it.
    However, that didn't happen here. And if that is the case, more than likely it sent old/out-of-date data. Which ... I'm not sure how we can handle that in a way that isn't extremely complex.
    But again, this is something that is on our mind. 
  8. Like
    srcrist got a reaction from steffenmand in Request: Increased block size   
    Again, other providers *can* still use larger chunks.
    Please see the changelog:
    This was because of issue 24914, documented here.
    Again, this isn't really correct. The problem, as documented above, is that larger chunks result in more retrieval calls to particular chunks, thus triggering Google's download quota limitations. That is the problem that I could not remember. It was not because of concerns about speed, and it was not a general problem with all providers.
     
    EDIT: It looks like the issue with Google Drive might be resolved with an increase in the partial read size as you discussed in this post, but the code change request for that is still incomplete. So this prerequisite still isn't met. Maybe something to follow up with Christopher and Alex about. 
  9. Thanks
    srcrist got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    To my knowledge, Google does not throttle bandwidth at all, no. But they do have the upload limit of 750GB/day, which means that a large number of upload threads is relatively pointless if you're constantly uploading large amounts of data. It's pretty easy to hit 75mbps or so with only 2 or 3 upload threads, and anything more than that will exceed Google's upload limit anyway. If you *know* that you're uploading less than 750GB that day anyway, though, you could theoretically get several hundred mbps performance out of 10 threads. So it's sort of situational.
    Many of us do use servers with 1gbps synchronous pipes, in any case, so there is a performance benefit to more threads...at least in the short term. 
    But, ultimately, I'm mostly just interested in understanding the technical details from Christopher so that I can experiment and tweak. I just feel like I have a fundamental misunderstanding of how the API limits work. 
  10. Thanks
    srcrist got a reaction from MandalorePatriot in Cache Drive, and the limits of Transferable data?   
    It won't really limit your ability to upload larger amounts of data; it just throttles writes to the drive when the cache drive fills up. So if you have 150GB of local disk space on the cache drive but you copy 200GB of data to it, the first roughly 145GB will copy at essentially full speed, as if you're just copying from one local drive to another, and then drive writes will be throttled so that the last 55GB slowly copies to the CloudDrive drive as chunks are uploaded from your local cache to the cloud provider.
    Long story short: it isn't a problem unless high speeds are a concern. As long as you're fine copying data at roughly the speed of your upload, it will work fine no matter how much data you're writing to the CloudDrive drive. 
  11. Like
    srcrist got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    That's just a warning. Your thread count is a bit too high, and you're probably getting throttled. Google only allows around 15 simultaneous threads. Try dropping your upload threads to 5 and keeping your download threads where they are. That warning will probably go away.
    Ultimately, though, even temporary network hiccups can occasionally cause those warnings. So it might also be nothing. It's only something to worry about if it happens regularly and frequently. 
  12. Like
    srcrist got a reaction from MandalorePatriot in Warning from GDrive (Plex)   
    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  13. Like
    srcrist reacted to MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    Thank you, I really appreciate your quick and informative answer!
  14. Thanks
    srcrist got a reaction from MandalorePatriot in CloudDrive Cache - SSD vs HDD   
    SSD. Disk usage for the cache, particularly with a larger drive, can be heavy. I always suggest an SSD cache drive. You'll definitely notice a significant impact. Aside from upload space, most drives don't need or generally benefit from a cache larger than 50-100GB or so. You'll definitely get diminishing returns with anything larger than that. So speed is far more important than size. 
  15. Like
    srcrist got a reaction from jonesc in Can't recommend this product anymore.   
    I will note that I really do hope that Alex and Christopher can figure out what happened with the folks that lost data, and make a post to help the rest of us understand. It does worry me somewhat, even though I didn't suffer any data loss myself. 
  16. Like
    srcrist reacted to Bigsease30 in Cloud Drive + G Suite = Backup disk   
    Made the recommended changes. Now the waiting game begins. Thanks again.
  17. Like
    srcrist got a reaction from jaynew in Longevity Concerns   
    I think those are fine concerns. Two things Alex and Christopher have said before: 1) Covecube isn't in any danger of shutting down any time soon, and 2) if it ever did, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you could still retrieve the individual chunks from your storage, you'd be able to convert them. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions.
  18. Like
    srcrist got a reaction from Fyaskass in Download Speed problem   
    EDIT: Disregard my previous post. I missed some context. 
    I'm not sure why it's slower for you. Your settings are mostly fine, except you're probably using too many threads. Leave the download threads at 10, and drop the upload threads to 5. Turn off background i/o as well, and you can raise your minimum download to 20MB if that's your chunk size. Those will help a little bit, but I'm sure you're able to hit at least 300mbps even with the settings you're using.
    Here is my CloudDrive copying a 23GB file:
     

     
  19. Like
    srcrist got a reaction from RG9400 in Optimal settings for Plex   
    I wouldn't make your minimum download size any larger than a single chunk. There really isn't any point in this use case, since we can use smart prefetching to grab larger amounts of data when needed.
    Your prefetcher probably needs some adjustment too. A 1MB trigger in 10 seconds means that it will grab 300MB of data every time an application requests 1MB or more of data within 10 seconds, which is basically all the time, and already covered by a minimum download of 20MB with a 20MB chunk size. Instead, change the trigger to 20MB and leave the window at 10 seconds. That is about 16mbps, or the rate of a moderate-quality 1080p MKV encode. This will engage the prefetcher for video streaming, but let it rest if you're just looking at smaller files like an EPUB or PDF. We really only need to engage the prefetcher for higher-quality streams here, since a 1gbps connection can grab smaller files in seconds regardless.
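    The 16mbps figure is just the trigger expressed as a bitrate; a quick check:
    # 20 MB requested within a 10 second window, as a sustained bitrate
    $triggerMB = 20
    $windowSec = 10
    ($triggerMB * 8) / $windowSec    # = 16 megabits/second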
    A prefetch amount of 300MB is fine with a 1000mbps connection. You could drop it, if you wanted to be more efficient, but there probably isn't any need with a 1gbps connection. 
  20. Thanks
    srcrist got a reaction from ndr in Optimal settings for Plex   
    Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives. 
    So let's say you have an existing CloudDrive volume at E:.
    1. First you'll use DrivePool to create a new pool, D:, and add E:
    2. Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
    3. Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
    4. Then you go back to DrivePool and add F: to your D: pool. The pool now contains both E: and F:
    5. Now navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool.
    6. Then just navigate to D: and all of your content will be there, as well as plenty of room for more. You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the pool's drive letter with whatever you used to use for your CloudDrive drive to make things easier.
    NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up (see the sketch at the end of this post), and you don't need to create volume labels for any future volumes you format either.
    My drive layout looks like this:
     

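    On that NOTE about drive letters: if you'd rather drop them from PowerShell than from Disk Management, something like this does it. Just a sketch, assuming the pooled CloudDrive volume currently sits at F: and that you're on Windows 8/Server 2012 or newer for the Storage module cmdlets.
    # Remove the F: letter from a pooled CloudDrive volume; it stays in the pool either way
    Get-Partition -DriveLetter F | Remove-PartitionAccessPath -AccessPath 'F:\'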
  21. Like
    srcrist got a reaction from Zanena in Optimal settings for Plex   
    No. You can't change cluster size after you've formatted the drive. But, again, I mentioned that disclaimer. It won't affect you. You want the optimal settings for plex, not the theoretical minimum and maximum for the drive. The only salient question for you is "will a larger cluster size negatively impact plex's ability to serve numerous large video files on the fly, or the ability of other applications to manage a media library?" And the answer is no. It may have some theoretical inefficiencies for some purposes, but you don't care about those. They won't affect your use case in the least. 
  22. Thanks
    srcrist got a reaction from albur in Google Drive slow   
    Your prefetcher settings are effectively useless, and your minimum download is far too low. Your upload threads are probably too high as well. 
    Change your minimum download to 20MB, drop your upload threads to no more than 5. Use more reasonable prefetcher settings like a 20MB trigger in 10 seconds with a 500MB prefetch. 
    All of that should make a huge difference. 
  23. Thanks
    srcrist reacted to Quinn in [HOWTO] File Location Catalog   
    I've been seeing quite a few requests about knowing which files are on which drives, in case unduplicated files ever need to be recovered. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...
     
    Prerequisites:
     
    • You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    • All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    • Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit.
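    One way to set up that daily 3am run is with a scheduled task; the sketch below is only an illustration (the script path and task name are placeholders, and the ScheduledTasks cmdlets need Windows 8/Server 2012 or newer, run from an elevated prompt):
    # Register the catalog script to run every day at 3am (paths/names are placeholders)
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
               -Argument '-ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'DrivePool File Catalog' -Action $action -Trigger $trigger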
     
    Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):
     

     
    Here is the code I used (it's also attached in the .zip file):
    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

    # This creates an empty table to store details of the files
    $filelist = @()

    # This goes through each file, and populates the table with the drive name, file name and directory name
    foreach ($file in $files) {
        $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
    }

    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above.
     
    Hope it helps!
     
    -Quinn
     
     
    gj80 has written a further improvement to this script:
     
    DPFileList.zip
    And B00ze has further improved the script (Win7 fixes):
    DrivePool-Generate-CSV-Log-V1.60.zip
     
  24. Thanks
    srcrist got a reaction from schudtweb in CloudDrive and GSuite and some errors   
    That makes me wonder if there is corruption in the installation. Do a search and see if you can find the service folder: look for a directory called "ChunkIds", which is where your DB is stored.
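    If it helps, here's a quick way to search for it from PowerShell. Just a sketch: it assumes the folder lives somewhere under C:\, and the -Force/-ErrorAction switches are only there to include hidden folders and skip access-denied noise.
    # Look for the CloudDrive ChunkIds folder anywhere under C:\ (including hidden paths)
    Get-ChildItem -Path 'C:\' -Directory -Recurse -Force -Filter 'ChunkIds' -ErrorAction SilentlyContinue |
        Select-Object -ExpandProperty FullName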
  25. Thanks
    srcrist got a reaction from schudtweb in CloudDrive and GSuite and some errors   
    The file errors make me wonder if there is something wrong with the installation. Go ahead and run the troubleshooter (http://wiki.covecube.com/StableBit_Troubleshooter) and include your ticket number. I think you'll have to wait for Christopher to handle this one. I'm not sure what's going on. 