Covecube Inc.

thnz

Members
  • Content Count: 139
  • Joined
  • Last visited
  • Days Won: 7

Reputation Activity

  1. Like
    thnz got a reaction from Antoineki in Amazon Cloud Drive - Why is it not supported?   
    I have several files on ACD that return 404 errors when downloads are attempted ({"statusCode":"404"}, to be precise!). They appear as normal in the web interface, but downloads always fail. Is this the same issue? I've made contact with Amazon regarding this, and have had it escalated to the Drive team, so I'm hoping for a response soon™.
     
    On the other hand, if it was the 429 errors when downloading, that has since been solved.
  2. Like
    thnz got a reaction from KiaraEvirm in Amazon Cloud Drive - Why is it not supported?   
  3. Like
    thnz reacted to Christopher (Drashna) in Subpools or Drive Groups Within a Pool   
    It's in Alpha.
    http://dl.covecube.com/DrivePoolWindows/beta/download/changes.txt
     
    [D] Added hierarchical pooling support:
    - Pools are now made up of either disks or other pools.
    - Each pool handles its own separate folder duplication settings.
    - Balancing can work over a pool part, but file placement settings are limited to the entire pool part only.
    - Circular pooling is not allowed (e.g. Pool A -> Pool B AND Pool B -> Pool A). Contrary to popular belief, this does not lead to infinite disk space.
  4. Like
    thnz got a reaction from 17thspartan in Full Cache Re-upload after crash?   
    I see the latest beta version (.818) contains the following change:
     
     
    Hopefully that fixes this issue, and I can now go back to a much larger cache size without the risk of it going into recovery so often after a restart.
  5. Like
    thnz got a reaction from postvlad in Amazon Cloud Drive - Why is it not supported?   
  6. Like
    thnz reacted to wid_sbdp in Drive keeps dismounting   
    I changed it to 15 and still got a dismount. Sorry, but I'm done with CloudDrive* for the moment; I'll revisit it in a few months and see what sort of improvements have been implemented. I'm 99% sure it has to do with the 1MB prefetch block downloads - that's just absurd I/O to put on ACD for a 3GB file that needs to be prefetched. Why that is tied to "minimum download size" is beyond me (the wording of that dropdown may need clarifying), and the fact that it defaults to 1MB if you choose "no minimum size" makes absolutely no sense. Yeah, if I remade my drive with settings that work with what I know now, I might not have as many issues (I haven't been having issues with the Google Drive I made with the increased minimum download size), but honestly I'm just tired of getting up to ~3TB of content loaded into a drive and then having to start over when someone notices another random setting that can only be changed at drive creation.
     
    * From a media storage standpoint, still using some licenses on my PCs at home because it's great for backing up "regular" stuff. I use it daily at my desktop moving off Dropbox to ACD and having an encrypted store mounted locally on my PC. AMAZING for that.
  7. Like
    thnz got a reaction from Christopher (Drashna) in Provider Speeds Feedback   
    I had mostly been testing with ACD (with a custom dev limited security profile), and have only recently been trying Google Drive instead - speeds are far superior on the latter. Google traffic seems to go through Sydney, so it's also a lot closer than Amazon (I'm in NZ - Amazon data goes to the US). Granted it's more expensive ($10 vs $5/mo), but it's probably worth it for the speed and reliability - files on Amazon seem to have a very small chance of randomly disappearing altogether.
     
    NZ is making great progress in net speeds - so long as you're not rural - gigabit speeds are now available in a lot of places - you can get uncapped/unthrottled/unshaped 1000/500 for $130NZD ($95USD)/mo.
  8. Like
    thnz got a reaction from steffenmand in Provider Speeds Feedback   
  9. Like
    thnz got a reaction from Christopher (Drashna) in New drives only pinning 588KB data   
    https://addons.mozilla.org/en-US/firefox/addon/alertbox/
     
    Have it set to poll for changes every 8 hrs atm.
  10. Like
    thnz got a reaction from Christopher (Drashna) in New drives only pinning 588KB data   
    Posted what was causing it before going to bed last night. When I first opened Firefox this morning I got a popup saying the changelog had been updated - checked it out and found it had been fixed 20mins prior
  11. Like
    thnz got a reaction from Christopher (Drashna) in Long term use. Will things change much?   
    Here's a thread on Amazon's dev forums I saw earlier in the year regarding storing a lot (~100TB) of data on Amazon Drive.
     
     
    FWIW I have ~3TB backed up to CrashPlan, with most of that also backed up to Amazon via CloudDrive.
  12. Like
    thnz got a reaction from msq in Full Cache Re-upload after crash?   
    http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/
     
    Might be by design:
     
     
  13. Like
    thnz reacted to Alex in Large Chunks and the I/O Manager   
    In this post I'm going to talk about the new large storage chunk support in StableBit CloudDrive 1.0.0.421 BETA, why that's important, and how StableBit CloudDrive manages provider I/O overall.
     
    Want to Download it?
     
    Currently, StableBit CloudDrive 1.0.0.421 BETA is an internal development build and like any of our internal builds, if you'd like, you can download it (or most likely a newer build) here:
    http://wiki.covecube.com/Downloads
     
    The I/O Manager
     
    Before I start talking about chunks and what the current change actually means, let's talk a bit about how StableBit CloudDrive handles provider I/O. Well, first let's define what provider I/O actually is. Provider I/O is the combination of all of the read and write requests (or download and upload requests) that are serviced by your provider of choice. For example, if your cloud drive is storing data in Amazon S3, provider I/O consists of all of the download and upload requests from and to Amazon S3.
     
    Now it's important to differentiate provider I/O from cloud drive I/O, because the two are not really the same thing. That's because all I/O to and from the drive itself is performed directly in the kernel by our cloud drive driver (cloudfs_disk.sys). But as a result of some cloud drive I/O, provider I/O can be generated. For example, this happens when there is an incoming read request to the drive for some data that is not stored in the local cache. In this case, the kernel driver cooperatively coordinates with the StableBit CloudDrive system service in order to generate provider I/O and to complete the incoming read request in a timely manner.
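    As a rough illustration of that cache-miss read path, here is a toy sketch. All names are made up for this example; the real implementation is split between the kernel driver (cloudfs_disk.sys) and the system service, not written in Python.

```python
# Illustrative sketch of the cache-miss read path described above.
# All class and function names are hypothetical.

class LocalCache:
    """Toy byte-range cache keyed by offset."""
    def __init__(self):
        self.store = {}

    def get(self, offset, length):
        data = self.store.get(offset)
        return data if data is not None and len(data) >= length else None

    def put(self, offset, data):
        self.store[offset] = data


class CloudDriveReadPath:
    def __init__(self, cache, provider_download):
        self.cache = cache
        self.provider_download = provider_download  # generates provider I/O

    def read(self, offset, length):
        cached = self.cache.get(offset, length)
        if cached is not None:
            return cached[:length]  # served from the cache: no provider I/O
        # Cache miss: generate provider I/O to satisfy the read quickly,
        # then populate the cache for subsequent reads.
        data = self.provider_download(offset, length)
        self.cache.put(offset, data)
        return data
```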
     
    All provider I/O is serviced by the I/O Manager, which lives in the StableBit CloudDrive system service.
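    As a minimal sketch of what such an I/O manager does - parallelizing provider requests across threads and retrying failures - here is an illustrative example. The class and method names are hypothetical, not the actual StableBit CloudDrive internals, and request coalescing is omitted for brevity.

```python
# Hypothetical sketch: parallel provider I/O with per-request retry.
from concurrent.futures import ThreadPoolExecutor
import time

class IOManager:
    def __init__(self, read_chunk, threads=4, retries=3):
        self.read_chunk = read_chunk  # provider read function
        self.pool = ThreadPoolExecutor(max_workers=threads)
        self.retries = retries

    def _with_retry(self, chunk_id):
        for attempt in range(self.retries):
            try:
                return self.read_chunk(chunk_id)
            except IOError:
                if attempt == self.retries - 1:
                    raise  # error reporting would kick in here
                time.sleep(0.01 * 2 ** attempt)  # brief backoff before retrying

    def download_all(self, chunk_ids):
        # Parallelize reads across worker threads, each with retry.
        futures = [self.pool.submit(self._with_retry, c) for c in chunk_ids]
        return [f.result() for f in futures]
```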
     
    Particularly, the I/O Manager is responsible for:
    - As an optimization, coalescing incoming provider read and write I/O requests into larger requests.
    - Parallelizing all provider read and write I/O requests using multiple threads.
    - Retrying failed provider I/O operations.
    - Error handling and error reporting logic.
     
    Chunks
     
    Now that I've described a little bit about the I/O manager in StableBit CloudDrive, let's talk chunks. StableBit CloudDrive doesn't inherently work with any types of chunks. They are simply the format in which data is stored by your provider of choice. They are an implementation that exists completely outside of the I/O manager, and provide some convenient functions that all chunked providers can use.
     
    How do Chunks Work?
     
    When a chunked cloud provider is first initialized, it is asked about its capabilities, such as whether it can perform partial reads, whether the remote server is performing proper multi-threaded transactional synchronization, etc... In other words, the chunking system needs to know how advanced the provider is, and based on those capabilities it will construct a custom chunk I/O processing pipeline for that particular provider.
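    For example, the partial-write translation for a less capable provider might look something like this sketch. The provider interface here is hypothetical, and it assumes whole-chunk reads and writes are always available:

```python
# Sketch: translate a partial chunk write into whole-chunk I/O when the
# provider can't do partial writes. The provider interface is made up.

def write_partial(provider, chunk_id, offset, data):
    if provider.supports_partial_writes:
        provider.write_range(chunk_id, offset, data)
        return
    # Whole read -> patch the affected bytes -> whole write.
    whole = bytearray(provider.read_chunk(chunk_id))
    whole[offset:offset + len(data)] = data
    provider.write_chunk(chunk_id, bytes(whole))
```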
     
    The chunk I/O pipeline provides automatic services for the provider such as:
    - Whole and partial caching of chunks for performance reasons.
    - Performing checksum insertion on write, and checksum verification on read.
    - Read or write (or both) transactional locking for cloud providers that require it (for example, never try to read chunk 458 when chunk 458 is being written to).
    - Translation of I/O that would end up being a partial chunk read / write request into a whole chunk read / write request for providers that require this. This is actually very complicated.
      - If a partial chunk needs to be read, and the provider doesn't support partial reads, the whole chunk is read (and possibly cached) and only the part needed is returned.
      - If a partial chunk needs to be written, and the provider doesn't support partial writes, then the whole chunk is downloaded (or retrieved from the cache), only the part that needs to be written to is updated, and the whole chunk is written back. If, while this is happening, another partial write request comes in for the same chunk (in parallel, on a different thread), and we're still in the process of reading that whole chunk, then we coalesce the [whole read -> partial write -> whole write] into [whole read -> multiple partial writes -> whole write]. This is purely done as an optimization and is also very complicated.
     
    And in the future the chunk I/O processing pipeline can be extended to support other services as the need arises.
     
    Large Chunk Support
     
    Speaking of extending the chunk I/O pipeline, that's exactly what happened recently with the addition of large chunk support (> 1 MB) for most cloud providers.
     
    Previously, most cloud providers were limited to a maximum chunk size of 1 MB. This limit was in place because:
    - Cloud drive read I/O requests, which can't be satisfied by the local cache, would generate provider read I/O that needed to be satisfied fairly quickly. For providers that didn't support partial reads, this meant that the entire chunk needed to be downloaded at all times, no matter how much data was being read.
    - Additionally, if checksumming was enabled (which would be typical), then by necessity, only whole chunks could be read and written.
     
    This had some disadvantages, mostly for users with fast broadband connections:
    - Writing a lot of data to the provider would generate a lot of upload requests very quickly (one request per 1 MB uploaded). This wasn't optimal because each request would add some overhead.
    - Generating a lot of upload requests very quickly was also an issue for some cloud providers that limit their users based on the number of requests per second, rather than the total bandwidth used. Using smaller chunks with a fast broadband connection and a lot of threads would generate a lot of requests per second.
     
    Now, with large chunk support (up to 100 MB per chunk in most cases), we don't have those disadvantages.
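    To put rough numbers on the request-count difference (back-of-the-envelope only: one upload request per chunk, ignoring retries and metadata):

```python
# Rough request counts for uploading 10 GiB of new data at two chunk sizes.

def upload_requests(total_mib, chunk_mib):
    # One upload request per chunk; a partial final chunk still costs one.
    return -(-total_mib // chunk_mib)  # ceiling division

small_chunks = upload_requests(10 * 1024, 1)    # 1 MB chunks
large_chunks = upload_requests(10 * 1024, 100)  # 100 MB chunks
print(small_chunks, large_chunks)  # 10240 vs 103 requests
```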
     
    What was changed to allow for large chunks?
    - In order to support the new large chunks, a provider has to support partial reads. That's because it's still very necessary to ensure that all cloud drive read I/O is serviced quickly.
    - Support for a new block-based checksumming algorithm was introduced into the chunk I/O pipeline. With this new algorithm it's no longer necessary to read or write whole chunks in order to get checksumming support. This was crucial because it is very important to verify that your data in the cloud is not corrupted, and turning off checksumming wasn't a very good option.
     
    Are there any disadvantages to large chunks?
    - If you're concerned about using the least possible amount of bandwidth (as opposed to making fewer provider calls per second), it may be advantageous to use smaller chunks. If you know for sure that you will be storing relatively small files (1-2 MB per file or less) and you will only be updating a few files at a time, there may be less overhead when you use smaller chunks. For providers that don't support partial writes (most cloud providers), larger chunks are more optimal if most of your files are > 1 MB, or if you will be updating a lot of smaller files all at the same time.
    - As far as checksumming goes, the new algorithm completely supersedes the old one and is enabled by default on all new cloud drives. It really has no disadvantages over the old algorithm. Older cloud drives will continue to use the old checksumming algorithm, which really only has the disadvantage of not supporting large chunks.
     
    Which providers currently support large chunks?
    I don't want to post a list here because it would become obsolete as we add more providers. When you create a new cloud drive, look under Advanced Settings. If you see a storage chunk size > 1 MB available, then that provider supports large chunks.
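    As a toy illustration of the block-based checksumming idea described above: each fixed-size block inside a chunk carries its own checksum, so a partial read only has to verify the blocks it actually touches instead of the whole chunk. The on-provider layout and checksum algorithm below are entirely made up for this example.

```python
# Made-up layout: [block | crc32][block | crc32]... within each chunk.
import struct
import zlib

BLOCK = 4  # tiny block size for illustration; real blocks would be far larger

def seal(chunk):
    """On write: append a CRC32 after every block."""
    out = bytearray()
    for i in range(0, len(chunk), BLOCK):
        block = chunk[i:i + BLOCK]
        out += block + struct.pack('<I', zlib.crc32(block))
    return bytes(out)

def read_block(sealed, index):
    """On read: fetch and verify one block, without the whole chunk."""
    stride = BLOCK + 4
    raw = sealed[index * stride:(index + 1) * stride]
    block, (crc,) = raw[:-4], struct.unpack('<I', raw[-4:])
    if zlib.crc32(block) != crc:
        raise IOError('checksum mismatch in block %d' % index)
    return block
```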
    Going Chunk-less in the Future?
     
    I should mention that StableBit CloudDrive doesn't actually require chunks. If it at all makes sense for a particular provider, it really doesn't have to store its data in chunks. For example, it's entirely possible that StableBit CloudDrive will have a provider that stores its data in a VHDX file. Sure, why not. For that matter, StableBit CloudDrive providers don't even need to store their data in files at all. I can imagine writing providers that can store their data on an email server or a NNTP server (a bit of a stretch, yes, but theoretically possible).
     
    In fact, the only thing that StableBit CloudDrive really needs is some system that can save data and later retrieve it (in a timely manner). In that sense, StableBit CloudDrive can be a general purpose drive virtualization solution.
  14. Like
    thnz got a reaction from Kasual in Amazon Cloud Drive, near future.?   
    https://forums.developer.amazon.com/forums/thread.jspa?forumID=73&threadID=9355
  15. Like
    thnz got a reaction from Christopher (Drashna) in I/O deadlock?   
    It took several attempts, but I finally reproduced it again after lowering the local cache size to 20MB (though that could just be coincidence). 'Drive tracing' seems to have turned itself off after the hard restart, but hopefully it caught enough to be helpful. I've uploaded it via the form on that log collection page.
     
    Quick summary of how I reproduced it:
    - New 1GB unencrypted drive on Dropbox with a 20MB cache
    - Copied a ~700MB file across
    - Hard reset after the file finished copying (i.e. disk activity in Resource Monitor had finished) but while it was still uploading (maybe 50MB into the upload)
    - File is now corrupt (has a different hash) after the drive recovers
     
    Just want to add that it's times like this (constant restarting) that you really appreciate having an SSD. Reboots so fast!
  16. Like
    thnz got a reaction from Christopher (Drashna) in I/O deadlock?   
    Disk grouping sounds ideal.
     
    Previously I found the i/o deadlock seemed to kick in after maybe 5mins or so, so having gone several hours without issue is certainly promising. Fingers crossed I don't wake up to a BSOD tomorrow morning! Will keep you guys updated.