Reputation Activity

  1. Like
    steffenmand got a reaction from Christopher (Drashna) in Moving the Cache?   
Just detach the drive and attach it on the new drive, though it would have to re-cache all the stuff.
  2. Like
    steffenmand got a reaction from Christopher (Drashna) in CloudDrive + GoogleDrive   
    You are using an old version!
    Follow this URL for the newest updates! The newest is .597 - LOTS of speed improvements since then!
    You can increase the minimum download size while creating/attaching to get it to download larger pieces and get throttled a lot less! I still get throttled a lot, but 100 MB chunks will improve that!
    Remember to make a new drive with the new version, since stuff changed.
    You can find their changelog here:
  3. Like
    steffenmand got a reaction from raidz in +1 for GoogleDrive for Work support   
    The speeds are good now for a single file, but when you prefetch two files at the same time, it fails to give both the same priority.
    In my opinion prefetching should work per file, so that a 1 GB prefetch would mean 2 GB when loading two different files. Currently it seems to be a total, which means one file could use 980 MB and the other 20 MB.
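    The difference the post is describing can be sketched as two budgeting strategies. This is a hypothetical illustration, not CloudDrive's actual code; all names and the budget size are made up:

    ```python
    # Hypothetical sketch: a shared prefetch budget vs. a per-file budget.
    # PREFETCH_BUDGET and both functions are illustrative, not CloudDrive's API.

    PREFETCH_BUDGET = 1024  # MB

    def shared_budget(files):
        """One global budget: files compete, so one stream can starve another."""
        granted = {}
        remaining = PREFETCH_BUDGET
        for name, demand_mb in files:
            granted[name] = min(demand_mb, remaining)
            remaining -= granted[name]
        return granted

    def per_file_budget(files):
        """Each file gets its own full budget, as the post suggests."""
        return {name: min(demand_mb, PREFETCH_BUDGET) for name, demand_mb in files}

    streams = [("movie_a.mkv", 980), ("movie_b.mkv", 500)]
    print(shared_budget(streams))    # second file is squeezed down to 44 MB
    print(per_file_budget(streams))  # both files get their full demand
    ```

    With a shared total, the first stream can consume nearly everything; with a per-file budget, both streams prefetch independently.
    
    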
  4. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    as a coder i know that what looks simple might often be complicated as hell lol!
    im sure you will get to it as fast as you can
  5. Like
    steffenmand reacted to Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Yes, and IIRC, it's the next issue to address.  But I'll mention it to Alex to get it expedited. 
  6. Like
    steffenmand reacted to Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Sorry, I've been feeling a bit under the weather, so I hadn't kept up with the thread as much as I would have liked to/should have. 
    I *have* already mentioned the partial chunk size issue to Alex and he does plan on looking into it and testing it more thoroughly.  And I have created an issue/bug for it explicitly here:
    For now, there isn't a good solution to this.  Going back to the smaller size will be more reliable until we're able to fix this issue.
    But that's not a great solution for the long term.
    As for the "User Rate Limit Exceeded" error, this is likely because of said issue, with it repeatedly calling the same files.  Basically, it's calling the same file too many times in a short period of time.  Even after it fails on some of them, it retries with an "exponential backoff", but that may not back off quickly enough and triggers this warning. 
    That said, Alex is still working on the "null chunks" issue (the one that was causing corruption before).  While blocking it from uploading the chunks is good, figuring out why it's trying to do so has been problematic. 
    Additionally, during this bug hunt, Alex has run into a number of other stability issues (especially with Box), and these issues unfortunately do get priority because they're corruption issues, where this is an API issue (and one that definitely needs more testing). 
    Again, sorry for the lapse in responding.  I will make a serious effort to not let it happen again!
    The 20MB limit was due to an API limit, so if the partial chunks are larger, then the whole size could be larger. 
    However, at best, we cannot have more than 20 chunks, to ensure that we don't run into that limit.  Which means new code to handle this, and testing to ensure it works properly.
    I've already discussed this directly with Alex, and ... well, I have a sticky note on my monitor about the entire issue. That way, it reminds me to bring it up to him every chance I get.
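    The retry behavior described above (back off exponentially after a rate-limit error) can be sketched like this. This is a generic illustration, not CloudDrive's implementation; the parameters and the use of `IOError` as a stand-in for a rate-limit error are assumptions:

    ```python
    # Minimal sketch of exponential backoff for retrying throttled API calls
    # (e.g. "User Rate Limit Exceeded"). Parameters are illustrative.
    import random
    import time

    def call_with_backoff(request, max_retries=5, base_delay=1.0):
        """Retry `request` with exponentially growing, jittered delays."""
        for attempt in range(max_retries):
            try:
                return request()
            except IOError:  # stand-in for a provider rate-limit error
                # base, 2x base, 4x base, ... plus jitter so retries don't align
                delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
                time.sleep(delay)
        raise RuntimeError("request failed after all retries")
    ```

    The point Christopher makes is that even with this pattern, a burst of requests against the same file can still trip the provider's limit before the delays grow large enough.
    
    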
  7. Like
    steffenmand reacted to triadcool in +1 for GoogleDrive for Work support   
    Christopher, is this issue being looked into?
  8. Like
    steffenmand reacted to raidz in +1 for GoogleDrive for Work support   
    I am getting the exact same issues, very annoying.
  9. Like
    steffenmand reacted to Christopher (Drashna) in External (Non Windows) access plus local cache questions   
    Unfortunately, Windows only for the foreseeable future. (though, you could run VMs)
    As for a web service, I believe that Alex (the Developer) wanted to do that eventually, but I'm not sure, and I'm not sure when we'd be able to get to that. 
    That said, something like a colocated server, or a VM (such as through Microsoft Azure) could be used to run StableBit CloudDrive, and you could then use a VPN or web server with WebDAV enabled to access the content remotely.  That may be a good option for you in the meanwhile, and may solve some of the other issues you've mentioned. 
    Though I absolutely understand that this really isn't a cheap solution. 
    As for when you wouldn't have internet access... that shouldn't be a huge issue.  Any new data would remain in the cache, awaiting to be uploaded as soon as you have internet access. 
    As for reads, this would be more problematic.  Since you couldn't download new data, that means that anything that is not cached would be completely inaccessible. 
    However, as you've mentioned, the cache can be very good about "fixing" this. By default, the software tries to pin the disk and NTFS metadata, and all of the directory entries on the file system. This should help prevent issues, but that also depends on the cache size.
    If you're going to be experiencing "network outages" a lot, then you will be much better off using as large a cache size as you can.  And if you could devote a drive to it, that may be the best option here (if this is for a laptop, and it has an optical drive, there are "trays" that let you replace the entire optical drive with an HDD caddy). 
    As for Amazon Cloud Drive, we haven't really tested it, but a number of users have reported that the internal beta builds have significantly improved the performance with Amazon Cloud Drive.  
    But we agree, we really do want it to work well and reliably with Amazon Cloud Drive, as it would definitely help us, but it would also help Amazon (it's in their best interest, as well as ours). 
  10. Like
    steffenmand reacted to steffenmand in +1 for GoogleDrive for Work support   
    Noticed that while prefetching with 20 MB chunks and a 20 MB minimum download, the prefetching seems to download the same chunks about 10 times - isn't this an error? It does it on all chunks - it seems like prefetching still thinks it has to download 2 MB chunks?
  11. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    I'm a happy man today and you know why :-D
    However I would like to request the return of 100 MB chunks when you use a minimum download above X!
    I'm seeing a lot of throttling because the chunks are too small, so I hope you will allow the larger chunks if your minimum download is larger as well :-)
    Happy to get 700 mbit now, but still room for improvement if throttling disappears :-D
  12. Like
    steffenmand reacted to Christopher (Drashna) in Multiple reads on the same offset   
    Honestly, I'm not sure here.  (this is definitely one of the much more technical aspects of the software, and I'd rather have a right answer for you, than a guess). 
    I've flagged the question for Alex (and it's marked as critical, so he should get to it soon).
    However, I suspect that this is normal and may be related to something the software is doing. But again, since I'm not sure ....
    As for the partial chunk size, I'll push it again.  
    Though, Alex has been running consistency checks on all of the providers, and has been looking into a potential corruption issue that Modplan has ran into. (any corruption issues always get top priority, because of their nature). 
  13. Like
    steffenmand reacted to Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Well, thank you for the kind words! And we're glad that you like StableBit CloudDrive!
    As for the erratic speeds, this may be related to the partial read stuff. Since we break up the larger chunks into smaller pieces and read just the partial chunks, this can impact performance in some cases (hence the desire for larger partial chunk sizes). 
    As for the IO errors, these may be normal, as they happen from time to time. More with some providers than others. But any error is automatically retried. 
    These two are related, so grouping.
    No, there isn't any way to pause or throttle the bandwidth properly right now. It's on the to-do list, but probably won't be until after the stable release is out. (including maybe scheduling).
    However, you can "pause" it by setting the upload and/or download threads to "0". (Disk Options -> Performance -> IO Performance).  This essentially pauses the transfer. 
    Well, missing chunks is a bad thing, in general. Since we're essentially storing the raw disk data on the drive, this would be like a piece of a physical disk "dying" and would cause data loss.
    However, like a physical disk, running a "CHKDSK" pass should "fix" a lot of disk issues. 
    Additionally, we do a lot to make sure that nothing happens to the data, at least from our end.  In fact, one of those options is "upload verification" (where we download the file, make sure it uploaded and uploaded properly, and re-upload it if something went wrong).  Some providers have this enabled by default (such as ... Amazon Cloud Drive, as it's necessary for this provider), but it's an option for any provider. 
    However, for each chunk (and the partial chunks), we do include checksums, to help detect errors. 
    But really, you should check out these links:
    These talk about the cache manager, and some of the details that you want to know (rather than me retyping it all here, and not doing as good of a job). 
    As for redundancy, no, there isn't any, but yes, you could use StableBit DrivePool for that. But just keep in mind, if you're using multiple drives from your google account, this will reduce the overall throughput to the provider. Though, from the sounds of it, that may not be an issue. 
    If checksumming is enabled, then yes, there is verification. This is enabled by default on most providers IIRC.
    As for partial chunk support for larger chunks, every MB of data has a checksum section, specifically to ensure that the data is intact. 
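    The per-megabyte checksumming described above can be sketched as follows. This is an illustrative model only - the real layout, hash algorithm, and unit size in CloudDrive are not specified here, and all names are made up:

    ```python
    # Hedged sketch: each 1 MB unit of a chunk carries its own checksum, so a
    # partial read can be verified without fetching the whole chunk.
    # Layout, hash choice, and function names are illustrative assumptions.
    import hashlib

    UNIT = 1024 * 1024  # assumed 1 MB checksum unit

    def checksum_units(chunk: bytes):
        """Compute one checksum per 1 MB unit of a chunk."""
        return [hashlib.sha256(chunk[i:i + UNIT]).hexdigest()
                for i in range(0, len(chunk), UNIT)]

    def verify_unit(data: bytes, index: int, checksums):
        """Verify a single downloaded unit against its stored checksum."""
        return hashlib.sha256(data).hexdigest() == checksums[index]

    chunk = bytes(3 * UNIT)        # a 3 MB chunk of zeroed data
    sums = checksum_units(chunk)
    assert verify_unit(chunk[UNIT:2 * UNIT], 1, sums)  # intact unit passes
    assert not verify_unit(b"corrupt", 1, sums)        # tampered data fails
    ```

    The design point is that verification granularity matches the partial-read granularity, so a damaged unit can be detected and re-fetched on its own.
    
    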
  14. Like
    steffenmand reacted to joss in +1 for GoogleDrive for Work support   
    Thank you soooo much for supporting Google Drive now and the fast progress! Since Bitcasa's change of mood we were looking for exactly a tool like CloudDrive, then found it here but weren't able to use it until now.
    And as we have been using DrivePool and your Scanner for years now, we are very, very happy to see this project in your hands, because you always deliver well-thought-out and stable stuff. I only hope you are not going to be bought by one of the big ones.
    Although I don't understand everything that's discussed here (i.e. what the increased partial reads are for - and why we would have to start over new then ... over my head in some cases), and we do not have such a great bandwidth, I've been testing a bit using the default settings and I'm highly satisfied - YEAH!
    The speed is a bit erratic and I/O-errors are popping up frequently but CloudDrive manages to upload things anyway. Perfect!
    I have some questions - not 100% Google-Drive-related, but I hope it's ok to place it here:
    Are there plans (or is there already a way) to pause the upload - either every upload activity or for a specific drive (so I could set one drive to priority)? Is there a way to throttle the upload? Maybe scheduled? (Our son is angry with us using all the bandwidth.)
    We are trying to use Plex with CloudDrive. It's working pretty well so far. But of course fast-forwarding is not covered by the prefetched data, so it takes a while to jump to a specific position in a video. Do you have any suggestions, e.g. for the cache or prefetching settings, to optimize this? Maybe different settings depending on the size (TV shows or movies)?
    I'm a bit concerned about data corruption - not of single chunks or files, but of some of the metadata CloudDrive needs to mount and recognize the drive. We are using duplicati, and one day an essential file was obviously missing or corrupted, so duplicati wasn't able to list anything or repair it, and the whole backup set was lost for us. The weeks of uploading, too. And as far as I know there is no option to use the content files somehow - and I assume this isn't different with CloudDrive, right?

    Do you have redundancy or something like that? Is it possible for me to backup this data?
    Google doesn't make it easy to replace files, because of their ID-based structure - just putting the same file with the same name in the same place doesn't mean, Google or a tool like duplicati (and CloudDrive?) recognizes it.

    Would it be wiser to build several smaller drives and use Drivepool additionally than creating one big one? What would you suggest? 
    How are the experiences with Google Drive (for work)? How big are the drives you created?

    (Of course we have several backups - but because of the limited bandwidth and the fact that we plan to let Google/CloudDrive be part of our backup strategy, I would like to fathom and minimize the risks.) And a last one: what about the verifying - is it actually working with the 10 MB chunks now? Or even bigger ones? I didn't get it. Sorry.  
  15. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    I'm still pushing for the update that gives increased partial reads! Hopefully you will get it running before you go out of beta - as it will require a new drive, I can't really upload stuff before then!
    A lot of great fixes these days though, so props for that!
  16. Like
    steffenmand reacted to Christopher (Drashna) in Setup Same CloudDrive on multiple Computers?   
    I think the OP doesn't want to use his upload speed to do this. 
    If you upload the content to a CloudDrive, you can detach (unmount) it from the local system and then mount it on the other system. That works fine. But as for sharing it between two systems? Not really.
    However, Plex does allow for multiple users (via plex accounts, IIRC). But it does consume your upstream bandwidth. 
    NetDrive does have a solution that allows you to access from multiple systems, but it's not a full drive, and it's not encrypted (IIRC).
    The difference is that StableBit CloudDrive is using the provider as a raw disk backend, basically. Meaning it stores raw drive data. And it caches new content locally before uploading it.  Because of this, there isn't a good solution to "share" (read) the drive in multiple locations, as the content may not be uploaded yet. This would cause corruption, in any scenario. 
    Maybe we could (later) add a "mount as read only" option, so that you can mount an unused disk as read only, and add a flag to it so it cannot be mounted as writable anywhere else. 
    This would allow you to do what you want, but it would render the drive un-updatable. Meaning no new content. 
  17. Like
    steffenmand reacted to Christopher (Drashna) in Google Drive: Cant Upload, 'The requested mime type change is forbidden'   
    Better yet:
    This should handle the error properly now (essentially by deleting and reuploading it automatically). 
  18. Like
    steffenmand reacted to Christopher (Drashna) in [Possible Bug] Label change on drive   
    I don't know if you saw, but it looks like Alex has added code to enable synchronization between the volume label (in the OS) and the label in the UI.
  19. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Of course, i was only joking ;-)
  20. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Tell him I'll give cake to the office if he picks it up fast! Then he will be the most popular guy that day, haha.
  21. Like
    steffenmand reacted to Christopher (Drashna) in Question about loading multiple files   
    I'm not really sure the point you're trying to make here. 
    But, either way, the default settings are what we've tried to optimize for "most users".  These should work for most people.
    In fact, the 1MB file size, or small checksum unit size for partial uploads/downloads, is specifically meant to minimize the latency for "disk access".
    And we do list a minimum requirement for the internet connection, as anything slower may cause issues with connectivity and latency. 
    And yeah, all the settings such as chunk size, (in the future) checksum size, and the like are all set up when creating the drive. There are a number of reasons for this, but as I said, a lot of this is very much like the advanced formatting feature, and partially the formatting options.  So, it's not something that can be changed after the fact. 
    The exception to this is that you can detach and reattach a drive and change the cache options. 
    However, you high bandwidth people may require different settings (at least to saturate your bandwidth).  And worst case, I'll see about creating a guide on the wiki for high bandwidth users. 
  22. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Thanks for the new update yesterday! Speed now goes to 60-120 mbit, making it a bit more usable! 
    I'm also almost always seeing all threads in use, where I only had 2-4 before!
    I don't know what your response times to Google are in the US, but I get 400ms -> 1850ms, so there is a wait time between each thread of up to almost three to four times the time it takes to download a chunk.
    Looking forward to the partial read increase, but until then this has made me happier for now.
    Oh and btw, .452 isn't available for download, however it is in the changes.txt - just if you forgot it
    *EDIT* I see you got it up as well with a new fix
  23. Like
    steffenmand reacted to Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Well, for the "checksum unit size" thing, we already have the request/issue for it, so no need to create a ticket.
    And specifically the "checksum unit size" is the size for the partial chunk access. So being able to configure that (which is most likely what we'd be doing) would likely help with what you're seeing. (the downside is that larger chunk sizes will create larger latencies for slower connections, obviously, and that's the primary concern here).  
    And as for the complaining: not a problem. That's WHY it's a beta - to find issues and flaws and aspects we can improve. So you're really helping us to create a better product.
    LOL, on the nose. It's never enough. Remember when you thought that 1TB was all you'd ever need, and *if* you'd ever be able to fill it?
    And while I'm not sure where you live, I know that the EU has some pretty fantastic speeds, and for fairly cheap.   Especially compared to the USA. 
  24. Like
    steffenmand got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    I sent in some logs, but rarely do a ticket on the side - the description usually tells the issue :-)
    First of all I have to compliment you guys for reacting so well to issues and requests from the community - other companies could learn a lot :-)
    I did use 20 threads as all threads finished in 00:00:00:25-40 - with them finishing in under half a second, I thought more could help speed it up! Of course, when I can utilize the speed on fewer threads, I will (read: larger reads ;-) hehe). Currently the threads just finish so fast that raw speed doesn't matter; it's the HTTP delay which determines the speed, because you have to wait some time between each thread. A 1 MB chunk will most likely not finish faster on 1 gbit than on 50 mbit because the HTTP request is the main delay, so we can never fully utilize the speed!
    Also I only use 10 upload threads now because they use my speed just fine as well :-)
    I really look forward to that feature request being implemented, and will make some drives ready with 100 MB chunks before updating! 
    Hope that the feature request will result in the return of 100 MB chunks with a higher minimum partial read (10 MB?), and just a note that it is for high speed connections! 
    Again thanks for being so cool with our constant complaining ;-) your product is awesome and your constant tweaking due to our complaints just makes it even better, so thanks :-) 

    Oh and btw, 35 mbit is considered slow in my world with a 1 gbit line :-D That's what happens - you are never content with what you have, you always want the best you can get ;-)
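    The argument above - that per-request HTTP latency, not line speed, caps throughput with small chunks - can be checked with some back-of-the-envelope arithmetic. This is a simplified model with illustrative numbers, not a measurement of CloudDrive itself:

    ```python
    # Simplified model: each chunk download pays a fixed round-trip time (RTT)
    # before any bytes flow, so small chunks are latency-bound. Illustrative only.

    def effective_mbit(chunk_mb, line_mbit, rtt_s, threads=1):
        """Achieved throughput when every chunk costs one RTT plus transfer time."""
        transfer_s = (chunk_mb * 8) / line_mbit        # time to move the bytes
        per_chunk_s = rtt_s + transfer_s               # latency + transfer
        return threads * (chunk_mb * 8) / per_chunk_s  # Mbit/s actually achieved

    # 1 MB chunks with a 400 ms RTT: a 1 gbit line barely beats a 50 mbit line.
    print(round(effective_mbit(1, 1000, 0.4), 1))   # ~19.6 Mbit/s on 1 gbit
    print(round(effective_mbit(1, 50, 0.4), 1))     # ~14.3 Mbit/s on 50 mbit
    # 20 MB chunks amortize the round trip far better on the fast line:
    print(round(effective_mbit(20, 1000, 0.4), 1))  # ~285.7 Mbit/s on 1 gbit
    ```

    With 1 MB chunks the two lines are within a factor of 1.4 of each other, matching the claim that the HTTP delay is the bottleneck; larger chunks (or more threads) are what let a fast line pull ahead.
    
    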
  25. Like
    steffenmand reacted to Alex in Question about loading multiple files   
    Technically speaking, StableBit CloudDrive doesn't really deal with files directly. It actually emulates a virtual disk for the Operating System, and other applications or the Operating System can then use that disk for whatever purpose that suits them.
    Normally, a file system kernel driver (such as NTFS) will be mounted on the disk and will provide file based access. So when you say that you're accessing 2 files on the cloud drive, you're actually dealing with the file system (typically NTFS). The file system then translates your file-based requests (e.g. read 10248 bytes at offset 0 of file \Test.bin) into disk-based requests (e.g. read 10248 bytes at position 12,447,510 on disk #4).
    But you may ask, well, eventually these file-based requests must be scheduled by something, so who prioritizes which requests get serviced first?
    Yes, and here's how that works:
    The I/O pipeline in the kernel is inherently asynchronous. In other words, multiple I/O requests can be "in-flight" at the same time. Scheduling of disk-based I/O requests occurs right before our virtual disk, by the NT disk subsystem (multiple drivers involved). It determines which requests get serviced first by using the concept of I/O priorities. To read more about this topic (probably much more than you'd ever want to know) see: 
    Skip to the I/O Prioritization section for the relevant information. You can also read about the concept of bandwidth reservation which deals with ensuring smooth media playback.
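    The two-layer translation Alex describes - file system turning file-relative reads into disk-relative reads, and the virtual disk mapping those onto provider chunks - can be sketched like this. The chunk size, function names, and file placement are all illustrative assumptions, not CloudDrive internals:

    ```python
    # Illustrative sketch of the translation described above. The file system
    # layer maps a file offset to an absolute disk offset; the virtual disk
    # layer maps that onto a provider chunk. All names and sizes are made up.

    CHUNK_SIZE = 20 * 1024 * 1024  # assumed 20 MB provider chunks

    def file_to_disk(read_offset, file_start_on_disk):
        """File system layer: file-relative offset -> absolute disk offset."""
        return file_start_on_disk + read_offset

    def disk_to_chunk(disk_offset):
        """Virtual disk layer: disk offset -> (chunk index, offset in chunk)."""
        return divmod(disk_offset, CHUNK_SIZE)

    # "read 10248 bytes at offset 0 of \Test.bin", where the file's data
    # happens to start at disk position 12,447,510:
    disk_off = file_to_disk(0, 12_447_510)
    chunk_index, offset_in_chunk = disk_to_chunk(disk_off)
    print(chunk_index, offset_in_chunk)  # chunk 0, offset 12,447,510 within it
    ```

    Neither layer knows about the other's units: the file system only sees a disk, and the virtual disk only sees block offsets, which is why CloudDrive "doesn't really deal with files directly".
    
    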