
Posts posted by steffenmand

  1. Me again (can't remember the password for my other account; it's saved on another PC).

     

    Here is a drive created on .763 and being used on .763, set up with 20 MB chunks. Note that all reads are at 1 MB while all writes are 20 MB.

     

    http://i.imgur.com/XbCktRC.png

     

    And here is the current throughput in the UI:

     

    http://i.imgur.com/ruobEkh.png

     

    463 was actually better because the 

    I think you forgot to change the minimal download while creating/attaching. You can't change it before the stored chunk size is increased, and you most likely left it at 1 MB by mistake, as that is the default. It's one of the last values you can change at the bottom - but remember to change it last :-) The minimal download works just fine; you just set it up wrong (see the quick arithmetic below).
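
    A quick back-of-the-envelope sketch of the read amplification this causes (Python; numbers are from this thread, the rest is just illustration, not CloudDrive internals):

        # Why leaving the minimal download at the 1 MB default hurts.
        chunk_mb = 20              # stored chunk size used in this thread
        minimal_download_mb = 1    # the default that was left unchanged
        requests_per_chunk = chunk_mb / minimal_download_mb
        print(requests_per_chunk)  # 20 API requests to read one chunk instead of 1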

  2. While it's writing in 10 or 20 MB chunks, the service is currently bugged and only reading in 1 MB chunks. This causes a number of issues, from a large number of API hits to slow download speeds, since a large proportion of the time is spent making the connection and requesting the data (I have a server on a 1 Gbps line currently reading Google Drive at 1.68 Mbps).

     

    This really needs to be fixed. I would also like to see a prefetch option for fast lines where the prefetch brings the whole chunk into the cache even if only part of it is needed.

     

    (as I wrote this post the drive errored out and dismounted :))

    It is not bugged. It is because you made the drive with an old version of CloudDrive. Unfortunately, the increased prefetch amount was only added in later versions, so the drive needs to be remade to utilize this feature. I've read 20 MB chunks with prefetching with no issue at all since version .463 or thereabouts. Recreate the drive using the newest version, found here:

    http://dl.covecube.com/CloudDriveWindows/beta/download/

  3. I see... But does that mean that the speeds I am seeing now, roughly 3-4 Mbps, are what I should expect for downloads?

     

    That's gonna take a long time to get the data back from the cloud then, seeing as I have more than 2 TB backed up already.

    If you are copying from the drive, you should be reading fast enough that it kicks in with multiple threads. Are you on the latest version? And did you try rebooting and detaching/reattaching?

  4. Thanks, but I have been tinkering with the I/O Performance settings for a long time. No matter what I try, it only does 1 read thread at a time.

     

    I usually monitor my upload/download in the Technical Details; it's very helpful for understanding the underlying processes.

    When I upload it looks like it should (constantly writing according to my I/O Performance setting and then reading each chunk for verification).

    But when I download, it only ever shows 1 read thread, even though my setting is currently at 6 download threads.

     

    Could it be some setting in Advanced Settings I need to change?

    After the initial prefetch, it will only grab a new chunk as you read one - so after the initial prefetch you will most likely always see just one thread :) (a rough sketch of that behaviour follows below)
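
    A minimal Python sketch of that behaviour (my own model of what's described above, not CloudDrive's actual code): the initial prefetch fills the cache with several chunks at once, after which each sequential read only needs to fetch one new chunk - hence the single visible thread.

        from collections import OrderedDict

        CHUNK_MB = 20
        PREFETCH_CHUNKS = 5        # hypothetical initial prefetch window

        cache = OrderedDict()      # chunk index -> data

        def fetch(index):
            # stands in for one download thread pulling one chunk
            cache[index] = f"<{CHUNK_MB} MB of chunk {index}>"

        def read(index):
            if not cache:
                # first read: initial prefetch, several threads at once
                for i in range(index, index + PREFETCH_CHUNKS):
                    fetch(i)
            elif index + PREFETCH_CHUNKS - 1 not in cache:
                # steady state: exactly one fetch (one thread) per chunk read
                fetch(index + PREFETCH_CHUNKS - 1)
            return cache[index]

        for i in range(10):        # a sequential read pattern
            read(i)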

  5.  

    Nope, accessing gdrive by other means just works without any problems - i.e. I am currently trying to implement a sort of shadow drive for MKV metadata which sits between the CloudDrive volumes and Plex. Using the gdrive API (of course with my own developer keys) there is no heavy throttling noticeable: uploading and downloading at around 750 Mbit/s using a few threaded connections is no problem at all, whereas CloudDrive maxes out at around 80-100 Mbit/s for a few seconds, then slowly drops to around 0, then goes back up to around 100, and so forth (the average speed per 5 minutes is well below 40 Mbit). Today, and as long as I leave uploading disabled, the CloudDrive connection seems to be working okay (for downloads) - I am pretty sure you will get it sorted sometime in the future  :)

     

    Experiencing the same.

     

    This is my bottleneck:

     

    http://imgur.com/UlWpiw9

     


     

    HUGE HTTP delays - only happening through StableBit CloudDrive.

     

    1 Gbit full duplex line; going to Google directly I can get files at full speed with no delay (a minimal latency-measurement sketch follows below).
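
    A minimal sketch of how to separate the HTTP wait from the actual transfer (Python; the URL is a placeholder - point it at any large file you can fetch):

        import time
        import urllib.request

        URL = "https://example.com/some-large-file"   # placeholder

        start = time.monotonic()
        with urllib.request.urlopen(URL) as resp:     # returns once headers arrive,
            first_byte = time.monotonic()             # so this captures the HTTP delay
            resp.read()
            done = time.monotonic()

        print(f"time to first byte: {first_byte - start:.2f} s")
        print(f"transfer time:      {done - first_byte:.2f} s")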

  6. The high memory usage is due to NTFS storing metadata in memory about each file. Because of the high volume of files created by the chunk system, it will use more and more memory over time as files get indexed. It is down to the way NTFS works, not something CloudDrive does.

     

    Try adding a couple of million files to your own drive and you will see the same result over time (a rough estimate follows below).
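
    As a rough illustration (Python; assuming the NTFS default of 1 KB per MFT file record, which Windows caches in memory as files are touched):

        # Rough estimate of cached NTFS metadata for a chunked cloud drive.
        MFT_RECORD_BYTES = 1024          # NTFS default file record size
        files = 2_000_000                # "a couple of million files", as above
        print(f"~{files * MFT_RECORD_BYTES / 1024**3:.1f} GB of MFT records")  # ~1.9 GB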

  7. Trust me... getting them to rush a release won't make things any better :)

    But thank god they prefer to keep it in beta until the product really is stable enough :)

     

    However, I must say that I have had data in the cloud with StableBit for more than a year now and I haven't lost any of it yet :)

  8. Are there any plans to actually address this issue? Your "fix" has done nothing; it's just been swept under the rug.

     

    I'm a new user and would love to purchase a license; I thought I had found what I'd been looking for all this time. I started using the software two weeks ago and now feel like the information I've managed to upload to my Google Drive is somewhat useless. As a previous user stated, updating to version 730 made it even worse, so I downgraded back to 726, which seemed to be causing fewer issues.

     

    I'm getting constant dismounts, and sometimes remounting will take 30 minutes or longer to finally connect. The software is borderline unusable at this point. Not to mention that I'm uploading this data at 1/7 of my available bandwidth, and it's not about theoretical speeds or an Mb-to-MB conversion, since I can get those speeds through other means.

     

    This seems like a major issue, and it doesn't seem that there's been much of a response from your development team, at least not publicly.

     

    Is there any workaround or improvement planned in the immediate future? If we shouldn't continue to upload data via this avenue, I think you should tell us that as users.

    Just disable the auto-dismount option in the advanced settings - that did it for me :-)

  9. LOL. Mine is 100% green too. Solar power. :)

     

    Also, between the AC and ... datacenter, the solar has already paid for itself. Which is also nice. :)

    (If you notice, I live in San Diego, where it's always sunny and warm... the joke that it's hot enough to fry an egg on the sidewalk during Christmas isn't far from the truth.)

     

    But nice internet speeds for "dirt cheap" and uncapped are a dream for me. :(

    Especially as Google Fiber is probably not going to be rolled out anymore.

     

    I bet I could barely power my bed lamp with solar power here :D Guess that is why we went with wind power - it makes a lot more sense here with our shitty weather :)

     

    I've seen those videos of people barbecuing in the sun and using their car as an oven for cookies - I hope to get to see San Diego within the next few years myself. I know the US has a whole different idea of what a king-size meal is compared to here, so pretty much food heaven!

  10. Yeah, but IIRC, the electricity prices for you are ... pretty high (at least compared to the US). 

     

    In addition to the taxes. :)

     

     

     

    Though, seriously, I'd kill for cheap symmetrical gigabit internet speeds. 

    But at least it is green energy :-D

     

    But yeah, nice internet speeds are nice - and they are cheap and uncapped.

  11. Well, that would be ideal, yes. But "in the meanwhile", it's a solution that would work.

     

     

     

     

    I don't think so. Other solutions are using encryption as well (such as acd_cli/rclone/etc).   

     

    My guess is that the API usage was not "within desired parameters" and Duplicati didn't want to (or couldn't significantly) change to accommodate them. 

     

    But that's only a guess. 

     

    Let's hope :)

  12. Aarhus, Denmark here. 1 Gbit/1 Gbit.

     

    Getting ~300-400 Mbit upload and ~200-300 Mbit download currently with Google Drive at 20 MB chunks. Previously we had 100 MB chunks for a very short time, when I could utilize ~950 Mbit upload and ~600-700 Mbit download (miss that ;) )

    Amazon is giving me similar results.

     

    The current speed issues are caused by the HTTP response time, which is often between 5,000 and 15,000 ms. So even though the actual transfer completes in a fraction of a second, the HTTP response wait makes it look like a much slower transfer (see the arithmetic below).

     

    "User Rate Exceeded" is also quite a common error in my logs.
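
    The arithmetic behind that (Python; the chunk size and response times are the numbers from this post):

        # Effective per-thread throughput = chunk_size / (HTTP wait + transfer time).
        CHUNK_MB = 20
        LINE_MBIT = 1000                          # 1 Gbit line

        transfer_s = CHUNK_MB * 8 / LINE_MBIT     # 0.16 s to actually move the bytes
        for wait_s in (5, 15):                    # observed HTTP response times
            mbit = CHUNK_MB * 8 / (wait_s + transfer_s)
            print(f"{wait_s:>2} s wait -> {mbit:.1f} Mbit/s per thread")
        # ->  5 s wait -> 31.0 Mbit/s per thread
        # -> 15 s wait -> 10.6 Mbit/s per thread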

  13. They do both.  And the limit is decently high IIRC.  

     

    And they don't punish, not really.  As I said, we can appeal/apply/whatever term to increase the rate limit, which I believe Alex plans on doing. 

     

     

     

    I'm pretty sure the plan is both, actually. 

     

    Fewer API calls means less overhead, so it's a win-win. 

     

    It really sounds great - I'm pretty sure all the high-bandwidth users are hitting their limits :) Using it for cold storage, I would only benefit from large chunks, as I would rarely need to do much more than open a folder.

     

    Really looking forward to these things getting in :) - the rate exceeded issue also seems to have gotten better.

  14. Since I am probably going to post this a lot: 

    http://community.covecube.com/index.php?/topic/2228-google-drive-rate-limits-and-threadingexp-backoff/

     

    Basically, download 1.0.0.725 for now. It should fix/help the issue. 

     

    But we either need to overhaul the provider, or appeal to Google for a rate limit increase.

    There is a "hard" limit to the number of API calls made "per app", and we're hitting that. Either that limit needs to be increased, we need to optimize the code to reduce the number of API calls, or both.

     

     

    WTH, a limit per app and not per user?

     

    So they wanna punish an app for getting popular?

     

    Regarding lowering API calls, I say bigger chunks :-D (a minimal backoff sketch follows below)
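
    For reference, the exponential backoff Google recommends for "User Rate Exceeded" (403) responses looks roughly like this (Python sketch; do_request and RateLimitError are placeholders for whatever actually hits the Drive API):

        import random
        import time

        class RateLimitError(Exception):
            """Stand-in for a 403 'User Rate Exceeded' from the Drive API."""

        def with_backoff(do_request, max_retries=6):
            for attempt in range(max_retries):
                try:
                    return do_request()
                except RateLimitError:
                    # wait 2^attempt seconds plus random jitter, then retry
                    time.sleep(2 ** attempt + random.random())
            raise RuntimeError("still rate limited after retries")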

  15. I am running on a physical Win10 box. I worked around whatever this is by removing the "drives" from the Google account using CloudDrive on a different computer. Then I was able to create a working disk by being sure to check the format-and-mount checkbox under Advanced when creating the drive. I also did not use the full disk encryption (though I really do not know what that means, since it encrypts stuff anyway).

     

    It does really basic encryption by default, to ensure Google and Amazon don't index the files as video/pictures/whatever. That encryption is easily cracked. Full Disk Encryption is proper encryption with a good key involved (a small illustration follows below).
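
    To illustrate the difference, here is a Python sketch of what "proper encryption with a good key" means in general - not CloudDrive's actual scheme:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

        key = AESGCM.generate_key(bit_length=256)   # the "good key" - keep it safe
        nonce = os.urandom(12)                      # must be unique per chunk

        chunk = b"drive data would go here"
        ciphertext = AESGCM(key).encrypt(nonce, chunk, None)   # provider sees only this
        assert AESGCM(key).decrypt(nonce, ciphertext, None) == chunk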

  16. Took me 5 reboots and 3 hours to mount my drives :( When I finally got them mounted I had 83 red errors at the top.

     

    Even after they are mounted, I'm getting Rate Limit Exceeded like every 20 seconds.
