
+1 for Google Drive for Work support


Reptile

Question

Google Drive for Work means unlimited storage for about 40 dollars a month. Even normal Google Drive accounts could be pooled together with DrivePool, and nothing stops you from having multiple Google accounts, right?

Furthermore, Google has amazing speed. I get around 220 Mbit/s; on gigabit fiber, Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.

 

Fast, affordable, unlimited.

Is there already a beta release supporting this provider?

 

Yours sincerely 

 

Reptile

 

 

Edit:

Preliminary Google Drive support added. 

Download available here:

http://dl.covecube.com/CloudDriveWindows/beta/download/

Note: these are internal beta builds and may not be stable. Try at your own risk.

 

Google Drive support starts with build 1.0.0.408 (currently the recommended build for Google Drive).

Edited by Christopher (Drashna): Google Drive info

Recommended Posts


Noticed that while prefetching with 20 MB chunks and a 20 MB minimum download, prefetching seems to download the same chunks about 10 times - isn't this an error? It does it on all chunks - it seems like prefetching still thinks it has to download 2 MB chunks?

That *may* be normal. Is this at the same time, or one after the other?

 

If it's one after the other, the checksums may be failing and the chunk is being redownloaded, which would be 100% normal.

 

Otherwise, enable logging and reproduce the issue, please.

http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection
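For context, the sequential redownload pattern described above is ordinary integrity checking. A minimal sketch of the idea in Python; `fetch`, the retry count, and the use of SHA-256 are placeholder assumptions rather than CloudDrive's actual internals:

```python
import hashlib

def download_verified(fetch, expected_sha256, max_attempts=3):
    """Fetch a chunk, redownloading it if its checksum doesn't match."""
    for _ in range(max_attempts):
        data = fetch()  # placeholder for the real provider download call
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data
    raise IOError("chunk failed checksum verification after retries")
```

Seen from the outside, each failed verification looks like the same chunk being downloaded again, one attempt after the other.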

 

 

Found a temporary workaround for 100 MB chunks! Create a drive on an old version, then detach it and mount it on a new version - then I can even get my 100 MB minimum download :-) Almost constant 1 Gbit speed! Now the prefetching just needs to not download the same file 10 times in a row :-)

 

I wouldn't recommend it; the older version doesn't support the correct partial chunk size, meaning that you may run into the download quota issue with the drive.

 

If/when we re-add the 100 MB chunks, at best we'll have to limit the minimum partial chunk size based on the chunk size, so that we don't run into this issue.


As you can see above, it works on the same chunk several times at the same time. Will upload a log tomorrow.


Downloaded the new update 1.0.0.537

 

I have noticed that I am getting a lot of I/O errors.

 

I have tried different settings, going from the 20 download and upload threads I had originally all the way back to the default of 2 threads each, and I'm still getting the errors. Also, I am on a gigabit connection and only seeing 50 to 100 Mbps, when before I was easily doing 300 to 600 Mbps.

 

https://drive.google.com/file/d/0B55xGiCB0c8ndEJnbzNRM19HbG8/view

 

I have also uploaded the logs; hopefully that will help you guys out. The image shows the error occurring twice in this instance, but I have been seeing the error count go into the hundreds (300 to 400).

 

Normal Settings:

 

20 Download Threads

20 Upload Threads

 

Enable prefetching

Prefetch trigger - 2 MB

Prefetch Forward - 40 MB

Prefetch Window - 3600 seconds

 

Testing Settings (Still with errors)

 

2 Download Threads

2 Upload Threads

 

Enable prefetching

Prefetch trigger - 1 MB

Prefetch Forward - 10 MB

Prefetch Window - 30 seconds

 

I never see this issue, and I get around 700 Mbit up and down with 20 MB chunks. With 100 MB chunks I can max my connection at around 1,000 Mbit.

 

However, the prefetch bug means that it fetches the same chunks many times over, so the speed isn't actually put to good use yet - but I'm sure that will be fixed soon :)


Uploaded my logs showing the prefetching of the same chunk 20 times at the same time. I'm running the newest version (.539), and the drive was made on .537.

 

Maybe it still believes it is getting 1-2 MB parts?

 

20 MB chunks

20 MB minimum download

20 TB drive

 

Also noticed that prefetching kicks in during attach; shouldn't this be disabled until the drive has been attached and its data is readable?

 

The current result is that all the threads are downloading the same chunk, leaving only one unique chunk in flight for every 20 threads.

 

Hopefully you will also notice in the logs how often I am throttled, and bring the 100 MB chunks back :P

 

Besides this issue it seems to work, but of course the real speed is nowhere near what it reports, since it is downloading the same data over and over :)
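For what it's worth, the standard cure for this pattern is to coalesce concurrent requests for the same chunk, so that only one download is in flight and every other thread waits for its result. A minimal sketch of the idea (my own illustration, not CloudDrive's actual code; `download_fn` is a placeholder for the real provider call):

```python
import threading
from concurrent.futures import Future

class ChunkFetcher:
    """Coalesce concurrent requests for the same chunk so that only one
    download per chunk ID is in flight; other threads share its result."""

    def __init__(self, download_fn):
        self._download_fn = download_fn  # placeholder for the real provider call
        self._lock = threading.Lock()
        self._in_flight = {}             # chunk_id -> Future

    def fetch(self, chunk_id):
        with self._lock:
            future = self._in_flight.get(chunk_id)
            if future is not None:
                owner = False            # another thread is already downloading it
            else:
                future = Future()
                self._in_flight[chunk_id] = future
                owner = True
        if owner:
            try:
                future.set_result(self._download_fn(chunk_id))
            except Exception as exc:
                future.set_exception(exc)
            finally:
                with self._lock:
                    del self._in_flight[chunk_id]
        return future.result()           # waiters block until the single download finishes
```

With something like this in place, 20 prefetch threads asking for the same chunk would trigger one download instead of 20.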


Still having huge issues with Google Drive and CloudDrive. As of the latest update, it is not even uploading the data now; it's been stuck in an uploading state for over 14 hours and still shows the same amount of data to be uploaded. I've included some pics of the errors, and the log file has been uploaded. Download threads are currently set to 3 and upload threads to 10; I lowered them down to 2 but still had the same result.

 

 

 

Screenshots: bvBayIR.png, bUjwRKI.png

 

I also noticed I had a lot of these in the log files. Of course, these are in the log file that was uploaded =)

 

Warning 0 [ApiGoogleDrive:17] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded 2016-04-06 04:02:03Z 79303942549

Warning 0 [ApiHttp:17] HTTP protocol exception (Code=ServiceUnavailable). 2016-04-06 04:02:04Z 79304921562

Warning 0 [ApiHttp] Server is throttling us, waiting 1,831ms and retrying. 2016-04-06 04:02:04Z 79304932756

Warning 0 [ioManager:3] Thread abort performing I/O operation on provider. 2016-04-07 03:57:40Z 13942306952

 

Not sure if those are the reason for the issues I am having.

 

EDIT: I updated to .547 and the uploading has started again.


I am getting the exact same issues; very annoying.


There does seem to be an issue with the minimum chunk size.

1. I started a file transfer from StableBit CloudDrive to a local disk on my server with 20 MB chunks and no minimum chunk size. StableBit was using all 15 threads that I assigned and said the file was downloading at roughly 60 Mb/s. I monitored the actual speed of the file transfer, and it was downloading at an average of 6 MB/s, which is about right.

2. I then ran the same exact test, but this time with a 20 MB minimum download alongside the 20 MB chunks. StableBit was using all 15 threads and reported that it was downloading the file at 400 Mb/s. I monitored the actual speed of the file transfer, and it was downloading at the same 6 MB/s as before I enabled the minimum chunk size, even though StableBit thought it was downloading faster.

You can easily test this yourself by following the same steps I took above.

Note: I monitored the transfer speed with both Ultracopier and Windows' built-in file copy. The transfers were to a very fast SSD.
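If you want a second opinion on the speed that doesn't depend on any particular copy tool, a quick sketch like the following (the path is a placeholder) reads a file sequentially and reports the sustained rate:

```python
import time

def measure_read_speed(path, block_size=1024 * 1024):
    """Read a file sequentially and report sustained throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            total += len(block)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against a zero interval
    mb = total / (1024 * 1024)
    print(f"{mb:.1f} MB in {elapsed:.1f} s = {mb / elapsed:.1f} MB/s")

# e.g. measure_read_speed(r"X:\bigfile.bin")  # X: = the CloudDrive volume (placeholder)
```

(A repeat run may be served from the local cache, so the first pass is the meaningful one.)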



I am still getting hit with 

 

CloudDrive.Service.exe Warning 0 [ApiGoogleDrive:45] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded

 

and the upload freezes. I have tried reauthorizing, but no luck; I also can't dismount the drive because of the upload queue.


Seeing the same thing. Turned it off for now until it is fixed.


Recently I have been getting periods of terrible performance with Google Drive.

 

Each chunk goes to 100% completion in less than a second and then just sits waiting while its duration counter keeps climbing.

 

I can see in the server log that Google is throwing an internalError, which seems to cause all of this. Tonight it is pretty much constantly failing :-(

 

I also saw that I was getting 4,000-5,000 ms response times while downloading chunks from Google - anyone know if they are having issues? There seem to be huge delays in Google's responses.


There is definitely something going on. I am still getting the [ApiGoogleDrive:39] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded, and I'm also getting "server is throttling us" errors.

 

It's been 2 weeks since I've seen a post from Christopher in this thread; hopefully they are just busy trying to figure out the issue.


Kraevin,

 

Do you have more than one StableBit CloudDrive on Google running at a time? I believe there is a 20-thread limit for Google as a provider, and if you have more than one CloudDrive running with each drive set to use more than 10 threads, then you would get the userRateLimitExceeded error.

 

I would also like to hear from Christopher to address some of these issues.


Google is limiting you for making too many API calls per second; it's not CloudDrive, and there is no way for them to get around this. You could raise your chunk size so that you make fewer API calls per second. Google does not seem to care how much bandwidth you are pushing/pulling, just how many API calls you make, and with larger chunks you will obviously make fewer API calls to upload/download the same amount of data.
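The arithmetic behind that is simple. A rough sketch (the throughput figure is taken from the numbers reported earlier in the thread; none of this reflects Google's documented limits):

```python
def api_calls_per_second(throughput_mbit, chunk_mb):
    """Requests per second needed to sustain a given throughput when
    every request moves one full chunk."""
    return (throughput_mbit / 8) / chunk_mb  # Mbit/s -> MB/s, then / MB per call

for chunk_mb in (1, 20, 100):
    calls = api_calls_per_second(700, chunk_mb)  # 700 Mbit/s, as reported above
    print(f"{chunk_mb:>3} MB chunks: ~{calls:.1f} API calls/s")
# 1 MB chunks: ~87.5 calls/s; 20 MB: ~4.4; 100 MB: ~0.9
```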


And that is why I'm hoping for the return of 100 MB chunks :-) 20 MB simply finishes too fast and causes throttling! They should just add a minimum download size to it so we don't get the download quota exceeded.


I am only using one CloudDrive. I will try raising the chunk size and see if it helps. Thanks =)


Christopher, is this issue being looked into?

 

 

I'm sad that this thread seems to be ignored... I would love a response even though it might not be the news we want! Bad news is better than no news...

 

 

 

Sorry, I've been feeling a bit under the weather, so I hadn't kept up with the thread as much as I would have liked to/should have. 

 

 

I *have* already mentioned the partial chunk size issue to Alex and he does plan on looking into it and testing it more thoroughly.  And I have created an issue/bug for it explicitly here:

https://stablebit.com/Admin/IssueAnalysis/26012

 

For now, there isn't a good solution to this. Going back to the smaller size will be more reliable until we're able to fix this issue, but that's not a great solution for the long term.

 

 

As for the "user Rate Limit exceeded", this is likely because of the said issue, with it repeatedly calling the same files.  Basically, it's calling the same file too many times in a short period of time.  Even after it fails on some of them, it retries, but with the "exponential backup" but that may not be quick enough and triggers this warning. 

 

 

That said, Alex is still working on the "null chunks" issue (the one that was causing corruption before). While blocking it from uploading the chunks is good, figuring out why it's trying to do so has been problematic.

Additionally, during this bug hunt, Alex has run into a number of other stability issues (especially with Box), and those unfortunately get priority because they're corruption issues, whereas this is an API issue (and one that definitely needs more testing).

 

 

 

Again, sorry for the lapse in responding.  I will make a serious effort to not let it happen again!

And that is why I'm hoping for the return of 100 MB chunks :-) 20 MB simply finishes too fast and causes throttling! They should just add a minimum download size to it so we don't get the download quota exceeded.

 

The 20 MB limit was due to an API limit, so if the partial chunks are larger, then the whole chunk size could be larger.

 

However, at best, we cannot have more than 20 chunks, to ensure that we don't run into that limit, which means new code to handle this and testing to ensure it works properly.
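If I'm reading that limit correctly (my arithmetic, not an official spec), the minimum partial size simply has to scale with the chunk size:

```python
import math

def min_partial_mb(chunk_mb, max_parts=20):
    """Smallest partial download that keeps a chunk within `max_parts` pieces."""
    return math.ceil(chunk_mb / max_parts)

print(min_partial_mb(20))   # 1  -> 1 MB partials are fine for 20 MB chunks
print(min_partial_mb(100))  # 5  -> partials under 5 MB would split a 100 MB chunk into >20 parts
```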

 

I've already discussed this directly with Alex, and... well, I have a sticky note on my monitor about the entire issue. That way, it reminds me to bring it up to him every chance I get. ;)


I never see this issue, and I get around 700 Mbit up and down with 20 MB chunks. With 100 MB chunks I can max my connection at around 1,000 Mbit.

 

However, the prefetch bug means that it fetches the same chunks many times over, so the speed isn't actually put to good use yet - but I'm sure that will be fixed soon :)

 

Is this still being looked into? It's happening to me all the time - lots of wasted bandwidth with no real benefit.
