
+1 for Google Drive for Work support


Reptile

Question

Google Drive for Work means unlimited storage for about 40 dollars a month. And even normal Google Drive accounts could be pooled together with DrivePool. And nothing stops you from having multiple Google accounts, right?

Furthermore, Google has amazing speed; I get around 220 Mbit/s. Yes, on gigabit fiber Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.

 

Fast, affordable, unlimited.

Is there a beta release supporting this provider already?

 

Yours sincerely 

 

Reptile

 

 

Edit:

Preliminary Google Drive support added. 

Download available here:

http://dl.covecube.com/CloudDriveWindows/beta/download/

Note: these are internal beta builds and may not be stable. Try at your own risk.

 

Google Drive support starts with build 1.0.0.408 (the currently recommended build for Google Drive).

Edited by Christopher (Drashna): Google Drive info

Recommended Posts


 

Just wanted to share how awesome StableBit makes my Google Drive connection ;) As soon as I can get the same in download, I couldn't ask for more! Really an awesome application!

Yup, the software scales *very* well. :)

And hopefully, we'll get that fix out soon. And please do take a screen cap of that as well. Because both are pretty awesome. :)


My current settings are as follows:

 

Download Threads: 30

Upload Threads: 5

 

Prefetch trigger:   1MB

Prefetch forward: 15MB

Prefetch time window: 20 seconds

 

Local Cache Size: 80GB  (SSD)

Chunk Size: 100MB

 

When I start media from the cloud drive, it takes StableBit about 6 or 7 seconds before the download speed reaches 1 Mbit/s, and it refuses to use more than 3 or 4 download threads even though I have it set to use 30. After about 7 seconds the speed jumps up to 15 to 20 Mbit/s, still only using 3 or 4 threads and never going higher. I assume the initial download speed delay is what causes the media to take so long to start playing. I tested the same thing with NetDrive: its download speeds immediately jump to 15 MB/s (megabytes) and the media plays almost instantly.
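For reference, a rough back-of-envelope sketch (with assumed stream bitrates, not measurements from this setup) of how far 15 MB of prefetch-forward actually reaches, which is why sustained download speed still matters once playback starts:

```python
# Back-of-envelope: seconds of playback covered by a 15 MB prefetch-forward
# window at a few assumed video bitrates (illustrative numbers only).
PREFETCH_FORWARD_MB = 15

for bitrate_mbps in (4, 10, 20, 40):  # assumed stream bitrates in Mbit/s
    seconds_covered = PREFETCH_FORWARD_MB * 8 / bitrate_mbps
    print(f"{bitrate_mbps:>2} Mbit/s stream -> ~{seconds_covered:.0f} s of read-ahead")
```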

 

 

EDIT: Upload speeds are great! :)

 


 

Just wanted to mention that this is the same issue with the 1-2 MB partial reads that I wrote about earlier. The threads finish so fast that the UI only shows maybe 2-4 in the thread count; in the I/O threads overview you can see them all, but most are already finished and therefore don't show in the UI. Since they finish almost instantly at high speeds, they slow the download down. :-) Just for his and your info, so you don't think there are two separate issues/feature requests.
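To put rough numbers on that (a simplified model with an assumed request latency, not actual measurements): when each request moves only about 1 MB, the per-request HTTP latency dominates, so a thread's throughput is capped well below the line rate and the line speed barely matters.

```python
# Simplified model (assumed numbers): per-thread throughput when each HTTP request
# transfers only a small partial read and round-trip latency dominates.
def thread_throughput_mbps(read_mb: float, latency_s: float, line_mbps: float) -> float:
    transfer_s = read_mb * 8 / line_mbps          # time spent actually moving bytes
    return read_mb * 8 / (latency_s + transfer_s)

# 1 MB partial reads with an assumed ~0.4 s of request latency:
for line_mbps in (50, 1000):  # a 50 Mbit/s line vs. a gigabit line
    print(f"{line_mbps:>4} Mbit/s line -> "
          f"~{thread_throughput_mbps(1, 0.4, line_mbps):.0f} Mbit/s per thread")
```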


Is there a way to have the cache automatically clear itself nightly or something? I use this with Plex and have an 80GB SSD that I use for my cache drive, and it fills up after a day or two. After it fills up, Plex starts buffering really badly on videos until I manually clear the cache. This happens much faster when transferring files to the cloud drive. It would be nice if I could set it so that files being copied to the cloud drive are not cached. I have experimented with running the drive without a cache and it seems to work all right, but I just don't know what prefetch time window settings I should use so prefetched data isn't removed before Plex has a chance to read it.

 

P.S. I'm really looking forward to the larger chunk download sizes. That will really help out the Plex streams.



 

 

If you haven't already, take a look at this:

http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/

 

 

Basically, the software does try to keep the cache at the set size. "To upload" data will increase the size past this, and it will slowly shrink back as that data is uploaded.

 

 

However, the cache is "learning" and tries to keep frequently used data cached, to minimize access time. So clearing it isn't desirable.

 

You can set it to something very low (like "none" or 500MB), and that may help with what you're seeing.
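As a rough illustration of that overflow behaviour (made-up numbers): during a large copy the cache temporarily grows by whatever is still waiting to upload, and it can only shrink back at your upload rate.

```python
# Illustrative only: how long a pending "to upload" backlog takes to drain at a
# given upload rate, i.e. roughly how long the cache stays oversized after a copy.
def drain_hours(pending_gb: float, upload_mbps: float) -> float:
    return pending_gb * 8 * 1024 / upload_mbps / 3600

print(f"{drain_hours(100, 50):.1f} h")   # 100 GB pending at  50 Mbit/s up -> ~4.6 h
print(f"{drain_hours(100, 250):.1f} h")  # 100 GB pending at 250 Mbit/s up -> ~0.9 h
```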


The partial reads are often giving me "The download quota for this file has been exceeded" errors. It must be due to the 50-100 partial reads of the same file. If something fails and it has to be read again, that takes yet another 50-100 reads.

 

This makes increasing the partial read size even more important, as a drive can become unusable for 24 hours until the quota resets; the drive keeps retrying that part forever until the download finally completes once the quota resets.


Seems like a bad fix to me :-( It's clearly the huge number of partial reads that causes this error, so larger partial reads would fix it as well. Smaller chunks just mean less upload speed for those of us with high bandwidth... a 100 MB chunk currently takes around 1-3 seconds, and I can build my upload speed up to the max.

 

Unfortunately this just seems like a hotfix :-( 

 

100 MB chunks could just have had a higher minimum partial read size set! (In my case I would have loved to disable partial reads completely, as my bandwidth can download full chunks fast enough.)

 

Currently I can only get a maximum of 60-70 Mbit/s on 20 threads, due to the overhead of the HTTP connections. Threads usually finish in a quarter of a second, which of course means that the speed can never build up, hence the low speeds.

 

I really urge you to reconsider this fix and instead look into increasing the partial read size. As mentioned, you could restrict 100 MB chunks to larger partial reads and offer it as a high-bandwidth option (outside the US a lot of countries do have 1 Gbit symmetrical lines; one user even mentioned getting 10 Gbit in Singapore :-) ).

 

Just add a warning to the user explaining that this is for high bandwidth only.



 

To clarify here, this definitely isn't a "hot fix".

 

However, the specific issue here was caused by undocumented limitations on Google's end, combined with the system configuration.

 

 

In the logs that we looked at, the user (I'm not sure if it was you or somebody else) was using 100MB chunks and 20 threads.

The issue is that there is a 53-request limit on any specific file. And since we read in 1MB pieces ... that would definitely trigger the issue.

 

 

20MB chunks should still be a good size. And if we disable partial chunk support (or allow the partial chunk size to be increased to match the chunk size, effectively disabling it), this would fix the issue.
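To put numbers on that (simple arithmetic based on the 53-requests-per-file figure above): reading a whole chunk in 1 MB pieces blows past the limit with 100 MB chunks but stays well under it with 20 MB chunks, and a larger partial read size would also bring 100 MB chunks back under it.

```python
# Requests needed to read one full chunk at a given partial-read size, compared to
# the ~53 requests-per-file limit mentioned above (illustrative arithmetic only).
REQUEST_LIMIT_PER_FILE = 53

for chunk_mb, partial_mb in ((100, 1), (20, 1), (100, 10)):
    requests = chunk_mb // partial_mb
    verdict = "over" if requests > REQUEST_LIMIT_PER_FILE else "under"
    print(f"{chunk_mb:>3} MB chunk / {partial_mb:>2} MB reads -> "
          f"{requests:>3} requests ({verdict} the limit)")
```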

 

I've already talked to Alex about this issue last night, and specifically ... well, because of you. :)

We do have a pending feature request to change the partial chunk size, which is SPECIFICALLY what you want for your setup, I think. 

https://stablebit.com/Admin/IssueAnalysis/23906

 

And I linked this to Alex in regards to this issue, as well. 

And I absolutely plan on pushing this issue/feature request repeatedly, especially for you lucky guys with massive bandwidth. 

 

 

 

And if you did post a ticket about the speed/performance issue, please let me know which ticket it was (either post the number here, since the ticket itself is private, or PM me), so that I can help push this issue through.

 

 

 

 

 

Also, as mentioned in the issue, 2 threads *should* be able to get ~35 Mbit/s or higher by themselves.


I sent in some logs, but I rarely open a ticket on the side; the description usually explains the issue :-)

 

First of all, I have to compliment you guys on reacting so well to issues and requests from the community; other companies could learn a lot :-)

 

I did use 20 threads because all threads finished in 0.25-0.40 seconds, so with them finishing in under half a second I thought more threads could help speed things up! Of course, when I can utilize the speed on fewer threads I will (read: larger reads ;-) hehe). Currently the threads just finish so fast that line speed doesn't matter; it's the HTTP delay that determines the speed, because you have to wait some time between each request. A 1 MB read will most likely not finish any faster on 1 Gbit than on 50 Mbit, because the HTTP request itself is the main delay, so we can never utilize the full speed!

 

Also, I only use 10 upload threads now, because they saturate my upload speed just fine as well :-)

 

I really look forward to that feature request getting implemented, and I will get some drives ready with 100 MB chunks before updating!

I hope the feature request results in the return of 100 MB chunks with a higher minimum partial read (10 MB?), with a note informing users that it is for high-speed connections!

 

Again, thanks for being so cool about our constant complaining ;-) Your product is awesome, and your constant tweaking in response to our complaints just makes it even better, so thanks :-)


Oh, and by the way, 35 Mbit/s is considered slow in my world with a 1 Gbit line :-D That's what happens: you're never content with what you have, you always want the best you can get ;-)


Well, for the "checksum unit size" thing, we already have the request/issue for it, so no need to create a ticket.

 

And specifically, the "checksum unit size" is the size used for partial chunk access. So being able to configure that (which is most likely what we'd be doing) would likely help with what you're seeing. (The downside is that larger unit sizes will create larger latencies for slower connections, obviously, and that's the primary concern here.)
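Roughly what that trade-off looks like (assumed line speeds, ignoring request latency): every small random read has to pull at least one full unit, which is cheap on a fast line but noticeable on a slow one.

```python
# Illustrative: time to pull one partial-read/checksum unit on different lines,
# i.e. the extra wait a small random read pays when the unit is large.
def fetch_seconds(unit_mb: float, line_mbps: float) -> float:
    return unit_mb * 8 / line_mbps

for unit_mb in (1, 10, 20):
    print(f"{unit_mb:>2} MB unit: {fetch_seconds(unit_mb, 20):4.1f} s on 20 Mbit/s, "
          f"{fetch_seconds(unit_mb, 1000):5.3f} s on 1 Gbit/s")
```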

 

 

And as for the complaining: not a problem. That's WHY it's a beta: to find issues, flaws, and aspects we can improve. So you're really helping us to create a better product. :)

 

 

 

 


 

LOL, right on the nose. It's never enough. Remember when you thought that 1TB was all you'd ever need, and wondered *if* you'd ever be able to fill it? :)

 

And while I'm not sure where you live, I know that the EU has some pretty fantastic speeds for fairly cheap, especially compared to the USA.


Thanks for the new update yesterday! :) Speed now goes up to 60-120 Mbit/s, making it a bit more usable!

 

I'm also seeing almost all threads in use nearly all the time, where I only had 2-4 before!

 

I don't know what your response times to Google are in the US, but I get 400 ms to 1850 ms, so the wait time between requests can be almost three to four times the time it takes to download a chunk :)

Looking forward to the partial read increase, but until then this has made me happier for now :)

 

Oh, and by the way, .452 isn't available for download even though it is in the changes.txt, just in case you forgot it ;)

*EDIT* I see you got it up as well with a new fix :)


Well, Alex has been addressing a lot of the backend and optimization stuff, and if you check the logs, I'm pretty sure it's the .452 release that has helped with performance.

 

 


 

Yeah, that happens sometimes. The upload is scripted, and you probably hit it at JUST the right time, when it hadn't completed the upload yet.


Unfortunately, I just received my first ever quota error and can no longer use the drive. Error: "The download quota for this file has been exceeded. This error has occurred 46 times. Make sure that you are connected to the Internet and have sufficient bandwidth available."

 

How long does this error last?


"Up to 48 hours", according to Google. Unfortunately, that's about all the information I have about it, and the length of the lockout. 

 

If you want, set the upload threads to "0"; that will disable uploading until you increase the number of threads again. It may work around the issue, at least temporarily.


Good to know. Will remember that for the future. 

Reauthorizing does not work :) He must have hit the golden time window!

 

Any news on whether the increased partial reads might be coming soon? ;) I'm not able to use the drive properly until then, and as you have stated, it would require the drive to be created again :)

 

Things are getting pretty good; I rarely see any issues anymore!

 

Oh, one question: what happens if all the drive letters have been used? Are we able to mount it to a folder instead? :)



On the reauthorization: yeah, in that case it may have been a timing thing.

 

And sorry, no ETA. It's up to Alex, but as I've said, I've been pushing for it. 

 

And yeah, hopefully, we are very close to a stable release. :)

 

 

That's ... a good question. Since this is handled by Windows, I believe the default behavior is to just not assign a drive letter. In that case, you'd need to manually map the volume to a folder path instead.

Though a lot of software can use just the volume ID ("\\?\Volume{GUID}\") rather than requiring a letter or mount point (and StableBit DrivePool falls into this category).
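On the folder-mount question, here's a minimal sketch (placeholder GUID and folder path, not something built into StableBit CloudDrive) of mounting a letterless volume at an empty NTFS folder via the Win32 API; the built-in mountvol.exe or Disk Management can do the same thing.

```python
# Minimal sketch (placeholder GUID/path, run elevated): mount a volume that has no
# drive letter at an empty NTFS folder using the Win32 SetVolumeMountPointW call.
import ctypes
import os

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

volume = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder volume GUID
mount_point = "C:\\Mounts\\CloudDrive\\"                           # hypothetical empty folder

os.makedirs(mount_point, exist_ok=True)
if not kernel32.SetVolumeMountPointW(mount_point, volume):
    raise ctypes.WinError(ctypes.get_last_error())
```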
