
+1 for Google Drive for Work support


Reptile

Question

Google Drive for Work means unlimited storage for about 40 dollars a month. Even normal Google Drive accounts could be pooled together with DrivePool. And nothing stops you from having multiple Google accounts, right?

Furthermore, Google has amazing speed. I get around 220 Mbit/s. Yes, on gigabit fiber Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.

 

Fast, affordable, unlimited.

Is there a beta release supporting this provider already?

 

Yours sincerely 

 

Reptile

 

 

Edit:

Preliminary Google Drive support added. 

Download available here:

http://dl.covecube.com/CloudDriveWindows/beta/download/

Note: these are internal beta builds and may not be stable. Try at your own risk.

 

Google Drive support starts with build 1.0.0.408 (currently the recommended build for Google Drive).

Edited by Christopher (Drashna): Google Drive info

Recommended Posts


In that case, yeah, it may have been a timing thing.

 

And sorry, no ETA. It's up to Alex, but as I've said, I've been pushing for it. 

 

And yeah, hopefully, we are very close to a stable release. :)

 

 

That's ... a good question. Since this is handled by Windows ... I believe the default behavior is to just not assign a drive letter. In this case, you'd need to manually map to a folder path instead. 

Though a lot of software can use just the volume ID ("\\?\Volume{GUID}\") rather than needing a drive letter or mount point (and StableBit DrivePool falls into this category).
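
For illustration, here is a minimal sketch (placeholder GUID, not StableBit code) of addressing a letterless volume by its volume GUID path; the mountvol command in the comments is the stock Windows tool for listing the real GUIDs and for mounting a volume to an empty NTFS folder:

# Minimal sketch: accessing a Windows volume that has no drive letter.
# The GUID is a placeholder; list the real ones by running `mountvol`,
# which can also mount the volume to an empty NTFS folder, e.g.:
#   mountvol C:\Mounts\CloudDrive \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
import os

volume_root = "\\\\?\\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\\"

# Standard file APIs accept the GUID path directly, so software that stores
# a volume path rather than a drive letter keeps working:
for entry in os.listdir(volume_root):
    print(entry)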

 

Tell him I'll give cake to the office if he picks it up fast :P Then he will be the most popular guy that day, haha


 

Well, we'd rather take the time and release it right than hurry to meet a deadline. But yes, we're definitely getting antsy for release. (But if you look at the change log, it's been a lot of performance and polish changes lately :) )


Hi,

 

Thank you soooo much for supporting Google Drive now and for the fast progress! Since Bitcasa's change of mood we were looking for exactly a tool like CloudDrive; we found it here but weren't able to use it until now.
And as we have been using DrivePool and your Scanner for years now, we are very, very happy to see this project in your hands, because you always deliver well-thought-out and stable stuff. I only hope you are not going to be bought by one of the big ones  :ph34r: .

 

Although I don't understand everything that's discussed here (e.g. what the increased partial reads are for - and why we would have to start over then ... :huh: over my head in some cases  :unsure:  ), and we do not have such great bandwidth, I've been testing a bit using the default settings and I'm highly satisfied - YEAH!  :D 

 

The speed is a bit erratic and I/O errors are popping up frequently, but CloudDrive manages to upload things anyway. Perfect!

 

 

I have some questions - not 100% Google-Drive-related, but I hope it's ok to place them here:
 

  • Are there plans (or is there already a way) to pause the upload, either all upload activity or for a specific drive (so I could give one drive priority)?
  • Is there a way to throttle the upload? Maybe scheduled? (Our son is angry with us for using all the bandwidth  :wacko: )
  • We are trying to use Plex with CloudDrive. It's working pretty well so far. But of course fast-forwarding is not covered by the prefetched data, so it takes a while to jump to a specific position in a video.
    Do you have any suggestions, e.g. for the cache or prefetching settings, to optimize this? Maybe different settings depending on the size (TV shows or movies)?
  • I'm a bit concerned about data corruption - not of single chunks or files, but of the metadata CloudDrive needs to mount and recognize the drive. We are using Duplicati, and one day an essential file was missing or corrupted, so Duplicati wasn't able to list anything or repair it, and the whole backup set was lost for us. The weeks of uploading, too. And as far as I know there is no option to use the content files directly - and I assume this isn't different with CloudDrive, right?

    Do you have redundancy or something like that? Is it possible for me to back up this data?
    Google doesn't make it easy to replace files, because of their ID-based structure - just putting a file with the same name in the same place doesn't mean that Google or a tool like Duplicati (and CloudDrive?) recognizes it.

    Would it be wiser to build several smaller drives and use DrivePool on top, rather than creating one big drive? What would you suggest?
    What are your experiences with Google Drive (for Work)? How big are the drives you created?

    (Of course we have several backups - but because of the limited bandwidth, and because we plan to make Google/CloudDrive part of our backup strategy, I would like to fathom and minimize the risks.)
  • And a last one: what about verification - is it actually working with the 10 MB chunks now? Or even bigger ones? I didn't get it. Sorry.

 

Thanks  :)

joss


Well, thank you for the kind words! And we're glad that you like StableBit CloudDrive!

 

As for the erratic speeds, this may be related to the partial read stuff. Since we break up the larger chunks into smaller pieces and read just the partial chunks, this can impact performance in some cases (hence the desire for larger partial chunk sizes). 

As for the I/O errors, these may be normal; they happen from time to time, more with some providers than others. But any error is automatically retried. 

 

 

 

 

 

  • Are there plans (or is there already a way) to pause the upload, either all upload activity or for a specific drive (so I could give one drive priority)?
  • Is there a way to throttle the upload? Maybe scheduled? (Our son is angry with us for using all the bandwidth  :wacko: )

 

These two are related, so I'm grouping them.

 

No, there isn't any way to pause or throttle the bandwidth properly right now. It's on the to-do list (possibly including scheduling), but probably won't happen until after the stable release is out.

 

However, you can "pause" it by setting the upload and/or download threads to "0". (Disk Options -> Performance -> IO Performance).  This essentially pauses the transfer. 

 

 

 

  • I'm a bit concerned about data corruption - not of single chunks or files, but of the metadata CloudDrive needs to mount and recognize the drive. We are using Duplicati, and one day an essential file was missing or corrupted, so Duplicati wasn't able to list anything or repair it, and the whole backup set was lost for us. The weeks of uploading, too. And as far as I know there is no option to use the content files directly - and I assume this isn't different with CloudDrive, right?

    Do you have redundancy or something like that? Is it possible for me to back up this data?

    Google doesn't make it easy to replace files, because of their ID-based structure - just putting a file with the same name in the same place doesn't mean that Google or a tool like Duplicati (and CloudDrive?) recognizes it.

    Would it be wiser to build several smaller drives and use DrivePool on top, rather than creating one big drive? What would you suggest?

    What are your experiences with Google Drive (for Work)? How big are the drives you created?

    (Of course we have several backups - but because of the limited bandwidth, and because we plan to make Google/CloudDrive part of our backup strategy, I would like to fathom and minimize the risks.)

 

Well, missing chunks are a bad thing in general. Since we're essentially storing the raw disk data on the provider, this would be like a piece of a physical disk "dying", and would cause data loss.

 

However, like a physical disk, running a CHKDSK pass (e.g. "chkdsk X: /f" against the mounted CloudDrive volume) should "fix" a lot of disk issues. 

Additionally, we do a lot to make sure that nothing happens to the data, at least from our end. In fact, one of those options is "upload verification": we download the file, make sure it uploaded properly, and re-upload it if something went wrong. Some providers have this enabled by default (such as Amazon Cloud Drive, where it's necessary), but it's an option for any provider. 
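
Conceptually, upload verification amounts to something like this sketch (illustrative only, with a made-up provider client; not the actual CloudDrive implementation):

import hashlib

def upload_with_verification(provider, chunk_id: str, data: bytes, max_retries: int = 3) -> None:
    """Upload a chunk, download it back, and compare hashes; re-upload on mismatch.
    `provider` is a hypothetical client exposing upload() and download()."""
    expected = hashlib.sha256(data).digest()
    for _ in range(max_retries):
        provider.upload(chunk_id, data)
        echoed = provider.download(chunk_id)
        if hashlib.sha256(echoed).digest() == expected:
            return  # the stored copy is verified intact
        # Mismatch: the provider stored something else, so try again.
    raise IOError(f"chunk {chunk_id} failed verification after {max_retries} tries")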

 

Also, for each chunk (and the partial chunks) we include checksums, to help detect errors. 

 

But really, you should check out these links:

http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/

http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/

 

These talk about the cache manager and cover the details you want to know (rather than me retyping it all here and not doing as good a job). 

 

As for redundancy, no, there isn't any built in, but yes, you could use StableBit DrivePool for that. Just keep in mind that if you're using multiple drives on your Google account, this will reduce the overall throughput to the provider. Though, from the sounds of it, that may not be an issue. 

 

 

 

  • And a last one: what about verification - is it actually working with the 10 MB chunks now? Or even bigger ones? I didn't get it. Sorry.

If checksumming is enabled, then yes, there is verification. This is enabled by default on most providers, IIRC.

 

As for partial chunk support with larger chunks: every MB of data has a checksum section, specifically to ensure that the data is intact. 
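
As a rough illustration of that layout (my assumptions, not the actual on-disk format): if each 1 MB unit of a chunk carries its own checksum, a partial read can be validated without fetching the whole chunk:

import zlib

UNIT = 1024 * 1024  # each 1 MB unit gets its own checksum section

def attach_checksums(chunk: bytes) -> list:
    """Split a chunk into 1 MB units, each paired with its CRC32."""
    return [(chunk[i:i + UNIT], zlib.crc32(chunk[i:i + UNIT]))
            for i in range(0, len(chunk), UNIT)]

def verify_partial_read(unit: bytes, stored_crc: int) -> bytes:
    """Validate one downloaded unit; corruption is detected without
    having to download the rest of the chunk."""
    if zlib.crc32(unit) != stored_crc:
        raise IOError("checksum mismatch: unit is corrupt")
    return unit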


Thank you, Christopher. That helps a lot.

 

Thanks for the links, too. Retyping it all would be nonsense, of course - sorry for skipping the forum search. I was so focused on my testing that what would normally be my first step completely slipped my mind. I'm obviously a bit scatterbrained.  :rolleyes:

 

:)

joss


You are very welcome. 

 

As for the links: Alex is the actual developer, I'm not. So he has a much better understanding of how everything works and, because of that, he can describe it much better than I can. So I'd rather link his posts, to get a clear and concise description, rather than paraphrase. 

 

And not a problem. Sometimes it's hard to know what to search for. 


Hi, curious to know if the reduction to 20 MB chunks has cured the issue of reaching the quota. Does anyone still see this error with 20 MB chunks?

The limit of 20 MB for Google Drive was specifically to fix the quota issue. 

 

The public beta (or any build after 1.0.0.450) should not see this issue... well, as long as it's using 20 MB or smaller chunks.

 

Details here:

https://stablebit.com/Admin/IssueAnalysis/24914

 

 

 

If you are still seeing the issue, let us know.


I hope that the chunk size will be increased once partial reads are larger. A minimum partial read size could be enforced for chunk sizes above 20 MB. Something like 50 MB would be fine for 1 Gbit; anything lower just decreases the speed, because each upload finishes so quickly that the HTTP overhead dominates.


Well, that's something that we'll definitely have to look into. But yeah, if the partial chunk sizes are larger, there shouldn't be any issues with increasing the chunk size, in theory. But even 20 MB may be fine, if we have the option for larger partial chunks.
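
A back-of-the-envelope calculation (assumed numbers, one request at a time) shows why a fixed per-request cost favors larger chunks on fast connections:

LINK_MBIT = 1000   # assumed gigabit line
OVERHEAD_S = 0.25  # assumed fixed per-request cost (TLS + HTTP round trip)

def effective_mbit(chunk_mb: float) -> float:
    transfer_s = chunk_mb * 8 / LINK_MBIT  # time actually on the wire
    return chunk_mb * 8 / (transfer_s + OVERHEAD_S)

for size in (1, 10, 20, 50, 100):
    print(f"{size:>3} MB chunks -> ~{effective_mbit(size):.0f} Mbit/s")
# With these assumptions: 1 MB ~31, 20 MB ~390, 100 MB ~762 Mbit/s;
# small chunks finish so quickly that the fixed overhead dominates.

In practice the parallel upload/download threads offset some of this, but the fixed per-request cost never fully disappears.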


Just a side point: I reinstalled after a few changes to my Mac Mini Server (namely a 16GB memory upgrade, a 128GB SSD OS drive, and a 512GB SSD cache drive), and this thing is rock solid. The last few releases have made a huge impact on performance! Nice job.

Glad to hear it. And yeah, there has been a lot of performance optimization lately (especially between the last public beta and the current public beta). 


I'm pretty sure I just saw some files disappear. I'm rsyncing content into a CloudDrive; I see successful transfers, but two folders and their contents just vanished. Where should I look for logs?

 

Edit: A subsequent rsync put the files back; there's nothing else running. I'm using 477.

 

Edit 2: chkdsk has "found" the missing files.

 

Edit 3: It's happened again.

 

Edit 4: I lost around 3.5TB. Though the folder structure remained for the most part, the files were gone. I also had a security issue where I couldn't open the folders in my CloudDrive.

 

Edit 5: Last one. I have installed .480 and will see if I can reproduce the issue.


The folders on the cloud provider should never vanish, unless a) you've destroyed the drive, b) they have been deleted (either by you or by software you're using), or c) there is an issue with the provider. 

 

 

 

For the time being, I would recommend against using rsync to mess with the provider, or at least make absolutely, 100% certain that it's not touching the StableBit CloudDrive files, because if it is... it could be single-handedly responsible for the issue. 


Just for my own curiosity, I was wondering if you have a timeframe for the increase of partial reads? For now the drive is not usable for me, because I will have to re-upload everything again since it will require a new drive. So for now I am just sitting in a waiting position to use StableBit! Having a timeframe would let me stop checking the update logs four times a day :P


No, we don't. But IIRC, it's pretty much at the top of the to-do list (or close to it). 

 

Generally, we don't keep a timeline/timeframe for releases, as we feel this is unhealthy: there's a tendency to rush to meet the deadline, and that's an easy way to make mistakes.

 

 

However, if you want to check something religiously, check the request for this. Alex will update it publicly when it's added.

https://stablebit.com/Admin/IssueAnalysis/23906


I received an "Internal Error" and also the Checksum Mismatch error. I'm using a 512GB SSD for a separate cache, set at 400GB; I received the warning that the disk was almost full during a copy.

 

Is it possible that the lack of disk space on the cache disk is the cause of the corruption?


 

No, explicitly not. We had an issue with the cache early on, and we've since added some robust handling for it.

 

If you want to take a read, please check this out:

http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/

 

It's rather detailed (but the cache is very complicated). 

 

 

And could you grab the logs for this?

http://wiki.covecube.com/StableBit_CloudDrive_Log_Collection


The partial chunk size customization should make it into the release version. I've recently talked to Alex about this, and... well, that's what he said (and I'll try to hold him to that, as well, but it isn't a guarantee). 

 

 

As for the checksum errors, there are a number of fixes that may prevent them in the future, but basically, it's like unreadable sectors on a disk (in fact, it's probably exactly like that: the disk is unable to recover any usable data). You can disable the checksum verification, but that means you're getting corrupted data back. Period. That's why it's there. 

 

And from what I've seen, you're the only one reporting it, so it may be a provider issue (e.g., an error in their database).


I'm a happy man today and you know why :-D

 

However, I would like to request the return of 100 MB chunks when you use a minimum download size above X!

 

I'm seeing a lot of throttling because the chunks are too small, so I hope you will allow larger chunks if your minimum download is larger as well :-)

 

Happy to be getting 700 Mbit/s now, but still room for improvement if the throttling disappears :-D


Yup, I looked at the change log today, and knew at least one person would be commenting and ecstatic.  :)

 

I've been pushing and nudging Alex every time we talk, so I'm glad to see it too. :)

 

It isn't really tested yet (for consistency/reliability/etc.), but it's there. 

 

As for the large chunk size, I'll push for that, for you lucky bastards with gigabit internet connections. :)
