Covecube Inc.
jjross

Are we getting close to release?

Question

I want to test out CloudDrive but I'd like to be on a stable (RC?) version before giving it a go. Are we getting close to a release for this? How stable is the product?

 

It seems to have been in beta for a long time so I'm a little worried that it's buggy.


10 answers to this question


Quote: "... It seems to have been in beta for a long time so I'm a little worried that it's buggy."

 

I am also waiting for the stable version of CloudDrive but I have no concerns about the quality of the final product, because other products (DrivePool and Scanner) are very good and also support is at a high level.


Trust me... getting them to rush a release won't make things any better :)

But thank god they prefer to keep it in beta until the product really is stable enough :)

 

However, I must say that I have had data in the cloud with StableBit for more than a year now and I haven't lost any of it yet :)


Well, we did feel it was getting close to an RC build... but the "Rate Limit Exceeded" issues we've been experiencing with Google Drive changes things. 

 

Right now, Alex (the Developer) is implementing a fairly major change that should significantly cut down on the number of API calls and prevent the "Rate Limit Exceeded" issue.

 

After that has been finished and tested, we should have another public beta or RC build out soon.  From there, we'll see.  

 

But we do want to get an RC and/or final build out in the near future, as well. 


Quote: "Right now, Alex (the Developer) is implementing a fairly major change that should significantly cut down on the number of API calls and prevent the 'Rate Limit Exceeded' issue."

 

Is this the local file ID database, or is there something else in the pipeline too?


Quote: "Is this the local file ID database, or is there something else in the pipeline too?"

 

Yes, that is the major change. 

 

If you're curious (I know *somebody* will be), the reason this is a major change is that it changes how files are queried.

Normally, we try to create a file.  If we don't get an error, the file is created on the provider. If it already exists, the file create errors out and hands us a File ID.   

(This also happens for each chunk during upload verification, though not with the File ID database.)

 

This effectively generates additional API calls that aren't strictly needed (and honestly, could be handled better).  But by maintaining the local database, it will significantly reduce the number of API calls that we need to make. 
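The pattern described above, a create-then-error round trip just to learn an existing file's ID, versus a local cache lookup, can be sketched roughly like this (hypothetical names, not CloudDrive's actual code; the `Provider` class is a stand-in for a real cloud API):

```python
class Provider:
    """Stand-in for a cloud provider API (for illustration only)."""
    def __init__(self):
        self.files = {}      # remote name -> file ID
        self.api_calls = 0   # count round trips to the provider

    def create_file(self, name):
        """Try to create a file; if it already exists, error out with its ID."""
        self.api_calls += 1
        if name in self.files:
            raise FileExistsError(self.files[name])  # provider hands back the ID
        self.files[name] = f"id-{len(self.files)}"
        return self.files[name]


def get_file_id(provider, cache, name):
    """Resolve a file ID, consulting a local File ID cache first.

    Without the cache, every lookup costs a create-file round trip that
    errors out just to learn the ID of an already-existing file.
    """
    if name in cache:                          # local hit: zero API calls
        return cache[name]
    try:
        file_id = provider.create_file(name)   # one API call either way
    except FileExistsError as e:
        file_id = e.args[0]                    # existing file: take the returned ID
    cache[name] = file_id
    return file_id


provider = Provider()
cache = {}
get_file_id(provider, cache, "chunk-0001")     # first touch: 1 API call
get_file_id(provider, cache, "chunk-0001")     # cached: 0 API calls
print(provider.api_calls)                      # 1
```

Repeated lookups against the same chunk cost nothing once cached, which is where the big reduction in API calls comes from.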

 

Also, IIRC, part of the plan was to "sync" this database (which probably means uploading it along with the other disk data) so that "new" systems can use this database rather than starting from scratch. Again, significantly reducing the number of API calls used when mounting a drive on a new system. 
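The "sync" idea could look something like the following (again a hypothetical sketch, not the shipped design): serialize the local File ID database alongside the other disk data, so a new system seeds its cache from the synced copy instead of re-querying the provider for every file.

```python
import json

def save_cache(cache):
    """Serialize the local File ID cache so it can be uploaded with the disk data."""
    return json.dumps(cache)

def seed_cache_on_mount(synced_blob):
    """On a new system, start from the synced cache rather than from scratch,
    avoiding one lookup API call per file when mounting the drive."""
    return json.loads(synced_blob) if synced_blob else {}

blob = save_cache({"chunk-0001": "id-0", "chunk-0002": "id-1"})
new_cache = seed_cache_on_mount(blob)
print(len(new_cache))   # 2 entries recovered without any API calls
```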

 

 

 

If you can't tell, the singular goal here is to reduce the number of API calls.   

 

And this doesn't just apply to Google Drive (though it's the primary provider). It applies to Box, and I think Dropbox (I'll have to double check with Alex). But we're only implementing it for providers that use this sort of method for querying file info.


Quote: "After that has been finished and tested, we should have another public beta or RC build out soon. From there, we'll see."

 

I don't know what you guys did between 730 and 742.. but I went from ~8-16mbs upload on Google Drive to like 250mbs. Much much much better.


Quote: "It seems to me it would make sense to do it for all providers. Is there a disadvantage for other providers if they were to use it?"

 

Yes and no. Some providers are much better about it.

I asked Alex about this, actually. Providers like Amazon S3, Microsoft Azure, and Google Cloud Storage are already very optimized, and won't benefit from this.

 

However, for the providers that we feel would benefit, we are using it.

 

Quote: "I don't know what you guys did between 730 and 742.. but I went from ~8-16mbs upload on Google Drive to like 250mbs. Much much much better."

 

That's fantastic to hear!  Hopefully, many others experience this shift in speed as well. 
