
Long term use. Will things change much?


Nebulous

Question

I am currently using version 1.0.0.608 and it's working great. I have about 500GB uploaded so far and it all appears to be perfect. I have a bunch of concerns as I start to use CloudDrive more and more.

  • Will the format change much in the future? I've seen here and there on the forums that between different versions of the beta it has been recommended to "create a new drive" to take advantage of new features and speed improvements. I understand that this is BETA software but I don't want to re-create, download, and re-upload several TB of data a bunch of times moving forward. (http://community.covecube.com/index.php?/topic/2026-updating-questions/)
  • I want to use StableBit CloudDrive as a primary offsite backup (to Google Drive); is this unwise until an actual full release is published?
  • Related to the previous question, what are good strategies for getting huge amounts of data up to a CloudDrive? I am noticing that if I copy over 1TB of data to a CloudDrive, my local cache drive (256GB) fills up and then everything gets throttled down like crazy. Are my only options to get a larger cache drive or to upload in 256GB sessions? I was hoping for a set-and-forget solution (a rough sketch of the kind of thing I mean is below this list).
  • Is there any way to copy data from CloudDrive to CloudDrive on the same provider and same account without using my local connection as an intermediary? This ability would probably solve my first question regarding re-creating drives.
  • Considering the Microsoft OneDrive fiasco with unlimited storage, what is the general feeling about Google Drive Unlimited and Amazon Cloud Drive possibly doing the same thing? I know it has nothing to do with Covecube, but I'm just curious how the community feels about it. I just don't want to be locked out of my data and/or get a notice out of nowhere that I need to figure out how to download and store 50+TB of data. Nothing is ever certain, but how do people feel about this?
  • Last question for the community: how much data do you have up in CloudDrives (and which provider are you using)?
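
For the third bullet above, the closest thing to "set and forget" I can think of is a small script that feeds the CloudDrive volume in batches and pauses whenever the cache drive runs low on free space. Here's a rough sketch of what I mean; the drive letters, the 50GB threshold, and the 5-minute poll interval are made-up example values, and the script only talks to the filesystem, not to CloudDrive itself:

```python
import shutil
import time
from pathlib import Path

# Example values only; adjust for your own setup.
SOURCE = Path(r"D:\DataToBackup")      # data waiting to be uploaded
CLOUD_MOUNT = Path(r"X:\Backup")       # the CloudDrive volume
CACHE_DRIVE = "C:\\"                   # drive holding the CloudDrive cache
MIN_FREE_BYTES = 50 * 1024**3          # pause copying when less than ~50GB is free
POLL_SECONDS = 300                     # re-check free space every 5 minutes while waiting

def cache_has_room() -> bool:
    """True if the cache drive still has enough free space to keep copying."""
    return shutil.disk_usage(CACHE_DRIVE).free > MIN_FREE_BYTES

for src in sorted(SOURCE.rglob("*")):
    if not src.is_file():
        continue
    dest = CLOUD_MOUNT / src.relative_to(SOURCE)
    if dest.exists() and dest.stat().st_size == src.stat().st_size:
        continue  # already copied on a previous run
    while not cache_has_room():
        time.sleep(POLL_SECONDS)  # let CloudDrive drain the cache before feeding it more
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)
    print(f"copied {src}")
```

Re-running the script would just skip anything already copied, so it could be left going until the whole set is up.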

10 answers to this question

Recommended Posts

  • 0

Are you me? I have all these same questions and same use-case.

 

Edit: Given how much bad press MSFT got for the OneDrive change, though, I think Google and Amazon would be more hesitant. Their storage platforms and capacities are also much bigger and more mature. Still, redundancy is your friend.


  • 0

The drive format only changed due to the implementation of the minimum chunk size, which required changes to the drive structure.
 

Almost all updates are client-side only, which won't require a new drive.

 

Regarding copying your data over, you could use Ultracopier to limit the copy speed so that CloudDrive's upload can keep up.
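
If you'd rather script it than install a separate tool, the same idea is easy to approximate: copy in small blocks and sleep so the copy never exceeds a chosen rate. This is only a rough sketch; the 20MB/s figure and the example paths are arbitrary, and nothing here is CloudDrive-specific:

```python
import time

def copy_rate_limited(src_path: str, dst_path: str,
                      mb_per_sec: float = 20.0,
                      block_size: int = 1024 * 1024) -> None:
    """Copy src to dst, sleeping between blocks to stay under mb_per_sec."""
    limit = mb_per_sec * 1024 * 1024  # allowed bytes per second
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        start = time.monotonic()
        copied = 0
        while True:
            block = src.read(block_size)
            if not block:
                break
            dst.write(block)
            copied += len(block)
            # If we're ahead of schedule, sleep until the average rate is back under the limit.
            expected = copied / limit
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)

# Example with made-up paths: throttle one large file to roughly 20MB/s.
# copy_rate_limited(r"D:\media\movie.mkv", r"X:\Backup\movie.mkv", mb_per_sec=20)
```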

 

I seriously doubt they will close down their unlimited offerings, but I'm sure they will enforce closure of accounts with maybe 50TB or more of data stored. A safe bet if you want to store huge amounts of data is to have several accounts and keep a maximum of maybe 10TB per account. It will cost more, but it will get you noticed less by Amazon/Google.
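
If you do go the multiple-account route, a quick sketch like the one below (the 10TB cap and the example path are only illustrations) shows how your top-level folders might be grouped so no single account exceeds the cap:

```python
from pathlib import Path

TB = 1024**4
MAX_PER_ACCOUNT = 10 * TB  # the ~10TB-per-account cap suggested above

def folder_size(path: Path) -> int:
    """Total size in bytes of all files under a folder."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def plan_accounts(root: Path):
    """Greedily group top-level folders into bins of at most MAX_PER_ACCOUNT bytes."""
    accounts = []  # each entry is [used_bytes, [folder names]]
    for folder in sorted(root.iterdir()):
        if not folder.is_dir():
            continue
        size = folder_size(folder)
        for acct in accounts:
            if acct[0] + size <= MAX_PER_ACCOUNT:
                acct[0] += size
                acct[1].append(folder.name)
                break
        else:
            accounts.append([size, [folder.name]])
    return accounts

# Example with a made-up path:
# for i, (used, folders) in enumerate(plan_accounts(Path(r"D:\DataToBackup")), 1):
#     print(f"account {i}: {used / TB:.1f}TB -> {folders}")
```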

 

Currently using Google myself


  • 0

  • Even if the format changes (which it has numerous times), we maintain backwards compatibility with existing drives, to ensure that you never lose access to them and that you don't need to recreate them.  

     

    However, in some cases (such as with Google Drive), recreating the drive may be a good idea, to fix certain issues. 

     

  • We don't recommend it for production use, as it is still in beta, and there is still a lot of fluctuation. 

    However, don't let that stop you, if you think it's working pretty well already. 

     

  • This depends highly on your connection and your cache drive.  If the cache drive gets full, then the speed to the drive gets throttled to prevent the cache drive from filling completely (as "bad things happen" when that happens).  Obviously, having a higher upload speed will significantly help here as well.  And if you do have a very good upload speed, increasing the chunk size and the minimum download size may help saturate the connection, as there is less per-request overhead (HTTP(S), JSON, REST, etc.); there's a rough back-of-the-envelope example after this list. 

     

  • Nope, sorry. The software itself handles a lot of what actually happens to the data, so the data has to be accessed and stored locally. 

     

  • Honestly, I don't think Google will change the size. But only time will tell. 

    As for Amazon Cloud Drive... it's hard to tell here as well. But I don't suspect that this will be an issue, either. 

     

    But again, hosting data like this is expensive, and they're hedging their bets/money on most people using less than 100GB of their unlimited data (IIRC), with people using TBs being the outliers.  The problem with this is that the people who actually take advantage of "unlimited" tend to get punished for doing so.  

    That's been the trend, and only time will tell if Amazon and Google follow suit. 

     

  • For me? None. I have a bunch of test data, but I haven't really been using it extensively. 

    Alex (the developer), on the other hand, not only has a lot of test data, but also has a lot of actual data in the cloud already. Though I'm not sure which providers. 
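
To put some rough numbers behind the overhead point in the third bullet: assuming a fixed cost per upload request (the 300ms and 100Mbit/s figures below are assumptions for illustration, not measured CloudDrive values), larger chunks spend a much bigger fraction of their time actually moving data:

```python
# Rough model: every upload request pays a fixed per-request cost (HTTPS, JSON/REST
# bookkeeping, etc.) on top of the raw transfer time for the chunk itself.
PER_REQUEST_OVERHEAD_S = 0.3   # assumed ~300ms of overhead per request (illustrative)
UPLOAD_MBPS = 100              # assumed 100Mbit/s upload line (illustrative)

for chunk_mb in (1, 10, 20, 100):
    transfer_s = chunk_mb * 8 / UPLOAD_MBPS        # seconds spent actually sending bits
    total_s = transfer_s + PER_REQUEST_OVERHEAD_S  # plus the fixed per-request cost
    efficiency = transfer_s / total_s
    print(f"{chunk_mb:>4}MB chunks: ~{efficiency:.0%} of the time spent transferring")
```

With those assumed numbers, 1MB chunks spend only about a fifth of their time transferring, while 100MB chunks spend over 95%, which is why larger chunks help saturate a fast connection.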


  • 0

Sorry, a bit ranty here.... 

 

I don't want to ruin it for everyone, like the 75TB guy did with OneDrive.

I wouldn't say that he ruined anything. In fact, just the opposite. Microsoft advertised "unlimited", and he *used* that. Just because Microsoft (and many other providers) hedge on most users not even using 500GB of data doesn't mean he abused it. 

 

If you're going to advertise it as such, people will use it as such. Just because *you* (in this case, Microsoft) didn't mean it, doesn't mean that some guy spoiled anything. 


  • 0

Here's a thread on Amazon's dev forums I saw earlier in the year regarding storing a lot (~100TB) of data on Amazon Drive.

 

 

Technically, there is no limit to individual customer storage. If you want to upload 100 TB, you can. It is probably going to take quite a while ... 100 TB would definitely be a power user, but if it's just for you to use as your own *personal* storage, that should be fine. Using Cloud Drive as the platform for torrent / video sharing site or similar would be against TOS. Thanks!

 

FWIW I have ~3TB backed up to CrashPlan, with most of that also backed up to Amazon via CloudDrive.


  • 0

I have 17TB backed up to CrashPlan with no problems so far. I have only ever had to recover about 5 Blu-ray rips that had gotten corrupted, but I am hoping Alex eventually gets time to look at the prospect of utilising the pool duplication better so this doesn't happen.

Lee


  • 0

Here's a thread on Amazon's dev forums I saw earlier in the year regarding storing a lot (~100TB) of data on Amazon Drive.

 

 

FWIW I have ~3TB backed up to CrashPlan, with most of that also backed up to Amazon via CloudDrive.

 

Nice to know. 

 

Now they just need to fix their backend... 

And ... not randomly lose data.... (check the logs for 623...)

 

I have 17TB backed up to CrashPlan with no problems so far. I have only ever had to recover about 5 Blu-ray rips that had gotten corrupted, but I am hoping Alex eventually gets time to look at the prospect of utilising the pool duplication better so this doesn't happen.

Lee

 

It's "planned". But it may be a while, depending on StableBit CloudDrive.


  • 0

Now they just need to fix their backend... 

And ... not randomly lose data.... (check the logs for 623...)

 

Yeah, I saw the notes in the changelog. That's pretty bad on Amazon's part. Are you guys in touch at all with them regarding the disappearing data issues, or are you still prioritizing the next stable beta release before focusing on ACD again?


  • 0

Prioritizing a stable release. 

 

There isn't anything we can do about this. This is ENTIRELY an Amazon issue. It's their servers that are having that issue. Aside from keeping the entire drive cached locally... there isn't a solution. :(

 

And I'm sure that Amazon knows. Because if *we* are having this issue, other utilities are as well. 

