
Talyrius


Posts posted by Talyrius

  1. However, providing some sort of service like this is something that we've talked about internally, in large part because of StableBit CloudDrive and because this is a common request. (And it would fit in well with our products.)

     

    Oh, cool! Have you discussed which underlying protocols would be used? Rsync over SSH should work quite well for this.
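
    For what it's worth, a minimal sketch of what an rsync-over-SSH invocation could look like. The host, user, and paths are hypothetical, and this is only an illustration of the suggested protocol pairing, not anything StableBit has committed to:

```python
def build_rsync_command(src, user, host, dest, ssh_port=22):
    """Build an rsync-over-SSH command line (all parameters hypothetical).

    -a  archive mode (preserve permissions, times, symlinks)
    -z  compress data during transfer
    -e  use SSH as the remote transport, so data is encrypted in flight
    """
    return [
        "rsync", "-az",
        "-e", f"ssh -p {ssh_port}",
        src,
        f"{user}@{host}:{dest}",
    ]

cmd = build_rsync_command("/data/", "backup", "example.com", "/srv/backups/")
print(" ".join(cmd))
# To actually execute it: subprocess.run(cmd, check=True)
```

    The appeal of this combination is that rsync handles delta transfers (only changed blocks are sent) while SSH handles authentication and encryption.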

  2. You can surely continue working towards a stable release in the downtime between their unpunctual responses, yes?

     

    Also, there are no other services that I have an interest in pursuing other than ACD, given that it's unlimited and the price point it's at.

    This is true for me as well. If CloudDrive gets approved for production with ACD, this is where most of your license sales are going to be coming from.
  3. It'd be nice if an option was added that let us specify a DSCP value. When employing QoS at the router, this makes it much easier to differentiate between traffic that makes use of the same ports.

     

    Edit:

    AF11 (DSCP 10) would be a good default value.
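
    For reference, setting a DSCP value from application code is just a socket option. A minimal Python sketch (behavior is platform-dependent; Windows generally ignores IP_TOS set this way):

```python
import socket

AF11_DSCP = 10               # Assured Forwarding class 1, low drop precedence
TOS_VALUE = AF11_DSCP << 2   # DSCP occupies the upper 6 bits of the TOS byte -> 40

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Tag outgoing packets so a QoS-aware router can classify them by DSCP
# instead of having to guess from port numbers.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # -> 40 on Linux
sock.close()
```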

  4. And fighting with Amazon Cloud Drive may take weeks, months or longer.

     

    …But again, we really do recommend against using the Amazon Cloud Drive provider right now. As we've said, it's just not reliable or stable.

    I hope you continue to make that fight a priority while doing your best to work around their lousy API implementation.

     

    Without jumping through hoops for Google Apps Unlimited, Amazon Cloud Drive is, currently, the only affordable option for unlimited cloud storage.

  5. I hope some progress is made soon on getting Amazon Cloud Drive approved for production.

     

    In the meantime, does anyone know of a way to sign up for a Google Apps Unlimited account without having to pay for four other users? An organization requires five users to qualify and Google Apps Unlimited is $10 per user per month.

  6. I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme. Some number of bytes go in and the same exact number of bytes come out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (which is 0.006 % overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it up to capacity, and then checking how much data is used on the server.

    Are chunks fully or partially allocated upon creation? If they're fully allocated, this could explain Amazon's confusion.
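
    The overhead figure quoted above is easy to verify with arithmetic:

```python
# Checksum overhead for an encrypted cloud drive: 64 bytes of
# HMAC-SHA512 appended per 1 MB unit of data, as described above.
UNIT_BYTES = 1024 * 1024   # one 1 MB unit of encrypted data
HMAC_BYTES = 64            # HMAC-SHA512 signature per unit

overhead_pct = HMAC_BYTES / UNIT_BYTES * 100
print(f"{overhead_pct:.4f}%")  # -> 0.0061%
```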
  7. The problem with a faster internet connection is that it isn't that simple. Just because you may have gigabit internet doesn't mean that the server you're connecting to can support that speed.

    No, of course not. It would need to be tuned to your throughput with the provider.

     

    But this won't affect the performance of the drive. Specifically, each chunk is accessed by a single thread. That means the more threads you use, the more of these chunks are accessed in parallel.  So, with enough threads, you could ABSOLUTELY saturate your bandwidth.

     

    Actually, this SPECIFICALLY was the issue with Amazon Cloud Drive... and what Alex means by "scales well".

    Yes, but using larger chunks will reduce the number of threads necessary to saturate your bandwidth, thereby reducing API call overhead.

     

    But to summarize here, it's a complicated issue, and we do plan on re-evaluating in the "near future".

    That's good to hear.
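
    To illustrate the trade-off, here's a rough back-of-the-envelope model. It's my own simplifying assumption about how per-request latency and per-connection throughput interact, not a description of CloudDrive's internals:

```python
import math

def threads_to_saturate(link_mbps, per_conn_mbps, chunk_mb, api_latency_s):
    """Estimate threads needed to saturate a link, assuming each thread
    repeatedly fetches one chunk, paying a fixed API latency per request
    plus the transfer time at the per-connection rate."""
    transfer_s = chunk_mb * 8 / per_conn_mbps                 # time to move one chunk
    effective_mbps = chunk_mb * 8 / (api_latency_s + transfer_s)
    return math.ceil(link_mbps / effective_mbps)

# 1 Gbit/s link, 100 Mbit/s per connection, 0.5 s API latency per request
# (all numbers illustrative):
print(threads_to_saturate(1000, 100, 1, 0.5))    # 1 MB chunks
print(threads_to_saturate(1000, 100, 10, 0.5))   # 10 MB chunks: far fewer threads
```

    Under this model, larger chunks amortize the fixed per-request latency over more data, which is why they cut both the thread count and the API call volume.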
  8. Right now, the maximum "whole file" chunk size is capped at 1 MB. It will stay this way for the foreseeable future.

    I hope not!

     

    The reason for this is responsiveness for the disk. We don't want too much of a time delay when accessing the disk. A second or two is fine, but 10 seconds? 20? The larger the whole chunk, the longer this delay may end up being.

    Yes. However, that doesn't take into account faster internet connections. I think a selectable range of 1–100 MB chunks would do a better job of accommodating everyone's circumstances. Using larger chunks also has the benefit of reducing the number of API calls needed to communicate with the provider, especially when accessing large files.
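
    A quick worst-case estimate of that first-access delay, assuming the whole chunk must be downloaded before a read completes and ignoring any caching or prefetching:

```python
def first_read_delay_s(chunk_mb, downlink_mbps):
    """Seconds to fetch one whole-file chunk at a given downlink rate
    (simplified: ignores caching, prefetching, and latency)."""
    return chunk_mb * 8 / downlink_mbps

for chunk in (1, 10, 100):
    # 50 Mbit/s vs. 1000 Mbit/s downlinks (illustrative numbers)
    print(chunk, "MB:",
          round(first_read_delay_s(chunk, 50), 3), "s /",
          round(first_read_delay_s(chunk, 1000), 3), "s")
```

    On a slow link a 100 MB chunk does mean a multi-second stall, but on gigabit it's under a second, which is why a user-selectable chunk size accommodates both cases.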
  9. Can you consider providing us with more control over how our drives are encrypted (mode of operation, algorithm, and key/hash length selection) in a future release? AES-256 in CBC mode is a good default, but I'd prefer to use something stronger.
