Alex

Amazon Cloud Drive - Why is it not supported?

Question

Some people have mentioned that we're not transparent enough with regards to what's going on with Amazon Cloud Drive support. That was definitely not the intention, and if that's how you guys feel, then I'm sorry about that and I want to change that. We really have nothing to gain by keeping any of this secret.
 
Here, you will find a timeline of exactly what's happening with our ongoing communications with the Amazon Cloud Drive team, and I will keep this thread up to date as things develop.
 
Timeline:

  • On May 28th the first public BETA of StableBit CloudDrive was released without Amazon Cloud Drive production access enabled. At the time, I thought that the semi-automated whitelisting process we went through was "Production Access". While that's how it works for other providers, like Dropbox, it became apparent that for Amazon Cloud Drive it's not. Upon closer reading of their documentation, the whitelisting process actually granted us only "Developer Access". To get upgraded to "Production Access", Amazon requires direct email communication with the Amazon Cloud Drive team.
  • We originally submitted our application for production access on Aug. 15 over email:

Hello,
 
I would like to submit my application to be approved for production use with the Amazon Cloud Drive API.
 
It is using the "StableBit CloudDrive" security profile on the [...] amazon account. I'm sorry for the other extraneous security profiles on that account, but I've been having issues getting my security profile working, and I've been in touch with Amazon regarding that in a different email thread, but that's now resolved.
 
Application name: StableBit CloudDrive
Description: A secure virtual hard drive, powered by the cloud.
Web site: https://stablebit.com/CloudDrive
Operating system: Microsoft Windows
 
Summary:
An application for Microsoft Windows that allows users to create securely encrypted virtual hard drives (AES) which store their data in the cloud provider of their choice. Only the user has the key to unlock their data for ultimate security. We would like to enable Amazon Cloud Drive for production use with our application.
 
We are still in BETA and you can see the website for a more elaborate description of our application and its feature set.
 
Download *:

  • x64: [...]
  • x86: [...]
* Please don't download the version from stablebit.com as it does not have the correct code for Amazon Cloud Drive. We will update that version once we get approved.

 
System requirements:
  • Microsoft Windows Vista or newer.
  • A broadband always-on Internet connection.
  • An internal hard drive with at least 10 GB of free space.
Testing notes:
  • If you're not sure whether you need the x64 or the x86 version, please download the x64 version. The x86 version is provided for compatibility with older systems.
How to use:

 
The application is designed to be very easy to use and should be self-explanatory, but for the sake of completeness, here are detailed instructions on how to use it.
  • Install the application.
  • The user interface will open shortly after it's installed. Or you can launch it from the Start menu under StableBit CloudDrive.
  • Click Get a license...
  • Activate the fully functional trial, or enter the following retail activation ID: {8523B289-3692-41C7-8221-926CA777D9F8-BD}
  • Click Connect... next to Amazon Cloud Drive.
  • Click Authorize StableBit CloudDrive...
  • A web browser will open where you will be able to log into your Amazon account and authorize StableBit CloudDrive to access your data.
  • Copy the Access Key that you are given into the application and enter a Name for this connection (it can be anything).
  • Now you will be able to click Create... to create a new virtual drive.
  • On the dialog that pops up, just click the Create button to create a default 10 GB drive.
Please let me know if you experience any issues or need any more information.

 
I look forward to getting our application approved for use with Amazon Cloud Drive.
 
Kind Regards,
Alex Piskarev
Covecube Inc.

 

  • On Aug. 24th I wrote in requesting a status update; no one had replied, so I had no idea whether the email had even been read.
  • On Aug. 27th I finally got an email back letting me know that our application was approved for production use. I was pleasantly surprised.

Hello Alex,
 
 
We performed our call pattern analysis this week and we are approving StableBit CloudDrive for production. The rate limiting changes will take effect in the next couple minutes.
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • On Sept 1st, after some testing, I wrote another email to Amazon letting them know that we were having issues with Amazon not respecting the Content-Type of uploads, and with the new permission scopes that had been changed since we initially submitted our application for approval.

Amazon,
 
That's great news, and thank you for doing that. However, I am running into a small permission issue. The new Amazon read scopes are causing a problem with how our application stores data in the cloud. This wasn't an issue with earlier builds as they were using the older scopes.
 
At the heart of the problem is that the Content-Type: application/octet-stream is not being used when we, for example, upload what may look like text files to the cloud. All of our files can be treated as binary, but this behavior means that we need additional permissions (clouddrive:read_all) in order to access those files.
 
It turns out what we really need is: clouddrive:read_all clouddrive:write
 
This is for the "StableBit CloudDrive" security profile.
 
I know that those scopes were originally selected in the security profile whitelisting form, but how do I go about changing those permissions at this point?
 
Should I just re-whitelist the same profile?
 
Thank You,
Alex Piskarev
Covecube Inc.

  • No one answered this particular email until...
  • On Sept 7th I received a panicked email from Amazon addressed to me (with CCs to other Amazon addresses) letting me know that Amazon was seeing unusual call patterns from one of our users.

Hi Alex,
 
We are seeing an unusual pattern from last 3-4 days. There is a single customer (Customer ID: A1SPP1QQD9NHAF ) generating very high traffic from stablebit to Cloud Drive (uploads @ the rate of 45 per second). This created few impacts on our service including hotspotting due to high traffic from single customer.
 
As a quick mitigation, we have changed the throttling levels for stablebit back to development levels. You will see some throttling exceptions until we change it back to prod level. We need your help in further analysis here. Can you please check what is happening with this customer/ what is he trying to do? Is this some sort of test customer you are using?
 
Regards,

  • On Sept 11th I replied explaining that we do not keep track of what our customers are doing, and that our software can scale very well, as long as the server permits it and the network bandwidth is sufficient. Our software does respect 429 throttling responses from the server and it does perform exponential backoff, as is standard practice in such cases. Nevertheless, I offered to limit the number of threads that we use, or to apply any other limits that Amazon deems necessary on the client side.

Amazon,
 
Thanks for letting me know that this is an issue. I did notice the developer throttling limits take effect.
 
Right now, our software doesn't limit the number of upload threads that a user can specify. Normally, our data is stored in 1 MB chunks, so theoretically speaking, if someone had sufficient upload bandwidth they could upload x50 1 MB chunks all at the same time.
 
This allows our software to scale to the user's maximum upload bandwidth, but I can see how this can be a problem. We can of course limit the number of simultaneous upload threads as per your advice.
 
What would be a reasonable maximum number of simultaneous upload threads to Amazon Cloud Drive? Would a maximum of x10 uploads at a time suffice, or should we lower that even further?
 
Thanks,
Alex

I highlighted on this page the key question that really needs to be answered. Also, please note that Amazon obviously knows who this user is, since that user has to be an Amazon customer in order to be logged into Amazon Cloud Drive. Also note that instead of banning or throttling that particular customer, Amazon chose to block the entire user base of our application.
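For context on the throttling behavior mentioned above: exponential backoff on 429 responses is standard practice, and a minimal sketch looks like the following. The base delay, cap, and retry count here are illustrative only, not StableBit's actual values.

```python
import random

def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Yield exponentially growing retry delays with random jitter.

    After each 429 response the client waits base * 2**attempt seconds
    (capped at 'cap'), jittered downward so that many clients don't
    retry in lockstep.
    """
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        yield delay * random.uniform(0.5, 1.0)
```

The jitter factor is a common refinement; the essential property is that each successive retry waits roughly twice as long, up to a fixed ceiling.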

  • On Sept. 16th I received a response from Amazon.

Hello Alex,
 
 
Thank you for your patience. We have taken your insights into consideration, and we are currently working on figuring out the optimal limit for uploads that will allow your product to work as it should and still prevent unnecessary impact on Cloud Drive. We hope to have an answer for you soon.
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • By Sept. 21st I hadn't heard anything back, so I sent them this.

Amazon,
 
Thanks for letting me know, and I await your response.
 
Regards,
Alex

  • I waited until Oct. 29th and no one answered. At that point I informed them that we were going ahead with the next public BETA regardless.

Amazon,
 
We're going ahead with our next public BETA release. We will, for now, categorize your provider as "experimental". That means it won't show up in our list of available providers by default. We're going to release a new public BETA by the end of this week.
 
In that BETA, we will add an upload rate limit for Amazon Cloud Drive. It will never allow an upload rate faster than 20 megabits per second. This is the reason I'm writing to you: if that's too fast, please let me know and we'll change it to whatever is acceptable to you before the release.
 
After the BETA is out, I'll send you another email asking whether you can reinstate our production status.
 
Thank you,
Alex

  • Sometime in early November, an Amazon employee on the Amazon forums started claiming that we weren't answering our emails. This was amidst their users asking why Amazon Cloud Drive is not supported by StableBit CloudDrive. So I posted a reply there, listing a similar timeline.
  • On Nov. 11th I sent another email to them.

Amazon,
 
According to this thread on your forums, [...] from Amazon is saying that you can't get in touch with the developer of StableBit CloudDrive: [...]
 
I find this very interesting, since I'm the lead developer of StableBit CloudDrive, and I'm still awaiting your response from my last 2 emails.
 
In any case, we've released our next public BETA without Amazon Cloud Drive production access. This will likely be the last public BETA, and the next release may be the 1.0 Release final.
 
These are the limits that we've placed on Amazon Cloud Drive, on the client side:
Maximum DL: 50 Megabits /s
Maximum UL: 20 Megabits /s
Maximum number of UL connections: 2
Maximum number of DL connections: 4
As I've said before, we can change these to anything that you want, but since I haven't received a response, I had to make a guess.
 
Of course, we would like you to approve us for production access, if at all possible.
 
Our public BETA is available here: https://stablebit.com/CloudDrive
 
To gain access to the Amazon Cloud Drive provider:
Click the little gear at the top of the StableBit CloudDrive UI.
Select Troubleshooting
Check Show experimental providers
You'll then see Amazon Cloud Drive listed in the list of providers.
 
The current public BETA is using the above limits for Amazon Cloud Drive, but the "StableBit Applications" security profile is currently not approved for production use.
 
If you could please let me know whether there's anything at all we can do on our end to get approved as quickly as possible, or just write back with a status update on where we stand, I would greatly appreciate it.
 
Thank You,
Alex

  • On Nov 13th Amazon finally replied.

Hello Alex,
 
Thank you for your emails and for providing details on the issue at hand. Looks like we've gotten a little out of sync! We would like to do everything we can to make sure we continue successfully integrating our products.
 
In terms of your data about bandwidth and connections, do you have those figures in terms of API calls per second? In general, a client instance on developer rate limits should make up to about [...] read and [...] write calls per second. On the production rates, they could go up to roughly [...] read and [...] write calls per second.
 
Thank you for providing a new Beta build, we will be testing it out over the weekend and examining the traffic pattern it generates.
 
Best regards,
Amazon Cloud Drive 3P Support

I redacted the limits; I don't know whether they want those public.

  • On Nov. 19th I sent them another email asking to clarify what the limits mean exactly.

Amazon,
 
I'm looking forward to getting our app approved for production use; I'm sure that our mutual customers would be very happy.
 
So just to sum up how our product works... StableBit CloudDrive allows each of our users to create 1 or more fully encrypted virtual drives, where the data for each drive is stored with the cloud provider of their choosing.
 
By default, our drives store their data in 1 MB sized opaque chunks in the provider's cloud store.
 
And here's the important part regarding your limits. Currently, for Amazon Cloud Drive we have limited the Download / Upload rate per drive to:
 
Upload: 20 Megabits/s (or 2.5 Megabytes/s)
Download: 50 Megabits/s (or 6.25 Megabytes/s)
 
So, converting that to API calls:
 
Upload: about 2.5 calls/s
Download: about 6.25 calls/s
 
And, let's double that for good measure, just to be safe. So we really are talking about less than 20 calls/s maximum per drive.
 
In addition to all of that, we do support exponential backoff if the server returns 429 error codes.
 
Now one thing that isn't clear to me, when you say "up to roughly [...] read and [...] write calls per second" for the production limits, do you mean for all of our users combined or for each Amazon user using our application? If it's for all of our users combined, we may have to lower our limits dramatically.
 
Thank you,
Alex
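The conversion in the email above is easy to sanity-check: with 1 MB chunks, each API call moves roughly one megabyte, so megabytes per second equals calls per second. A quick sketch of the arithmetic:

```python
def calls_per_second(megabits_per_s, chunk_mb=1.0):
    """Convert a bandwidth cap into API calls/s for fixed-size chunks.

    Each upload/download call transfers one chunk, so calls/s is just
    the bandwidth in megabytes/s divided by the chunk size in MB.
    """
    megabytes_per_s = megabits_per_s / 8.0
    return megabytes_per_s / chunk_mb
```

At the stated caps, a 20 Mbit/s upload limit works out to 2.5 calls/s and a 50 Mbit/s download limit to 6.25 calls/s, matching the figures in the email.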

  • On Nov 25th Amazon replied with some questions and concerns regarding the number of API calls per second:

Hello Alex,
 
We've been going over our analysis of StableBit. Overall things are going very well, and we have a few questions for you.

  • At first we were a little confused by the call volume it generated. Tests working with less than 60 files were ending up being associated with over 2,000 different API calls. Tests designed to upload only were causing hundreds of download calls. Is it part of the upload process to fetch and modify and update 1Mb encrypted chunks that are already uploaded? Or maybe the operating system attempting to index its new drive?
  • What kind of file size change do you expect with your encryption? ~100Mb of input files became ~200Mb worth of encrypted files. This is not a problem for us, just wondering about the padding in the chunks. 
  • In regards to call rates, things look to be right in line. You quoted that uploads and downloads should combine to be calling at 7.75 calls per second. Taking a histogram of the calls and clustering them by which second they happened in gave us good results. 99.61% of seconds contained 1-7 calls!
Best regards,
 
Amazon Cloud Drive 3P Support

 

Amazon,
 
Glad to hear back from you. Here are your answers and I have some good news regarding the calls per second that StableBit CloudDrive generates on uploads.
 
First, your answers:

  • Regarding the calls per second:
    • The key thing to understand is that StableBit CloudDrive doesn't really know anything about local files. It simply emulates a virtual hard drive in the kernel. To StableBit CloudDrive, this virtual drive looks like a flat address space that can be read from or written to. That space is virtually divided into fixed-size chunks, which are synchronized with the cloud on demand.
    • So calls to the Amazon API are created on demand, only when necessary, in order to service existing drive I/O.
    • By default, for every chunk uploaded, StableBit CloudDrive will re-download that chunk to verify that it's stored correctly in the cloud. This is called upload verification and can be turned off in Drive Options -> Data integrity -> Upload verification. This will probably explain why you were seeing download requests with tests only designed for uploading. We can turn this off permanently for Amazon Cloud Drive, if you'd like.
    • It is possible that an upload will generate a full chunk download and then a full re-upload. This is done for providers that don't have any way of updating a part of a file (which are most cloud providers).
    • As far as the calls / second on uploads, I think that we've dramatically decreased these. I will explain in a bit.
  • Regarding the size change when encrypted.
    • There is very negligible overhead for encrypted data (< 0.01 %). 
    • 100 MB of data stays at roughly 100 MB when encrypted. It's simply encrypted with AES256-CBC with a key that only the user knows.
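The on-demand chunk model described above can be sketched roughly as follows. The provider interface (put/get) and function names here are hypothetical, for illustration only, not StableBit's real internals; with upload verification enabled, each uploaded chunk is downloaded back and compared.

```python
CHUNK_SIZE = 1024 * 1024  # default 1 MB chunk

def chunk_for_offset(offset):
    """Map a flat drive offset to (chunk_index, offset_within_chunk)."""
    return offset // CHUNK_SIZE, offset % CHUNK_SIZE

def upload_chunk(provider, index, data, verify=True):
    """Upload one chunk; with verification on, re-download and compare.

    'provider' is any object with put(index, data) / get(index)
    methods -- an assumed interface standing in for a cloud API.
    """
    provider.put(index, data)
    if verify and provider.get(index) != data:
        raise IOError(f"verification failed for chunk {index}")
```

This also illustrates why an upload-only test generates download calls: every verified upload triggers a matching read.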
----

 
Regarding the calls per second on uploads, our latest builds implement large chunk support (up to 100 MB) for most providers, including Amazon Cloud Drive. This should dramatically decrease the number of calls per second that are necessary for large file uploads.
 
As a point of comparison, Google Drive, for example, limits us to 10 calls / second / user, which is working well, but we can probably go even lower.
 
The large chunk support was implemented in 1.0.0.421, and you can download that here:
http://dl.covecube.com/CloudDriveWindows/beta/download/
 
If you want to really dig into the details of how I/O works with us, or get more information on large chunk support, I made a post about that here:
http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/
 
Again, thank you very much for taking the time to evaluate our product, and please let me know what the next steps should be to get to production access and whether you need any changes or additional limits applied on our end.
 
Kind Regards,
Alex
Covecube Inc.

 

  • On Dec. 10th 2015, Amazon replied:

Hello Alex,
 
 
Thank you for this information. We are running a new set of tests on the build you mentioned, 1.0.0.421. Once we have those results, we can see what we want to do with Upload Verification and/or setting call limits per second per user.
 
Your details about file size inflation don't match with what we saw in our most recent tests. We'll try a few different kinds of upload profiles this time and let you know what we see. 
 
Thank you for your continued patience as we work together on getting this product to our customers.
 
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme. Some number of bytes go in and the same exact number of bytes come out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (which is 0.006 % overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it up to capacity, and then checking how much data is used on the server.

    Here's the data for a 5 GB encrypted drive:

    [screenshot: drive usage for a 5 GB encrypted cloud drive]

    Yes, it's 5 GBs.
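The overhead figure quoted above is simple to verify: a 64-byte HMAC-SHA512 tag per 1 MB unit of data works out to about 0.006%.

```python
def hmac_overhead_percent(unit_bytes=1024 * 1024, tag_bytes=64):
    """Percent size overhead of a fixed authentication tag per data unit."""
    return 100.0 * tag_bytes / unit_bytes

# 64 bytes per 1 MB unit is roughly 0.006% -- nowhere near the 2x
# inflation Amazon reported seeing.
```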

 

 


Hello Alex,
 
We have an update and some questions regarding our analysis of build 1.0.0.421. 
 
This particular test involved a 1.5Gb folder containing 116 photos. The client ran for 48 hours before we stopped it. In each of those 48 hours, the client generated 104-205 calls, with an average of 168. (Down from ~12,000 in previous tests.)
 
Afterwards, we checked the test account through the browser. There are 163 files in the StableBit folder. 161 of them are chunk files. 8 chunks are 10 Mb in size. The remaining 153 chunks are 0 Bytes in size. Approximately half of all calls being made in each hour are resulting in errors. We tracked the request_ids of the errored calls into our logs and found many errors similar to this one:
 
        NameConflictException: Node with the name c71e006b-d476-45b4-84e0-91af15bf2683-chunk-300 already exists under parentId 0l1fv1_kQbyzeme0vIC1Rg conflicting NodeId: p8-oencZSVqV-E_x2N6LUQ
 
If this error is familiar to you, do you have any recommendations for steps we might have missed in configuration, or a different build to try? 
 
Best regards,
Amazon Cloud Drive 3P Support 
 
To clarify the error issue here (I'm not 100% certain about this): Amazon doesn't provide a good way to ID these files. We have to try to upload them again, and then grab the error ID to get the actual file ID so we can update the file. This is inefficient and would be solved by a more robust API that included search functionality, or a better file-list call. So this is basically "by design" and currently required.
-Christopher
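A rough sketch of the workaround Christopher describes, with hypothetical class and method names (the real API surfaced the conflicting NodeId inside the NameConflictException error, which is the only practical way to learn it without an efficient lookup-by-name call):

```python
class NameConflictError(Exception):
    """Stand-in for the API's NameConflictException (assumed shape)."""
    def __init__(self, conflicting_node_id):
        super().__init__(f"node already exists: {conflicting_node_id}")
        self.conflicting_node_id = conflicting_node_id

def ensure_chunk_node_id(client, name, data):
    """Upload a chunk; on a name conflict, recover the existing node's ID.

    'client' is a hypothetical API wrapper whose upload() raises
    NameConflictError carrying the conflicting node's ID.
    """
    try:
        return client.upload(name, data)    # new node created
    except NameConflictError as exc:
        return exc.conflicting_node_id      # node already existed; reuse it
```

The wasted upload attempt per existing chunk is exactly the inefficiency Christopher is pointing at.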
 
 
Unfortunately, we haven't pursued this with Amazon recently. This is due to a number of big bugs that we have been following up on. However, these bugs have led to a lot of performance, stability and reliability fixes. And a lot of users have reported that these fixes have significantly improved the Amazon Cloud Drive provider. That is great to hear, as it may help get the provider to a stable/reliable state.
 
That said, once we get to a more stable state (after the next public beta build (after 1.0.0.463) or a stable/RC release), we do plan on pursuing this again.  
 
But in the meanwhile, we have held off on this as we want to focus on the entire product rather than a single, problematic provider. 
-Christopher
 
 
Amazon has "snuck in" additional guidelines that don't bode well for us. 
  • Don’t build apps that encrypt customer data

 

What does this mean for us? We have no idea right now.  Hopefully, this is a guideline and not a hard rule (other apps allow encryption, so that's hopeful, at least). 

 

But if we don't get re-approved, we'll deal with that when the time comes (though, we will push hard to get approval).

 

- Christopher (Jan 15. 2017)

 

If you haven't seen already, we've released a "gold" version of StableBit CloudDrive. Meaning that we have an official release! 
Unfortunately, because of increasing issues with Amazon Cloud Drive that appear to be ENTIRELY server side (drive issues due to "429" errors, odd outages, etc.), and because we are STILL not approved for production status (despite sending off additional emails a month ago requesting approval, or at least an update), we have dropped support for Amazon Cloud Drive.

This does not impact existing users, as you will still be able to mount and use your existing drives. However, we have blocked the ability to create new drives for Amazon Cloud Drive.   

 

This was not a decision that we made lightly, and while we don't regret this decision, we are saddened by it. We would have loved to come to some sort of outcome that included keeping full support for Amazon Cloud Drive. 

-Christopher (May 17, 2017)

Edited by Christopher (Drashna)
Cleaned up Quote info to make more readable, Added Release/Gold notification

197 answers to this question

Could they look to add checksum verification on Amazon's end? This would save everyone a pile of bandwidth, but add a pile of compute time on Amazon's end, I guess.

 

1. Compute the chunk checksum.

2. Upload the chunk.

3. Call the Amazon checksum API.

4. Receive the Amazon-generated checksum (only a few bytes, depending on MD5 or SHA-1, etc.).

5. Compare the checksums from steps 1 and 4. If they match, move to the next file; if they differ, restart at step 1 for the current file.
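Sketched in code, with a hypothetical server object standing in for the proposed Amazon-side checksum endpoint (no such call existed in the real API, which is the point of the suggestion):

```python
import hashlib

def upload_with_checksum(server, name, data, max_attempts=3):
    """Upload a chunk, then compare local and server-side MD5 digests.

    'server' is a hypothetical object with upload(name, data) and
    checksum(name) -> hex digest methods. On a mismatch the whole
    sequence restarts from step 1, up to max_attempts times.
    """
    for _ in range(max_attempts):
        local = hashlib.md5(data).hexdigest()   # step 1: local checksum
        server.upload(name, data)               # step 2: upload chunk
        if server.checksum(name) == local:      # steps 3-5: fetch & compare
            return True                         # match: move to next file
    return False                                # gave up after retries
```

Compared with download-and-compare verification, this trades a full chunk download for a few dozen bytes of digest per chunk.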

----


There is a bunch of stuff that Amazon should be doing on their end. Starting with properly throttling API calls instead of punishing us for using too many calls. 

 

If that sounds a bit bitter .... It is. And it's meant to be an example of how poorly Amazon has handled their entire Cloud Drive service. Not just the API. 

 

Basically the entire backend is poorly designed and managed, and the Amazon Cloud Drive team really needs to call the S3 team and step up their game. 

----


What is the harm in giving us a few more threads for Amazon? I used the old build that didn't have the 2-thread cap and it worked great. However, you guys kept adding new features, so eventually I said OK, I've got to upgrade. I'm as mad at Amazon as everyone else, but really, if I just had a few more threads, problem solved for me.

----


Right now? We're stuck in development status, which means the more active connections from our software, the more the entire product gets throttled. More threads means more connections, which means a worse experience for everyone. As in, unusable. And that includes for Amazon (who needs to test our program now).

----


How are all these other tools popping up on GitHub that access ACD not having the same issue? For example, https://github.com/yadayada/acd_cli

*EDIT*

Actually, upon reading a bit into it, I think they are approved.

Just baffled as to why the hell you guys can't get approved.

----


 

They're probably running in production status.

 

Perhaps the 'chunky' nature of CloudDrive makes it stand out with high calls/sec, even though the intelligent caching side of it causes less data to be transferred. Storing a 100 MB file in 1 MB chunks could mean 100 upload requests rather than just 1. Also, upload verification doubles the data used when uploading, though IIRC this is necessary, as files sometimes 'disappear' unexpectedly at Amazon's end.

----

That's why....

They're cheating... Heavily.

 

 

https://github.com/yadayada/acd_cli/blob/master/docs/authorization.rst

 

Specifically, they're requiring each user to sign up for a developer account, then input that info and use it. Meaning that each and every user is using a different developer account.

Something you're *not* supposed to do. You're supposed to use a single developer account for your entire program.

 

In fact, this may be part of why Amazon has been a nightmare about approval.... because they have hundreds or thousands of developer accounts being used by people using ACD_CLI. 

----


Can't beat em, join em.

 

Sent from my Nexus 6P using Tapatalk

----

Is the first/default method not the same as what you guys are doing?

This first method is what we're doing. 

 

But they offer the option to use your own security profile.

 

Can't beat em, join em.

 

Sent from my Nexus 6P using Tapatalk

 

We've talked about it. Extensively.

 

However, the issue isn't just the production status. The entire service has some pretty fundamental issues, as well. Which we've reported to Amazon.  

----


 

Well, I might try to sign up for a Google for Work account.

Tired of Amazon. It's weird; normally their customer service is top-notch.

----


May be a good idea for the meanwhile.

 

And yes, it is. But then again, S3 is also very top-notch in regard to API and documentation. So it really seems that this is a COMPLETELY different team, with absolutely no interaction with the S3 team (which is a shame, and... troubling, IMO).

----


 

Do you think Google Drive can support everything, if not work much better than Amazon Cloud Drive in its current state? Particularly for media streaming?


I hope some progress is made soon on getting Amazon Cloud Drive approved for production.

 

In the meantime, does anyone know of a way to sign up for a Google Apps Unlimited account without having to pay for four other users? An organization requires five users to qualify and Google Apps Unlimited is $10 per user per month.


Is there any news on this? I see we can now use custom ClientIDs and ClientSecrets - I assume to use our own dev profile with Amazon.

 

Also trying the latest beta (.456) and it's throwing 'Chunk 0 was not found' errors when creating a new drive on ACD.
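For anyone wondering what the custom ClientID/ClientSecret option amounts to: under OAuth 2.0, the application exchanges a refresh token for an access token using its own credentials instead of the vendor's shared ones. A minimal Python sketch, assuming the standard Login with Amazon token endpoint; this is illustrative only, not CloudDrive's actual code, and the credential values are placeholders:

```python
# Hypothetical sketch: building the token-refresh request for a custom
# Amazon security profile (your own client_id/client_secret from the
# Amazon developer console, rather than the app vendor's shared profile).
import urllib.parse

# Login with Amazon's OAuth2 token endpoint.
TOKEN_ENDPOINT = "https://api.amazon.com/auth/o2/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> tuple[str, bytes]:
    """Return (url, form-encoded body) for an OAuth2 refresh_token grant."""
    body = urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("ascii")
    return TOKEN_ENDPOINT, body
```

POSTing that body (Content-Type: application/x-www-form-urlencoded) would return a short-lived access token tied to your own profile's rate limits rather than the shared developer profile's.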


To be honest, it's not really a high priority, as we want to get a stable version of StableBit CloudDrive out sooner rather than later. And fighting with Amazon Cloud Drive may take weeks, months, or longer.

 

However, the "Chunk 0" issue is new, caused by some of the backend changes. Alex is aware of it and looking into it.

Edit: Fixed in the 1.0.0.459 build: http://dl.covecube.com/CloudDriveWindows/beta/download/

 

 

But again, we really do recommend against using the Amazon Cloud Drive provider right now. As we've said, it's just not reliable or stable. 


And fighting with Amazon Cloud Drive may take weeks, months or longer.

 

…But again, we really do recommend against using the Amazon Cloud Drive provider right now. As we've said, it's just not reliable or stable.

I hope you continue to make that fight a priority while doing your best to work around their lousy API implementation.

 

Without jumping through hoops for Google Apps Unlimited, Amazon Cloud Drive is, currently, the only affordable option for unlimited cloud storage.


Well, adding additional providers, and fixing or working around issues in the Amazon Cloud Drive provider, are planned for after the stable release.

 

 

And yes, it's about the cheapest... the problem is, Amazon really seems to be trying to stick to the maxim of "You get what you pay for". And that's unfortunate for consumers. 


FWIW, .460 seems to be usable at the moment, much more so than when I last tried several months ago. While not line speed, speeds are much improved.

Whether this is due to optimizations you guys have done recently (chunk sizes, chunk caching, etc.), fewer people using it with ACD, or my just getting lucky when I tried it is hard to say. Regardless, I won't be storing anything worth losing on it until it's no longer experimental.


Well, there have been a lot of backend changes recently, including to the chunk code, so they may have incidentally improved performance for Amazon Cloud Drive significantly (we try to share as much code between providers as possible, as that's easier and more efficient in the long term).

 

But it may have also been a timing thing too. :)
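The shared-chunk-layer idea mentioned above can be sketched roughly like this: common code splits a drive's data into fixed-size chunks and reassembles them, and each provider only implements raw get/put of named chunks. Every name and the chunk size here are hypothetical, not CloudDrive's actual internals:

```python
# Illustrative sketch of a provider-agnostic chunk layer. Per-provider code
# implements ChunkStore; splitting/reassembly is shared by all providers.
from abc import ABC, abstractmethod

CHUNK_SIZE = 1024 * 1024  # 1 MiB -- an assumed chunk size, for illustration


class ChunkStore(ABC):
    """Minimal interface a storage provider must implement."""

    @abstractmethod
    def put(self, name: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, name: str) -> bytes: ...


class InMemoryStore(ChunkStore):
    """Stand-in provider used here instead of a real cloud backend."""

    def __init__(self) -> None:
        self._chunks: dict[str, bytes] = {}

    def put(self, name: str, data: bytes) -> None:
        self._chunks[name] = data

    def get(self, name: str) -> bytes:
        return self._chunks[name]


def write(store: ChunkStore, data: bytes) -> int:
    """Split data into chunks named chunk-0, chunk-1, ...; return chunk count."""
    count = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        store.put(f"chunk-{count}", data[offset:offset + CHUNK_SIZE])
        count += 1
    return count


def read(store: ChunkStore, count: int) -> bytes:
    """Reassemble the original data from its numbered chunks."""
    return b"".join(store.get(f"chunk-{i}") for i in range(count))
```

With this split, an optimization in the shared `write`/`read` path (caching, sizing, retries) benefits every provider at once, which would explain ACD speeding up "incidentally."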


Also, I'm getting the same speeds with my own (dev status) custom security profile, so I guess that rules out throttling from lots of people on a single profile.

If that's the case, then Amazon may (finally) be improving some of their backend to better handle things. Or the changes that we've made really have improved performance.

Either way, glad to hear it.


Is there any chance of having the thread limit increased slightly when using a custom Amazon security profile? If the purpose of the thread limit was to allow more concurrent users on the dev status profile, then this shouldn't be an issue with a custom profile.
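To illustrate what such a client-side thread limit typically looks like: a semaphore caps the number of in-flight requests, and the cap could be a per-profile setting. This is a hypothetical sketch of the general technique, not CloudDrive's implementation, and the default value is invented:

```python
# Sketch of a client-side upload concurrency cap. With a shared developer
# profile a low cap spreads Amazon's rate limit across all users; with a
# custom security profile the cap could, in principle, be raised.
import threading

MAX_UPLOAD_THREADS = 2  # assumed shared-profile default, for illustration


class Uploader:
    def __init__(self, max_threads: int = MAX_UPLOAD_THREADS) -> None:
        self._sem = threading.Semaphore(max_threads)
        self._lock = threading.Lock()
        self._active = 0
        self._peak = 0  # highest concurrency observed, for demonstration

    def upload(self, chunk: bytes) -> int:
        with self._sem:  # blocks once max_threads uploads are in flight
            with self._lock:
                self._active += 1
                self._peak = max(self._peak, self._active)
            try:
                return len(chunk)  # stand-in for the real network call
            finally:
                with self._lock:
                    self._active -= 1
```

However many worker threads call `upload`, the semaphore guarantees at most `max_threads` requests run concurrently, which is why raising the limit only for custom profiles would be a self-contained change.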

