
Amazon Cloud Drive - Why is it not supported?


Alex

Question

Some people have mentioned that we're not being transparent enough about what's going on with Amazon Cloud Drive support. That was definitely not the intention, and if that's how you guys feel, then I'm sorry about that and I want to change it. We really have nothing to gain by keeping any of this secret.
 
Here you will find a timeline of exactly what's happening in our ongoing communications with the Amazon Cloud Drive team, and I will keep this thread up to date as things develop.
 
Timeline:

  • On May 28th the first public BETA of StableBit CloudDrive was released without Amazon Cloud Drive production access enabled. At the time, I thought that the semi-automated whitelisting process we went through was "Production Access". While the process is similar to that of other providers, like Dropbox, it became apparent that for Amazon Cloud Drive it's not. Upon closer reading of their documentation, it appears that the whitelisting process actually granted us only "Developer Access". To get upgraded to "Production Access", Amazon requires direct email communication with the Amazon Cloud Drive team.
  • We originally submitted our application for production access approval on Aug. 15 over email:

Hello,
 
I would like to submit my application to be approved for production use with the Amazon Cloud Drive API.
 
It is using the "StableBit CloudDrive" security profile on the [...] Amazon account. I apologize for the other extraneous security profiles on that account; I had been having issues getting my security profile working and was in touch with Amazon about that in a separate email thread, but that's now resolved.
 
Application name: StableBit CloudDrive
Description: A secure virtual hard drive, powered by the cloud.
Web site: https://stablebit.com/CloudDrive
Operating system: Microsoft Windows
 
Summary:
An application for Microsoft Windows that allows users to create securely encrypted (AES) virtual hard drives which store their data with the cloud provider of their choice. Only the user has the key to unlock their data, for ultimate security. We would like to enable Amazon Cloud Drive for production use with our application.
 
We are still in BETA; please see the website for a more detailed description of our application and its feature set.
 
Download *:

  • x64: [...]
  • x86: [...]
* Please don't download the version from stablebit.com as it does not have the correct code for Amazon Cloud Drive. We will update that version once we get approved.

 
System requirements:
  • Microsoft Windows Vista or newer.
  • A broadband always-on Internet connection.
  • An internal hard drive with at least 10 GB of free space.
Testing notes:
  • If you're not sure whether you need the x64 or the x86 version, please download the x64 version. The x86 version is provided for compatibility with older systems.
How to use:

 
The application is designed to be very easy to use and should be self-explanatory, but for the sake of completeness here are detailed instructions on how to use it.
  • Install the application.
  • The user interface will open shortly after installation, or you can launch it from the Start menu under StableBit CloudDrive.
  • Click Get a license...
  • Activate the fully functional trial, or enter the following retail activation ID: {8523B289-3692-41C7-8221-926CA777D9F8-BD}
  • Click Connect... next to Amazon Cloud Drive.
  • Click Authorize StableBit CloudDrive...
  • A web browser will open where you will be able to log into your Amazon account and authorize StableBit CloudDrive to access your data.
  • Copy the Access Key that you are given into the application and enter a Name for this connection (it can be anything).
  • Now you will be able to click Create... to create a new virtual drive.
  • On the dialog that pops up, just click the Create button to create a default 10 GB drive.
Please let me know if you experience any issues or need any more information.

 
I look forward to getting our application approved for use with Amazon Cloud Drive.
 
Kind Regards,
Alex Piskarev
Covecube Inc.

 

  • On Aug 24th I wrote in requesting a status update, because no one had replied and I had no idea whether the email had even been read.
  • On Aug. 27th I finally got an email back letting me know that our application had been approved for production use. I was pleasantly surprised.

Hello Alex,
 
 
We performed our call pattern analysis this week and we are approving StableBit CloudDrive for production. The rate limiting changes will take effect in the next couple of minutes.
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • On Sept 1st, after some testing, I wrote another email to Amazon letting them know that we were having issues with Amazon not respecting the Content-Type of uploads, and that we were also having issues with the permission scopes, which had changed since we initially submitted our application for approval.

Amazon,
 
That's great news, and thank you for doing that. However, I am running into a small permission issue. The new Amazon read scopes are causing a problem with how our application stores data in the cloud. This wasn't an issue with earlier builds, as they were using the older scopes.
 
At the heart of the problem, the Content-Type application/octet-stream is not being respected when we upload, for example, what may look like text files to the cloud. All of our files should be treated as binary, but this behavior means that we need additional permissions (clouddrive:read_all) in order to access those files.
 
It turns out what we really need is: clouddrive:read_all clouddrive:write
 
This is for the "StableBit CloudDrive" security profile.
 
I know that those scopes were originally selected in the security profile whitelisting form, but how do I go about changing those permissions at this point?
 
Should I just re-whitelist the same profile?
 
Thank You,
Alex Piskarev
Covecube Inc.

  • No one answered this particular email until...
  • On Sept 7th I received a panicked email from Amazon addressed to me (with CCs to other Amazon addresses) letting me know that Amazon was seeing unusual call patterns from one of our users.

Hi Alex,
 
We are seeing an unusual pattern over the last 3-4 days. There is a single customer (Customer ID: A1SPP1QQD9NHAF) generating very high traffic from StableBit to Cloud Drive (uploads at a rate of 45 per second). This has had several impacts on our service, including hotspotting due to the high traffic from a single customer.
 
As a quick mitigation, we have changed the throttling levels for StableBit back to development levels. You will see some throttling exceptions until we change it back to the production level. We need your help with further analysis here. Can you please check what is happening with this customer and what they are trying to do? Is this some sort of test customer you are using?
 
Regards,

  • On Sept 11th I replied explaining that we do not keep track of what our customers are doing, and that our software can scale very well as long as the server permits it and the network bandwidth is sufficient. Our software does respect 429 throttling responses from the server and performs exponential backoff, as is standard practice in such cases (a sketch of this behavior follows below). Nevertheless, I offered to limit the number of threads that we use, or to apply any other limits that Amazon deems necessary on the client side.

Amazon,
 
Thanks for letting me know that this is an issue. I did notice the developer throttling limits take effect.
 
Right now, our software doesn't limit the number of upload threads that a user can specify. Normally, our data is stored in 1 MB chunks, so theoretically speaking, if someone had sufficient upload bandwidth they could upload 50 1 MB chunks all at the same time.
 
This allows our software to scale to the user's maximum upload bandwidth, but I can see how this can be a problem. We can of course limit the number of simultaneous upload threads as per your advice.
 
What would be a reasonable maximum number of simultaneous upload threads to Amazon Cloud Drive? Would a maximum of 10 uploads at a time suffice, or should we lower that even further?
 
Thanks,
Alex

I've highlighted the key question that really needs an answer. Also, please note that Amazon obviously knows who this user is, since one has to be an Amazon customer to be logged into Amazon Cloud Drive. Note, too, that instead of banning or throttling that particular customer, Amazon chose to block our application's entire user base.
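For readers wondering what "respecting 429 responses with exponential backoff" looks like in practice, here is a minimal sketch. It assumes the Python `requests` library and a placeholder chunk-upload URL; it is the standard pattern described above, not StableBit's actual code.

```python
import random
import time

import requests  # assumed HTTP client; any equivalent works

MAX_RETRIES = 8
BASE_DELAY = 1.0  # seconds


def upload_chunk_with_backoff(session: requests.Session, url: str, chunk: bytes) -> requests.Response:
    """Upload one chunk, backing off exponentially on HTTP 429 (throttling) responses."""
    for attempt in range(MAX_RETRIES):
        response = session.put(url, data=chunk)
        if response.status_code != 429:
            return response
        # Exponential backoff with jitter: ~1 s, 2 s, 4 s, ... plus up to 1 s of noise.
        time.sleep(BASE_DELAY * (2 ** attempt) + random.uniform(0, 1))
    raise RuntimeError(f"server kept throttling after {MAX_RETRIES} attempts")
```

Capping the number of simultaneous upload threads, as offered above, is then just a matter of running this worker under a bounded thread pool.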

  • On Sept. 16th I received a response from Amazon.

Hello Alex,
 
 
Thank you for your patience. We have taken your insights into consideration, and we are currently working on figuring out the optimal limit for uploads that will allow your product to work as it should and still prevent unnecessary impact on Cloud Drive. We hope to have an answer for you soon.
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • By Sept. 21st I hadn't heard anything back, so I sent them this.

Amazon,
 
Thanks for letting me know, and I await your response.
 
Regards,
Alex

  • I waited until Oct. 29th, and no one answered. At that point I informed them that we were going ahead with the next public BETA regardless.

Amazon,
 
We're going ahead with our next public BETA release. We will, for now, categorize your provider as "experimental". That means it won't show up in our list of available providers by default. We're going to release a new public BETA by the end of this week.
 
In that BETA, we will add an upload rate limit for Amazon Cloud Drive. It will never allow an upload rate faster than 20 megabits per second. This is the reason I'm writing to you: if that's too fast, please let me know and we'll change it to whatever is acceptable to you before the release.
 
After the BETA is out, I'll send you another email asking whether you can reinstate our production status.
 
Thank you,
Alex

  • In early November, an Amazon employee on the Amazon forums started claiming that we weren't answering our emails. This was amidst their users asking why Amazon Cloud Drive is not supported by StableBit CloudDrive. So I posted a reply there, listing a similar timeline.
  • On Nov. 11th I sent another email to them.

Amazon,
 
According to this thread on your forums, [...] from Amazon is saying that you can't get in touch with the developer of StableBit CloudDrive: [...]
 
I find this very interesting, since I'm the lead developer of StableBit CloudDrive, and I'm still awaiting your response from my last 2 emails.
 
In any case, we've released our next public BETA without Amazon Cloud Drive production access. This will likely be the last public BETA, and the next release may be the 1.0 final.
 
These are the limits that we've placed on Amazon Cloud Drive, on the client side:
Maximum DL: 50 Megabits /s
Maximum UL: 20 Megabits /s
Maximum number of UL connections: 2
Maximum number of DL connections: 4
As I've said before, we can change these to anything that you want, but since I haven't received a response, I had to make a guess.
 
Of course, we would like you to approve us for production access, if at all possible.
 
Our public BETA is available here: https://stablebit.com/CloudDrive
 
To gain access to the Amazon Cloud Drive provider:
Click the little gear at the top of the StableBit CloudDrive UI.
Select Troubleshooting
Check Show experimental providers
You'll then see Amazon Cloud Drive listed in the list of providers.
 
The current public BETA is using the above limits for Amazon Cloud Drive, but the "StableBit Applications" security profile is currently not approved for production use.
 
If you could let me know whether there's anything at all we can do on our end to get approved as quickly as possible, or just write back with a status update on where we stand, I would greatly appreciate it.
 
Thank You,
Alex

  • On Nov 13th Amazon finally replied.

Hello Alex,
 
Thank you for your emails and for providing details on the issue at hand. Looks like we've gotten a little out of sync! We would like to do everything we can to make sure we continue successfully integrating our products.
 
In terms of your data about bandwidth and connections, do you have those figures in terms of API calls per second? In general, a client instance on developer rate limits should make up to about [...] read and [...] write calls per second. On the production rates, they could go up to roughly [...] read and [...] write calls per second.
 
Thank you for providing a new Beta build, we will be testing it out over the weekend and examining the traffic pattern it generates.
 
Best regards,
Amazon Cloud Drive 3P Support

I redacted the limits; I don't know whether they want those made public.

  • On Nov. 19th I sent them another email asking them to clarify exactly what the limits mean.

Amazon,
 
I'm looking forward to getting our app approved for production use, I'm sure that our mutual customers would be very happy.
 
So, just to sum up how our product works: StableBit CloudDrive allows each of our users to create one or more fully encrypted virtual drives, where the data for each drive is stored with the cloud provider of their choosing.
 
By default, our drives store their data in 1 MB sized opaque chunks in the provider's cloud store.
 
And here's the important part regarding your limits. Currently, for Amazon Cloud Drive we have limited the Download / Upload rate per drive to:
 
Upload: 20 Megabits/s (or 2.5 Megabytes/s)
Download: 50 Megabits/s (or 6.25 Megabytes/s)
 
So, converting that to API calls:
 
Upload: about 2.5 calls/s
Download: about 6.25 calls/s
 
And, let's double that for good measure, just to be safe. So we really are talking about less than 20 calls/s maximum per drive.
 
In addition to all of that, we do support exponential backoff if the server returns 429 error codes.
 
Now one thing that isn't clear to me, when you say "up to roughly [...] read and [...] write calls per second" for the production limits, do you mean for all of our users combined or for each Amazon user using our application? If it's for all of our users combined, we may have to lower our limits dramatically.
 
Thank you,
Alex
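To make the conversion in the email above explicit: with 1 MB chunks, one API call moves one chunk, so calls per second are just bandwidth divided by chunk size. A quick check (plain arithmetic, nothing StableBit-specific):

```python
CHUNK_MB = 1.0  # StableBit CloudDrive's default chunk size


def calls_per_second(megabits_per_second: float, chunk_mb: float = CHUNK_MB) -> float:
    """One API call moves one chunk: calls/s = (megabytes/s) / (MB per chunk)."""
    return (megabits_per_second / 8.0) / chunk_mb


print(calls_per_second(20))  # upload cap:   2.5 calls/s
print(calls_per_second(50))  # download cap: 6.25 calls/s
# Combined: 8.75 calls/s; doubled for safety, that's still under 20 calls/s per drive.
```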

  • On Nov 25th Amazon replied with some questions and concerns regarding the number of API calls per second:

Hello Alex,
 
We've been going over our analysis of StableBit. Overall things are going very well, and we have a few questions for you.

  • At first we were a little confused by the call volume it generated. Tests working with fewer than 60 files ended up being associated with over 2,000 different API calls. Tests designed to upload only were causing hundreds of download calls. Is it part of the upload process to fetch, modify, and re-upload 1 MB encrypted chunks that are already uploaded? Or maybe the operating system attempting to index its new drive?
  • What kind of file size change do you expect with your encryption? ~100 MB of input files became ~200 MB worth of encrypted files. This is not a problem for us; we're just wondering about the padding in the chunks.
  • In regards to call rates, things look to be right in line. You quoted that uploads and downloads should combine to be calling at 7.75 calls per second. Taking a histogram of the calls and clustering them by the second in which they happened gave us good results: 99.61% of seconds contained 1-7 calls!
Best regards,
 
Amazon Cloud Drive 3P Support

 

Amazon,
 
Glad to hear back from you. Here are your answers, and I have some good news regarding the calls per second that StableBit CloudDrive generates on uploads.
 
First, your answers:

  • Regarding the calls per second:
    • The key thing to understand is that StableBit CloudDrive doesn't really know anything about local files. It simply emulates a virtual hard drive in the kernel. To StableBit CloudDrive, this virtual drive looks like a flat address space that can be read from or written to. That space is virtually divided into fixed-size chunks, which are synchronized with the cloud on demand.
    • So calls to the Amazon API are made on demand, only when necessary, in order to service existing drive I/O.
    • By default, for every chunk uploaded, StableBit CloudDrive will re-download that chunk to verify that it's stored correctly in the cloud. This is called upload verification and can be turned off in Drive Options -> Data integrity -> Upload verification. This probably explains why you were seeing download requests in tests designed only for uploading. We can turn this off permanently for Amazon Cloud Drive, if you'd like.
    • It is possible that an upload will generate a full chunk download and then a full re-upload. This is done for providers that don't have any way of updating part of a file (which is most cloud providers).
    • As for the calls per second on uploads, I think that we've dramatically decreased these. I will explain in a bit.
  • Regarding the size change when encrypted.
    • There is negligible overhead for encrypted data (< 0.01%).
    • 100 MB of data stays at roughly 100 MB once encrypted. It's simply encrypted with AES-256-CBC, with a key that only the user knows.
----

 
Regarding the calls per second on uploads, our latest builds implement large chunk support (up to 100 MB) for most providers, including Amazon Cloud Drive. This should dramatically decrease the number of calls per second that are necessary for large file uploads.
 
As a point of comparison, Google Drive limits us to 10 calls/second/user, which is working well, though we could probably go even lower.
 
The large chunk support was implemented in 1.0.0.421, and you can download that here:
http://dl.covecube.com/CloudDriveWindows/beta/download/
 
If you want to really dig into the details of how I/O works with us, or get more information on large chunk support, I made a post about that here:
http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/
 
Again, thank you very much for taking the time to evaluate our product, and please let me know what the next steps should be to get to production access and whether you need any changes or additional limits applied on our end.
 
Kind Regards,
Alex
Covecube Inc.
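To make the I/O model described above concrete, here is a rough sketch of the two ideas in that email: mapping the drive's flat address space onto fixed-size chunks, and upload verification (re-downloading each chunk right after uploading it). The `provider` object is a hypothetical stand-in for any cloud backend, not StableBit's actual interface; the real details are in the I/O manager post linked above.

```python
import hashlib

CHUNK_SIZE = 1 * 1024 * 1024  # default 1 MB chunks; large-chunk builds go up to 100 MB


def chunks_for_io(offset: int, length: int, chunk_size: int = CHUNK_SIZE) -> range:
    """Map a read/write against the flat drive address space to chunk indices."""
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return range(first, last + 1)


def upload_with_verification(provider, index: int, data: bytes) -> None:
    """Upload one chunk, then re-download it to verify it was stored intact.

    The re-download is why upload-only tests still generate download calls.
    """
    provider.upload_chunk(index, data)       # hypothetical provider call
    echoed = provider.download_chunk(index)  # hypothetical provider call
    if hashlib.sha256(echoed).digest() != hashlib.sha256(data).digest():
        raise IOError(f"chunk {index} failed upload verification")
```

With 100 MB chunks instead of 1 MB, the same large upload needs roughly one hundredth the API calls, which is the point of large chunk support.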

 

  • On Dec. 10th 2015, Amazon replied:

Hello Alex,
 
 
Thank you for this information. We are running a new set of tests on the build you mentioned, 1.0.0.421. Once we have those results, we can see what we want to do with Upload Verification and/or setting call limits per second per user.
 
Your details about file size inflation don't match what we saw in our most recent tests. We'll try a few different kinds of upload profiles this time and let you know what we see.
 
Thank you for your continued patience as we work together on getting this product to our customers.
 
 
Best regards,
 
Amazon Cloud Drive 3P Support

  • I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme: some number of bytes goes in, and the exact same number of bytes comes out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (0.006% overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it to capacity, and then checking how much data is used on the server.

    Here's the data for a 5 GB encrypted drive:

    [Screenshot: the provider reports 5 GB of data stored for the 5 GB encrypted drive.]

    Yes, it's 5 GB.
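The 64-bytes-per-megabyte figure is easy to verify with the Python standard library, since an HMAC-SHA512 tag is always 64 bytes regardless of input size:

```python
import hashlib
import hmac

chunk = b"\x00" * (1024 * 1024)  # one 1 MB unit of data
tag = hmac.new(b"example key", chunk, hashlib.sha512).digest()

print(len(tag))                     # 64 bytes per 1 MB unit
print(100 * len(tag) / len(chunk))  # ~0.0061 % overhead, in line with the figure above
```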

 

 


Hello Alex,
 
We have an update and some questions regarding our analysis of build 1.0.0.421. 
 
This particular test involved a 1.5 GB folder containing 116 photos. The client ran for 48 hours before we stopped it. In each of those 48 hours, the client generated 104-205 calls, with an average of 168. (Down from ~12,000 in previous tests.)
 
Afterwards, we checked the test account through the browser. There are 163 files in the StableBit folder; 161 of them are chunk files. 8 chunks are 10 MB in size, and the remaining 153 chunks are 0 bytes in size. Approximately half of all calls made in each hour are resulting in errors. We tracked the request_ids of the errored calls through our logs and found many errors similar to this one:
 
        NameConflictException: Node with the name c71e006b-d476-45b4-84e0-91af15bf2683-chunk-300 already exists under parentId 0l1fv1_kQbyzeme0vIC1Rg conflicting NodeId: p8-oencZSVqV-E_x2N6LUQ
 
If this error is familiar to you, do you have any recommendations for steps we might have missed in configuration, or a different build to try?
 
Best regards,
Amazon Cloud Drive 3P Support 
 
To clarify the error issue here (I'm not 100% certain about this): Amazon doesn't provide a good way to identify these files. We have to try to upload them again, and then grab the ID from the error to get the actual file ID so we can update the file (sketched below). This is inefficient and would be solved by a more robust API that included search functionality, or a better file-list call. So this is basically "by design" and currently required.
-Christopher
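A hedged sketch of the flow Christopher describes: attempt the upload, and when the server answers with a name conflict, pull the existing node ID out of the error body so the chunk can be overwritten in place. The endpoint paths and JSON field names here are illustrative placeholders, not the documented Amazon Cloud Drive API.

```python
import requests  # assumed HTTP client


def upsert_chunk(session: requests.Session, base_url: str, name: str, data: bytes) -> str:
    """Create a chunk, or overwrite it in place if the name already exists.

    base_url and the JSON fields are hypothetical; the real conflict response
    carries the conflicting node ID, as in the NameConflictException quoted above.
    """
    resp = session.post(f"{base_url}/nodes", files={"content": (name, data)})
    if resp.status_code == 409:  # name conflict: the chunk already exists
        node_id = resp.json()["info"]["nodeId"]  # assumed error payload shape
        session.put(f"{base_url}/nodes/{node_id}/content", data=data)
        return node_id
    resp.raise_for_status()
    return resp.json()["id"]
```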
 
 
Unfortunately, we haven't pursued this with Amazon recently, due to a number of big bugs that we have been following up on. However, those bugs have led to a lot of performance, stability, and reliability fixes, and a lot of users have reported that these fixes have significantly improved the Amazon Cloud Drive provider. That is great to hear, as it may help get the provider to a stable, reliable state.
 
That said, once we get to a more stable state (after the next public beta build, following 1.0.0.463, or a stable/RC release), we do plan on pursuing this again.
 
But in the meantime, we have held off on this, as we want to focus on the entire product rather than a single, problematic provider.
-Christopher
 
 
Amazon has "snuck in" additional guidelines that don't bode well for us. 
  • Don’t build apps that encrypt customer data

 

What does this mean for us? We have no idea right now.  Hopefully, this is a guideline and not a hard rule (other apps allow encryption, so that's hopeful, at least). 

 

But if we don't get re-approved, we'll deal with that when the time comes (though, we will push hard to get approval).

 

- Christopher (Jan 15, 2017)

 

If you haven't seen it already, we've released a "gold" version of StableBit CloudDrive, meaning that we have an official release!
Unfortunately, because of increasing issues with Amazon Cloud Drive that appear to be ENTIRELY server side (drive issues due to "429" errors, odd outages, etc.), and because we are STILL not approved for production status (despite sending off additional emails a month ago requesting approval, or at least an update), we have dropped support for Amazon Cloud Drive.
 
This does not impact existing users: you will still be able to mount and use your existing drives. However, we have blocked the ability to create new drives for Amazon Cloud Drive.

 

This was not a decision that we made lightly, and while we don't regret it, we are saddened by it. We would have loved to reach some sort of outcome that included keeping full support for Amazon Cloud Drive.

-Christopher (May 17, 2017)


Recommended Posts


Actually, today it has not worked at all for me; it's just sitting there and not uploading. The error has occurred 58 times.
 
I guess something must be going on: my "To Upload" has been stuck for hours at 80 KB. The upload arrow goes to 20.2 Mbps, but nothing changes upload-wise (chkdsk comes back fine).
 
Found a hacky fix for my issue: terminate the whole StableBit process tree, quickly delete all the .cache files before the drive dismounts (the files are locked while it's running), then re-run chkdsk.


Are there any general or Amazon-specific benefits to updating from .631 to .731? I had a look at changes.txt, but it ends at .726.

 

There are overall improvements that may help Amazon Cloud Drive, but no specific changes to the Amazon Cloud Drive provider itself. 

 

However, after this build you need a developer account and a security profile to be able to log in, and Amazon has blocked the ability to set those up properly, meaning you won't be able to log in with newer builds.


ACD has been such a clusterf@ck, and things are just going to get worse with Plex Cloud. I'm glad I got in on the $5-for-a-year deal on Cyber Monday last year. Now if only I could get a cheap Google work account or a school account with unlimited storage. At this point I am seriously considering not renewing ACD, as support for anything outside of Linux has been horrible.

Not quite sure what issues you've been having with ACD, but mine so far have been quite swell (for the price)!
 
I've been testing out build 631, as it was apparently the recommended one to use, but then I came across this post and saw that build 748 should in fact work, so I gave it a shot. It seems to be working now, and it's a little bit faster and more efficient from what I can tell.
 
Unfortunately, the hard-coded limits of 50 Mbps down / 20 Mbps up are really killing the speed of moving my 2 TB library to ACD so I can test it for the next 30 days before I decide whether this is actually something I want to use. I found a slight workaround: having two different computers sync 1 TB each (movies on one, TV shows on the other) to separate 10 TB drives on the ACD account. So far they've been pretty smooth (just slow), but this is a little faster since it means two 20 Mbps connections, and once the second computer is done uploading, I'll just detach its drive and attach it to the actual server that also has the other drive.
 
Again, only for testing so far, but it *seems* good?
 
Any word on the rough time frame for when you hope to get back to resolving these ACD problems, devs?
 
Edit:
 - Apparently upgrading from old versions allows you to keep your drives, but the new build is unable to poll for existing drives and you can't create any new ones, so I've got a bit of a conundrum. I'm hoping I'll be able to downgrade so I can create a replacement for one of my drives, as I forgot to enable the encryption option X_X
 - Will that be fixed once you guys re-certify your application for production use?

I had serious issues with ACD corrupting my files as they were uploaded. At one point I was at over 300 tries to upload a single block. ACD is just not ready for the sheer amount of data they are getting. Also, they are starting to lock the accounts of people who do not have encrypted data, so anyone who was dumb enough to use Plex Cloud without encrypting their data will enjoy losing everything they've uploaded. Once your account is locked, all your data is gone. From a fresh post in /r/plex on reddit today.

Speaking of fresh posts on reddit today: it seems that even encrypting your data won't save you from losing your account. They can terminate your account for any reason they like, and one or two people on r/datahoarder have reported that Amazon closed their accounts even though all of their data was encrypted prior to being uploaded (this was after the post on r/plex was made). The affected users said their initial attempts to contact Amazon support got them nowhere, so they're going to try again today and report back on how it goes.

One important note: this appeared to affect users outside of the US.


I have a few questions related to ACD and CloudDrive. I hope it's okay to post them here.

 

A) Is it possible that the current developer throttle cap is what's preventing CloudDrive users from getting banned by Amazon? I'm guessing banned accounts were banned because of aggressive clients (and maybe the amount of data).
 
B) Is it possible to have x MB continuously buffered when reading a file sequentially? I noticed that CloudDrive doesn't prefetch again before the currently prefetched data goes to 0.
 
C) Does the prefetch time window only affect how long prefetched data is stored before it's read?

  1. We already implement a bandwidth cap, to prevent issues while on the developer-level limits (see the token-bucket sketch below).
 
     However, the software supports a bandwidth cap setting in general.
 
     Additionally, if Amazon would give us a straight answer about the maximum speed and API calls per second/minute/hour, we'd hard-code the provider to respect those limits. But as you can see, they refuse to give us a straight answer here.
 
  2. I'm not entirely sure what you mean here. As long as the data is being read sequentially, it should continue to prefetch. However, you can change the prefetch settings to grab more data and keep it for longer.
 
  3. No, the time window is how long the data is retained before it may be purged from the cache.
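As a rough illustration of answer 1, here is a minimal token-bucket limiter of the kind a client-side bandwidth cap uses. The rates come from the hard-coded Amazon limits quoted earlier in the thread (20 Mbit/s up = 2.5 MB/s); this is a sketch under those assumptions, not StableBit's implementation.

```python
import time


class TokenBucket:
    """Minimal token-bucket rate limiter for a client-side bandwidth cap."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> None:
        """Block until nbytes of bandwidth budget are available, then spend them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)  # wait for the bucket to refill


# 20 Mbit/s upload cap = 2.5 MB/s, with a one-chunk burst allowance.
uplink = TokenBucket(rate_bytes_per_s=2.5e6, burst_bytes=1024 * 1024)
uplink.consume(1024 * 1024)  # blocks until a full 1 MB chunk's worth of budget is free
```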

Thanks for the answers, Christopher.
 
Regarding question 2: if I set the prefetch to 200 MB while streaming a large file, it won't show 200 MB consistently buffered; rather, it'll go to zero before it prefetches another 200 MB. Often it'll wait a minute before performing another prefetch. I don't know if this is merely an inconsistency between the GUI and the actual cached data.
 
Could the file be streamed too slowly for the prefetching to be triggered?


My Amazon drives occasionally need to be reauthorized. Is there a way around this?

 

I need to downgrade to .631 to authorize the drives, so I'm hoping there's a way to prevent them from getting disconnected.

 

Reauthorizing the drives periodically is definitely not normal behavior. 

 

Could you grab logs from the system? 

http://wiki.covecube.com/StableBit_CloudDrive_Logs

 

Would it be possible to use the ACD CloudDrive format on a local drive? I noticed it stores stuff differently than the local or file share formats. I have other file-sync applications that can do fast transfers, so I was hoping to maybe use those for the actual uploading, etc.

 

Yes, technically. However, there is some obfuscation and encryption done for the ACD provider that isn't necessary for local disks.

 

As for faster transfers, please see the first post. This is an Amazon issue, and not strictly "our fault". If/when Amazon approves us for production status, the speed should vastly improve. Until then, there isn't anything we can do.

 

Additionally, syncing in-use files is problematic at best. However, if you detach the drive, you could then just upload the files to Amazon Cloud Drive, or sync/copy them back to a local system and mount them again.


Wow, what a saga. Who would have thought? I used Amazon S3 for business backup going back to the early days, and it was rock solid.
 
Then they came out with Reduced Redundancy, then Glacier, each offering less redundancy and speed at a reduced price, so they have three levels of cloud storage (I just looked, and there are some other options, but they're not relevant here).
 
I'm just guessing, but here is what I think happened along the way. Someone at Amazon said "hey, we want a cheaper unlimited product", and AWS S3 wanted nothing to do with it, knowing how hard managing price per MB is, so some clown spun off a new product, took zero advice or expertise from the S3 group, and they seem to be making it up as they go along. I think they wanted to keep consumer and business separate, but what an ugly mess they could have avoided. The S3 expertise could have been brought in and just re-branded with less support and some limitations, using the same infrastructure and the obvious wealth of knowledge they already had in building cloud storage. Maybe they did do it properly but just set up a dodgy team for this product.
 
I thought adding unlimited $10-a-month Google Apps storage would be an option, but my free (grandfathered) accounts become payable for every user, and the minimum is 5 users, so that's how it looks. I would pay $50 a year plus $10 a month for the storage, but that's not an option.
 
Unfortunately, this project is not looking good. Even if Amazon gets their act together, I am sure they rely heavily on deduplication, and encrypted files just won't help them there. At the end of the day, I don't think they will allow unlimited encrypted storage; I just don't think they have a policy on it yet, and we have a smart team here pushing to make something work that is probably only going to be catered for in the business category.
 
Now that Nvidia Shield is hooking up with them, that will be interesting: that could be a lot of video uploads. I wonder what data transfer rates and API call limits they have set for them?
 
Unlimited offsite storage: I could pay double per year and be happy, and pay a lot more for CloudDrive as well.
 
I am happy to be proven wrong, though. I'm just looking forward to really using my 100 Mbps Australian Fibre NBN service; it sits idle a lot of the time, and 99% of my DrivePool storage barely gets used. I would prefer someone else to spin all those disks, pay the power bill, and do all the maintenance, and I'd happily downsize: a couple of 1 TB SSDs in a small case would do for me with the above working reliably. With my 1 TB/month internet cap, though, it would take a couple of years to upload. Maybe it's time to do a bit of pruning of stuff I will never watch.


If you have a drive set up with an older version, you should be able to install the newer version and continue to use the Amazon Cloud Drive Provider.

 

It's setting it up that is the issue, currently.

 

And no, no new developments here, yet.

Plex users are having problems. Who would have thought Amazon wouldn't be ready for unlimited DVR backup!

 

