Talyrius reacted to Cobalt503 in G Suite and unlimited Google drive
Word on the street is they aren't enforcing the 1TB limit for accounts with fewer than 5 users. Also, the admin has the ability to delete people's shit, so good luck trying to convince people you aren't ever going to abandon the project / delete their shit.
Talyrius reacted to Mavol in Amazon Cloud Drive - Why is it not supported?
Can confirm it's been working great for me so far, and I understand that Amazon has not been as forthcoming as they should be. It is, however, definitely a provider that most people are really eager to use with your awesome software.
Talyrius reacted to tcz06a in CloudDrive drive destroyed, ~158000 files left in Google Drive
I believe I have found a solution to this problem of mine. When I had CloudDrive destroy my drive, something, somewhere, had a hiccup. The folder on Google Drive wherein the chunks resided was deleted, but somehow the chunks themselves were not. The Google forums indicate such files are called orphans, as they have no parent structure (folder). These orphans can be viewed by searching for 'is:unorganized owner:me' and deleted from there. As the Drive GUI produces a 'server error occurred' message when trying to delete more than 100 files at once, I looked for a script to automate their removal. I finally found one that worked, at https://medium.com/@tashian/the-workaround-300ac8f05bb7#.vc41xsuwd (although I am unsure if I can post a hyperlink in my comments). I can testify that it is deleting the orphaned files while not touching anything else.
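The linked script is Google Apps Script; for anyone more comfortable in Python, a rough sketch of the same cleanup might look like this. It assumes the Drive v3 API via google-api-python-client and a list of orphan IDs already gathered with the 'is:unorganized owner:me' query; the batching sidesteps the 'server error occurred' that appears when deleting more than 100 files at once:

```python
def batch(ids, size=100):
    """Split a list of file IDs into batches of at most `size` items."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def delete_orphans(service, orphan_ids, size=100):
    """Delete orphaned chunk files batch by batch.

    `service` is a Drive v3 service object from googleapiclient (you must
    build it with your own credentials; that wiring is omitted here).
    Deleting in small batches avoids the server error seen when removing
    more than 100 files at once through the Drive GUI.
    """
    for group in batch(orphan_ids, size):
        for file_id in group:
            service.files().delete(fileId=file_id).execute()
```

Treat this as a sketch, not a tested tool: `files().delete()` permanently removes files, so dry-run against the ID list first.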
Talyrius reacted to steffenmand in +1 for GoogleDrive for Work support
I don't know if it is possible, but it would be nice if you could somehow cache the header/file-attribute data of files locally so Plex could index nice and fast. Loading some videos takes a while, as does the constant load of checking whether the info is the same and that the file still exists.
I just tried the drive with Plex and everything runs fine, but indexing is slow, and you lag while entering a title because Plex tries to read the header/file attributes of each file and asks the OS to confirm the file exists.
If it is possible, it would be a nice feature to be able to activate caching of the headers and/or file attributes of each file :-)
Besides helping Plex, I also believe it would make folders load faster while browsing in Windows, as Windows uses the headers as well - especially with lots of files.
Perhaps using the GetFileAttributesEx function in the WinAPI.
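As a hypothetical sketch of what such attribute caching could look like (this is not CloudDrive's actual code; the `fetch` callback stands in for `os.stat`, or for `GetFileAttributesExW` via ctypes on Windows):

```python
import time

class AttributeCache:
    """Remember file attributes for `ttl` seconds so repeated lookups
    (e.g. a Plex indexing pass) don't hit the cloud every time."""

    def __init__(self, fetch, ttl=60.0, clock=time.monotonic):
        self._fetch = fetch    # callable: path -> attributes
        self._ttl = ttl
        self._clock = clock
        self._entries = {}     # path -> (timestamp, attributes)

    def get(self, path):
        now = self._clock()
        hit = self._entries.get(path)
        if hit is not None and now - hit[0] < self._ttl:
            return hit[1]      # still fresh: serve from cache
        attrs = self._fetch(path)
        self._entries[path] = (now, attrs)
        return attrs
```

The TTL is the usual trade-off: a longer one makes browsing snappier but means deletions or metadata changes on another machine take longer to show up.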
#8: I have seen threads stuck in the state "ChecksumBlocksReadCopy", consuming CPU and blocking new threads.
I see that this issue happens if you have an upload pending, with threads running for the upload, and you start prefetching. It then blocks both upload and download, with threads sitting stuck in the "ChecksumBlocksReadCopy" state until the prefetch timeframe runs out.
Talyrius reacted to evulhotdog in Amazon Cloud Drive - Why is it not supported?
Wouldn't it make more sense to do whatever you can with ACD if they are taking forever to respond and work with you guys to fix issues? In theory it would save time down the road.
Also, there are no other services I have an interest in pursuing other than ACD, because it's unlimited and at the price point it's at.
There are no other services or 'mounting' software that offer the same functionality your product provides. I can't stress enough how many people are using ACD for streaming content from the cloud, and that number would be huge if you can develop a product that works with it, with encryption too (a crucial feature in my book).
See link, and you'll better understand.
Talyrius reacted to Christopher (Drashna) in Setup Same CloudDrive on multiple Computers?
I think the OP doesn't want to use his upload speed to do this.
If you upload the content to a CloudDrive, you can detach (unmount) it from the local system and then mount it on the other system. That works fine. But as for sharing it between two systems? Not really.
However, Plex does allow for multiple users (via plex accounts, IIRC). But it does consume your upstream bandwidth.
NetDrive does have a solution that allows you to access from multiple systems, but it's not a full drive, and it's not encrypted (IIRC).
The difference is that StableBit CloudDrive uses the provider as a raw disk backend, basically. Meaning it stores raw drive data, and it caches new content locally before uploading it. Because of this, there isn't a good solution to "share" (read) the drive in multiple locations, as the content may not be uploaded yet. This would cause corruption in any scenario.
Maybe we could (later) add a "mount as read only" option, so that you can mount an unused disk as read only, and add a flag to it so it cannot be mounted as writable anywhere else.
This would allow you to do what you want, but it would render the drive un-updatable. Meaning no new content.
Talyrius reacted to KodeK in CloudDrive file format
Have you guys published (or considered publishing) any information on the file format used at the cloud level? While I am a huge fan of all your software, I am a bit hesitant to use Cloud Drive as a long-term solution without a guarantee that files are recoverable without the use of proprietary software.
Thanks, and hope you understand my concerns!
Talyrius reacted to steffenmand in Question about chunk upload
I noticed that after a write thread finishes, the upload verification downloads the content to check that it's correct. However, while it downloads, no new upload threads are started - shouldn't it start uploading the next chunk while it downloads and checks the former one? I guess you are using some sort of queue system, and the file could just be added back to the queue if it needs to be re-uploaded.
Is there a specific reason why it runs like this? I could imagine a little more speed if uploading could continue in the meantime.
Talyrius reacted to Christopher (Drashna) in Amazon Cloud Drive - Why is it not supported?
Well, as it took 2 weeks and several emails originally... the answer is "I don't know". Sorry.
We've had much better communication with Amazon lately (eg, they're actually replying).
In fact, right now, the emails are about the exact API usage and clearing up some functionality, between Alex and Amazon. So, things are looking good.
If you check the first post, Alex has updated it with the latest responses.
Talyrius got a reaction from Christopher (Drashna) in Amazon is offering a year of unlimited cloud storage for just $5
Act quick! This is sure to be a limited-time offer. After the year is up, it'll renew at the normal rate of $59.99/year.
Talyrius reacted to Alex in Mount Existing Files
When I said unique, I didn't mean that we're unique at offering encryption for the cloud, or mounting your cloud data as a drive. Surely that's been done by other products.
And by the way, I'm not trying to convince you to use StableBit CloudDrive, in fact, I'm saying the exact opposite. If you want to share your files using the tools that your cloud provider offers (such as web access), StableBit CloudDrive simply can't do what you're asking.
I'm simply trying to explain the motivation behind the design.
Think about TrueCrypt. When you create a TrueCrypt container and mount that as a drive, and then copy some files into that encrypted drive, do those files get stored on some file system somewhere in an encrypted form? Does the encrypted data retain its folder structure? No, the files get stored in an encrypted container, which is a large random blob of data.
You can think of StableBit CloudDrive as TrueCrypt for the cloud, in the sense that the files that you copy into a cloud drive get stored in random blobs in the cloud. They don't retain any structure.
(I know, we're not open source, but that's the best analogy that I can come up with)
Now you may ask, why do this? Why not simply encrypt the files and upload them to the cloud as is?
Because the question that StableBit CloudDrive is trying to answer is, how can we build the best full drive encryption product for the cloud? And once you're encrypting your data, any services offered by your cloud provider, such as web access, become impossible, regardless of whether you're uploading files or unstructured blobs of data.
So really, once you're encrypting, it doesn't matter which approach you use. Either way, encrypted data is opaque.
The reasons that I chose the opaque block approach for StableBit CloudDrive are:
You get full random data access, regardless of whether the provider supports "range" (partial file) requests or not.
Say, for example, that you upload a 4GB file to the cloud (or maybe even a 50GB file or larger). This file may be a database file, or a movie file. Now say that you try to open this file from your cloud drive with something like a movie player, and the movie player needs to read some bytes from the end of the file before it can start playback.
If this file is an actual file in the cloud, uploaded to the cloud as one single contiguous chunk (not like what StableBit CloudDrive does), how long do you have to wait until the playback starts? Your cloud drive would need to download most of the file before that read request can be satisfied. In other words, you may have to wait a long while.
With the block based approach (which StableBit CloudDrive does use), we calculate the block that we need to download, download it, read that part of the block that the movie player is requesting, and we're done. Very quick.
For StableBit CloudDrive, the cloud provider doesn't need to support any kind of structured file system. All we need is key/value pair storage. In that sense, StableBit CloudDrive is definitely geared more towards enterprise-grade providers such as Amazon S3, Google Cloud Storage, Microsoft Azure, and other similar providers.
Partial file caching becomes a lot simpler. Say, for example, that you've got a huge database file, and a few tables of that file are accessed frequently. StableBit CloudDrive can easily recognize that some parts of that file need to be stored locally. It's not aware of file access; it's simply looking at the entire drive and caching locally the most frequently accessed areas. So those frequently accessed tables will be cached locally, even if they're not stored contiguously.
And lastly, the file-based approach has already been done over and over again. Why make a me-too product?
With StableBit CloudDrive, it is my hope to offer a different approach to encrypted cloud storage.
I don't know if we've nailed it, but there it is, let the market decide. It'll succeed if it should.
Talyrius reacted to Alex in Amazon Cloud Drive - Why is it not supported?
Me too - that's the way I read it initially. But the difference in allowed bandwidth would be huge if I'm wrong, so I really want to avoid any potential misunderstandings.
As for larger chunks: it's possible to utilize them, but keep in mind that the size of the chunk controls the latency of read I/O. If you try to read anything on the cloud drive that's not stored locally, that's an automatic download of at least the chunk size before the read can complete. And we have to keep read I/O responsive in order for the drive's performance to be acceptable.
In my testing, a 1MB limit seemed like a good starting point to support a wide variety of bandwidths.
But yeah, if you're utilizing something like Amazon S3 (which has no limits I assume) and you have a Gigabit connection, you should be able to use much larger chunks in theory. I'll explore this option in future releases.
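The latency trade-off described here is easy to quantify with a back-of-the-envelope sketch (ignoring request latency and TCP ramp-up, so these are lower bounds):

```python
def min_read_latency_s(chunk_size_bytes, bandwidth_bps):
    """Lower bound on a cold read's latency: the whole chunk containing
    the requested bytes must download before the read can complete."""
    return chunk_size_bytes * 8 / bandwidth_bps
```

On a 10 Mbit/s connection, a cold read behind a 1 MB chunk waits at least ~0.84 s, while the same read behind a 100 MB chunk would wait ~84 s - which is why larger chunks only make sense on much faster links.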
Talyrius reacted to Alex in Amazon Cloud Drive - Why is it not supported?
Some people have mentioned that we're not transparent enough with regards to what's going on with Amazon Cloud Drive support. That was definitely not the intention, and if that's how you guys feel, then I'm sorry about that and I want to change that. We really have nothing to gain by keeping any of this secret.
Here, you will find a timeline of exactly what's happening with our ongoing communications with the Amazon Cloud Drive team, and I will keep this thread up to date as things develop.
On May 28th the first public BETA of StableBit CloudDrive was released without Amazon Cloud Drive production access enabled. At the time, I thought that the semi-automated whitelisting process we went through was "Production Access". While that is how it works for other providers, like Dropbox, it became apparent that for Amazon Cloud Drive it's not. Upon closer reading of their documentation, it appears that the whitelisting process actually imposed "Developer Access" on us. To get upgraded to "Production Access", Amazon Cloud Drive requires direct email communication with the Amazon Cloud Drive team.
We submitted our application for approval for production access originally on Aug. 15th over email.
On Aug. 24th I wrote in requesting a status update, because no one had replied to me, so I had no idea whether the email had been read or not.
On Aug. 27th I finally got an email back letting me know that our application was approved for production use. I was pleasantly surprised.
On Sept. 1st, after some testing, I wrote another email to Amazon letting them know that we were having issues with Amazon not respecting the Content-Type of uploads, and also with the new permission scopes that had been changed since we initially submitted our application for approval. No one answered this particular email until...
On Sept. 7th I received a panicked email from Amazon addressed to me (with CCs to other Amazon addresses) letting me know that Amazon was seeing unusual call patterns from one of our users.
On Sept. 11th I replied explaining that we do not keep track of what our customers are doing, and that our software can scale very well, as long as the server permits it and the network bandwidth is sufficient. Our software does respect 429 throttling responses from the server, and it does perform exponential backoff, as is standard practice in such cases.
Nevertheless, I offered to limit the number of threads we use, or to apply any other limits that Amazon deems necessary on the client side. I highlighted on this page the key question that really needs answering. Also, please note that Amazon obviously knows who this user is, since they have to be an Amazon customer in order to be logged into Amazon Cloud Drive. And note that instead of banning or throttling that particular customer, Amazon chose to block the entire user base of our application.
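For reference, "respect 429 responses with exponential backoff" is a standard client pattern; a generic sketch (not CloudDrive's actual code - `ThrottledError` is a hypothetical stand-in for an HTTP 429 response):

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for an HTTP 429 "Too Many Requests" response."""

def with_backoff(call, max_retries=8, base=0.5, cap=60.0, sleep=time.sleep):
    """Retry `call`, doubling the wait after each throttled attempt.

    The jitter factor spreads out retries so that many clients throttled
    at the same moment don't all hammer the server again in unison.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except ThrottledError:
            delay = min(cap, base * (2 ** attempt))  # 0.5s, 1s, 2s, ... capped
            sleep(delay * random.uniform(0.5, 1.0))  # jitter
    raise RuntimeError("giving up after repeated 429 responses")
```

The `cap` keeps a long outage from producing absurd waits, and the retry limit ensures a persistent failure eventually surfaces as an error instead of looping forever.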
On Sept. 16th I received a response from Amazon. On Sept. 21st, having heard nothing further, I sent them this. I waited until Oct. 29th and no one answered. At that point I informed them that we were going ahead with the next public BETA regardless.
Some time in the beginning of November, an Amazon employee on the Amazon forums started claiming that we weren't answering our emails. This was amidst their users asking why Amazon Cloud Drive is not supported by StableBit CloudDrive. So I posted a reply to that, listing a similar timeline.
On Nov. 11th I sent another email to them. On Nov. 13th Amazon finally replied. I redacted the limits; I don't know if they want those public.
On Nov. 19th I sent them another email asking them to clarify what exactly the limits mean. On Nov. 25th Amazon replied with some questions and concerns regarding the number of API calls per second. On Dec. 2nd I replied with the answers and a new build that implemented large chunk support. This was a fairly complicated change to our code, designed to minimize the number of calls per second for large file uploads.
You can read my Nuts & Bolts post on large chunk support here: http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/
On Dec. 10th 2015, Amazon replied. I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme: some number of bytes goes in, and the same exact number of bytes comes out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (which is 0.006% overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it up to capacity, and then checking how much data is used on the server.
Here's the data for a 5 GB encrypted drive:
Yes, it's 5 GBs.
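The overhead claim above is easy to verify with arithmetic; a sketch under the stated assumptions (a 64-byte HMAC-SHA512 tag per 1 MiB unit, AES-CBC at 1:1):

```python
def hmac_overhead_fraction(tag_bytes=64, unit_bytes=1 << 20):
    """Fraction of stored data taken by the per-unit authentication tag."""
    return tag_bytes / unit_bytes

def encrypted_size(plain_bytes, tag_bytes=64, unit_bytes=1 << 20):
    """AES-CBC is 1:1, so only the HMAC tag per data unit adds anything."""
    units = -(-plain_bytes // unit_bytes)  # ceiling division
    return plain_bytes + units * tag_bytes
```

For a full 5 GiB drive, the tags add only 5120 x 64 B = 320 KiB, which any storage GUI still rounds to "5 GB" - consistent with the screenshot above.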
To clarify the error issue here (I'm not 100% certain about this): Amazon doesn't provide a good way to ID these files. We have to try to upload them again, and then grab the error ID to get the actual file ID so we can update the file. This is inefficient and would be solved by a more robust API that included search functionality, or a better file-list call. So this is basically "by design" and currently required. -Christopher
Unfortunately, we haven't pursued this with Amazon recently. This is due to a number of big bugs that we have been following up on. However, these bugs have led to a lot of performance, stability and reliability fixes, and a lot of users have reported that these fixes have significantly improved the Amazon Cloud Drive provider. That is great to hear, as it may help get the provider to a stable/reliable state. That said, once we get to a more stable state (after the next public beta build (after 188.8.131.523) or a stable/RC release), we do plan on pursuing this again. But in the meanwhile, we have held off, as we want to focus on the entire product rather than a single, problematic provider. -Christopher
Amazon has "snuck in" additional guidelines that don't bode well for us: https://developer.amazon.com/public/apis/experience/cloud-drive/content/developer-guide
"Don't build apps that encrypt customer data"
What does this mean for us? We have no idea right now. Hopefully, this is a guideline and not a hard rule (other apps allow encryption, so that's hopeful, at least).
But if we don't get re-approved, we'll deal with that when the time comes (though, we will push hard to get approval).
- Christopher (Jan 15. 2017)
If you haven't seen already, we've released a "gold" version of StableBit CloudDrive. Meaning that we have an official release!
Unfortunately, because of increasing issues with Amazon Cloud Drive that appear to be ENTIRELY server-side (drive issues due to "429" errors, odd outages, etc.), and because we are STILL not approved for production status (despite sending off additional emails a month ago requesting approval, or at least an update), we have dropped support for Amazon Cloud Drive.
This does not impact existing users, as you will still be able to mount and use your existing drives. However, we have blocked the ability to create new drives for Amazon Cloud Drive.
This was not a decision that we made lightly, and while we don't regret this decision, we are saddened by it. We would have loved to come to some sort of outcome that included keeping full support for Amazon Cloud Drive.
-Christopher (May 17, 2017)