
wid_sbdp

Members
  • Posts: 49
  • Joined
  • Last visited
  • Days Won: 1

wid_sbdp last won the day on November 28 2016

wid_sbdp had the most liked content!

wid_sbdp's Achievements: Advanced Member (3/3)

Reputation: 3

  1. The fact that minimum_download_size = none results in 1MB chunks needs to be addressed in the next build. There's already an explicit 1MB option, so why would "no minimum" default to 1MB? I'd wager 95% of drives are set to no minimum, because that sounds like the most sensible option if you don't know what it actually does. That's an absurd number of API calls being made to the service(s), and it could easily be fixed.
  2. I changed it to 15 and still got a dismount. Sorry, but I'm done with CloudDrive* for the moment; I'll revisit it in a few months and see what sort of improvements have been implemented. I'm 99% sure it has to do with the 1MB prefetch block downloads: that's absurd I/O to put on ACD for a 3GB file that needs to be prefetched (see the request-count sketch after this list). Why that is tied to "minimum download size" is beyond me (the wording of that dropdown may need clarifying), and the fact that it defaults to 1MB if you choose "no minimum size" makes absolutely no sense. Yes, if I remade my drive with the settings I know about now, I might not have as many issues (I haven't been having issues with the Google Drive I made with an increased minimum download size), but honestly I'm just tired of getting ~3TB of content loaded into a drive and then having to start over when someone notices another random setting that can only be changed at drive creation. * From a media storage standpoint, that is. I'm still using some licenses on my PCs at home because it's great for backing up "regular" stuff. I use it daily at my desktop, moving off Dropbox to ACD and having an encrypted store mounted locally on my PC. AMAZING for that.
  3. Three times now I've misclicked and cleared out a hundred gigs of local cache, lol. It's not a huge deal, but that's another 100GB that will have to be redownloaded from the cloud at some point. There should be a pop-up confirming that you actually want to clear it.
  4. Is there any option for an automatic retry (when the drive dismounts and displays retry/reauthorize)? I think the default settings are a bit too strict. I have upload/download threads set to 3 (yes, three, and even at 3 I still get the little yellow throttling turtles) on a 1Gbps up/down line (I can easily hit 400Mbps upload to ACD even with other bandwidth in use), and again, while I was at work my drive dismounted at some point with only 6 errors in the status corner. Obviously there isn't a real issue, or maybe there are moments when something goes wrong on Amazon's end, but forcing a manual retry to mount makes this a very, very managed setup. Maybe attempt an automatic remount 15 minutes later? If it works, keep going; if something is still messed up with the connection, it fails to mount and the counter starts over. (A sketch of that retry loop is after this list.)
  5. Thanks for moving this, haha. Sounds like a great idea; I look forward to seeing it. I realize a lot of people use DrivePool for physical storage, but it would definitely be nice to have some behind-the-scenes cooperation between CloudDrive and DrivePool, especially with the rising popularity of using cheap unlimited cloud storage to host files.
  6. This is totally in the wrong forum, lol. Hmm.
  7. Christopher explains it best, but basically CloudDrive has no way of knowing whether anything was corrupted in the "crash," so it reuploads your cache to ensure data integrity. Hashing wouldn't exactly work, because the data in the cloud can't be "scanned" without first downloading it. So if you were thinking "just hash the files and compare the hashes," you'd have to download every block from the cloud anyway to compare against the local cache block, which would take about the same bandwidth as just reuploading it all (see the bandwidth sketch after this list). It kind of sucks, but that's a limitation of using cloud storage at the moment. Even if there were an option to purge the local cache (preventing the reupload and just banking on redownloading content as it's needed), you wouldn't know whether a partial transfer got cut off during the crash, leaving a corrupted block in the cloud. Time goes on, and eventually someone goes to retrieve a file and finds it's corrupted because the block in the cloud was only partially uploaded. Overall there doesn't seem to be a quick and easy way, outside of reuploading everything, to ensure there are no corrupted blocks sitting in the cloud. It's one of the reasons I've switched to pretty small local caches for my CloudDrives (since I have 1Gbps up/down on a dedicated server): the frequency of drive crashes and reuploads is worse than just having a small cache and pulling files more often from the cloud. Which sucks, because as I get closer to a 0-byte cache I could just be using ACDLI and not have any of these issues to begin with (no cache at all). But at this point I have too much time and space invested in CloudDrive/DrivePool to start over (that would include switching from Windows Server to Linux). In reality this is still a beta product, so it's expected that things act weird and that there are somewhat regular improvements and fixes. And since cache drives are dynamically changeable, I can just bump them up down the road if things become more stable.
  8. Say I have drive A, with 5TB in the cloud and 100GB cached locally. I add drive B and turn on file duplication with 2 copies (one on each drive). Why does DrivePool immediately start downloading data from the cloud (slow) to duplicate to drive B? Wouldn't it make more sense to first duplicate whatever already exists locally in drive A's cache? (Sketched after this list.)
  9. Good luck. Rebooting my server is a game of "will the CloudDrive service stop properly?" There seems to be a lot of conflict between CloudDrive or DrivePool and the built-in Windows Disk Management/Virtual Disk Manager: if CD/DP isn't running 100% stable, Disk Management and diskpart will never even load. 80% of the time, if I stop the CloudDrive service it hangs on "stopping." At that point, what do you do? If you reboot, there's a high chance it'll enter drive recovery and have to reupload the cache (which gets annoying if you have bandwidth caps on a dedicated server).
  10. I don't think it's CD specifically; I think it's the interaction with ACD and Google. I made a new Azure account (with the free $200 in credit you get when you sign up), built a storage instance, added it to CD, and uploaded some stuff. I was getting very fast 700Mbps transfers to it. I think ACD and GCD are just not "as open" as the respective enterprise services in terms of speed and capability (number of connections, etc.). It would make sense to limit those on Amazon's and Google's side, since they are cheaper alternatives to paying for an enterprise-level tier. Which is fine by me: stability plus 150Mbps uploads to ACD for $5/mo beats stability plus 700Mbps uploads for $200-300+ (and into the thousands or tens of thousands if you're crazy and have 200TB on your drive). (Rough $/Mbps math after this list.)
  11. Encryption Question

    Plex Cloud doesn't apply any sort of protection to the files, sadly, so they're wide open to being seen by your provider. That kind of makes it pointless, and I'm not sure why Plex is pushing Plex Cloud when they know with 100% certainty it'll be used for pirated content and will just cause a headache of customer-service complaints from people who can no longer access their files.
  12. You can't reauthorize pre-unlock; the only options are literally unlock or destroy.
  13. I save the PDF of my generated key in two spots: in another encrypted CloudDrive on my desktop, and on a removable USB drive. I name my drives very uniquely, so there's no mistaking which key PDF goes with which drive. Today I tried reattaching a drive after a detach, and it keeps saying the key provided is not correct. This is after trying the (same) key from both sources. Is there any way drives get corrupted to the point where they don't recognize that a legitimate key is indeed legit? Is there any way to fix this?
  14. Encryption Question

    AFAIK some services encrypt the data either way; forcing encryption just means you hold the key. They also obfuscate the files, so while you could technically "look inside" a chunk, the chunk itself is still going to be a long string of random-looking characters. But not using encryption doesn't mean "file1.mkv" is just uploaded as "file1.mkv". (A minimal sketch of the difference is after this list.)
  15. At this point I just trashed it; it takes less time to spin up a new drive than to troubleshoot one with 100 gigs on it. My new stance is that if I'm using CloudDrive/DrivePool, a drive will eventually stop working for some random reason, so I just set up a billion drives (obvious exaggeration) and keep things balanced between them so there's as little loss as possible when one does stop working. Your cache implementation keeps you above the other services, much like CloudDrive, so don't screw that up.
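
A rough sketch of the request math behind posts 1 and 2. This assumes a prefetch is simply split into minimum_download_size pieces, which is my reading of the setting rather than anything confirmed about CloudDrive's actual chunking:

    # Hypothetical back-of-the-envelope math for prefetch request counts.
    # Assumes each piece of a prefetch costs one API call to the provider;
    # the real CloudDrive logic may batch or split differently.
    def prefetch_requests(file_bytes: int, min_download_bytes: int) -> int:
        return -(-file_bytes // min_download_bytes)  # ceiling division

    GiB = 1024 ** 3
    MiB = 1024 ** 2
    for size in (1, 10, 20, 100):
        calls = prefetch_requests(3 * GiB, size * MiB)
        print(f"3 GiB prefetch @ {size} MiB minimum -> {calls} API calls")
    # 1 MiB -> 3072 calls, 10 MiB -> 308, 20 MiB -> 154, 100 MiB -> 31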
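And a minimal sketch of the automatic remount suggested in post 4. Everything here is hypothetical (try_mount, is_mounted, reset_error_counter, the 15-minute interval); it just illustrates the "retry on a timer, reset the error counter on success" behavior:

    import time

    RETRY_INTERVAL = 15 * 60  # seconds; the "maybe 15 minutes later" from post 4

    def auto_remount(drive) -> None:
        """Keep retrying a dismounted drive instead of waiting for a human."""
        while not drive.is_mounted():          # hypothetical API
            try:
                drive.try_mount()              # hypothetical API
            except ConnectionError:
                # Provider still unhappy; wait and try again later.
                time.sleep(RETRY_INTERVAL)
            else:
                drive.reset_error_counter()    # hypothetical API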
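Post 7 in numbers: since the provider can't hash chunks for you, verifying by hash still means downloading every chunk first, so it costs at least as much bandwidth as reuploading the cache. A sketch with made-up figures:

    # Bandwidth cost of three post-crash strategies (hypothetical numbers).
    GiB = 1024 ** 3
    cache_bytes = 100 * GiB   # local cache that may or may not match the cloud
    bad_fraction = 0.01       # chunks actually corrupted by the crash (unknown!)

    reupload_everything = cache_bytes  # what CloudDrive does today
    verify_then_repair = cache_bytes + bad_fraction * cache_bytes
    # ^ downloads the full cache's worth of chunks just to hash them,
    #   then reuploads the bad ones, so it's MORE traffic, not less.
    purge_cache = 0
    # ^ cheapest now, but a half-uploaded chunk stays corrupted in the
    #   cloud until someone tries to read the file it backs.

    print(reupload_everything, verify_then_repair, purge_cache)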
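What post 8 is asking for, sketched out. The pool/cache objects and methods here are invented; the point is just the ordering, satisfying duplication from drive A's local cache before falling back to cloud reads:

    def duplicate_to(drive_b, files, cache):
        """Duplicate pool files to drive_b, preferring local cache over cloud.

        `cache` stands in for drive A's local CloudDrive cache; all of
        these objects and methods are hypothetical.
        """
        for f in files:
            if cache.contains(f):
                drive_b.write(f, cache.read(f))   # fast: local disk copy
            else:
                drive_b.write(f, f.download())    # slow: pull from the cloud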
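The price/performance point in post 10, as arithmetic (the dollar figures are the rough numbers from the post, not real quotes):

    # Rough $/Mbps of sustained upload, using the numbers from post 10.
    consumer = 5 / 150      # ACD-style plan: $5/mo at ~150 Mbps -> ~$0.03/Mbps
    enterprise = 250 / 700  # Azure-style bill: ~$250/mo at ~700 Mbps -> ~$0.36/Mbps
    print(f"consumer: ${consumer:.2f}/Mbps, enterprise: ${enterprise:.2f}/Mbps")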
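Finally, a minimal sketch of the distinction in post 14, using the well-known `cryptography` package rather than anything CloudDrive actually does: with or without full-drive encryption, the chunk's name is opaque; encryption is what makes the chunk's contents unreadable without a key you hold.

    import hashlib
    from cryptography.fernet import Fernet  # pip install cryptography

    chunk = b"...raw bytes of file1.mkv, chunk 0..."

    # "Obfuscation": the provider only ever sees an opaque blob name,
    # never "file1.mkv", whether or not encryption is on.
    blob_name = hashlib.sha256(b"drive-id/chunk-0").hexdigest()

    # Forcing encryption means YOU generated and hold this key,
    # so the provider can't read the chunk contents either.
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(chunk)

    print(blob_name[:16], len(ciphertext))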