Everything posted by wid_sbdp

  1. The fact that minimum_download_size = none results in 1MB chunks needs to be addressed in the next build. There's already a 1MB option, so why would no minimum also default to 1MB? I'd wager 95% of drives are set to no minimum because it sounds like the most sensible option if you don't know what it actually does. That's an absurd number of API calls being made to the service(s), and it could be fixed.
  2. I changed it to 15 and still got a dismount. Sorry, but I'm done with CloudDrive* for the moment; I'll revisit it in a few months and see what improvements have been implemented. I'm 99% sure it has to do with the 1MB prefetch block downloads: that's absurd I/O to put on ACD for a 3GB file that needs to be prefetched (see the rough math after this list). Why that is tied to "minimum download size" is beyond me (the wording of that dropdown may need clarifying), and the fact that it defaults to 1MB if you choose "no minimum size" makes absolutely no sense. Yes, if I remade my drive with the settings I know about now I might not have as many issues (I haven't been having issues with the Google Drive I made with an increased minimum download size), but honestly I'm just tired of getting ~3TB of content loaded into a drive and then having to start over when someone notices another random setting that can only be changed at drive creation. * From a media storage standpoint, that is; I'm still using some licenses on my PCs at home because it's great for backing up "regular" stuff. I use it daily at my desktop, moving off Dropbox to ACD and having an encrypted store mounted locally on my PC. AMAZING for that.
  3. Three times now I've misclicked and cleared out a hundred gigs of local cache lol. I mean, it's not a huge deal, but that's another 100GB that will have to be downloaded from the cloud at some point. There should be a pop-up confirming that you actually want to clear it.
  4. Any chance of an automatic retry (when the drive dismounts and displays retry/reauthorize)? I think the default settings are a bit too strict. I have upload/download threads set to 3 (yes, three, and even at 3 I still get the little yellow throttling turtles) on a 1Gbps up/down connection (I can hit 400Mbps upload to ACD easily even with other bandwidth in use), and, again, while I was at work my drive dismounted at some point with only 6 errors in the status corner. It's obvious there isn't a real issue, or maybe there are times when there are issues on Amazon's end, but forcing a manual retry to mount makes this a very, very heavily managed setup. Maybe do an automatic retry on mounting 15 minutes later? If it works, keep going; if something is still messed up with the connection, it doesn't mount and the counter starts over. (A rough sketch of what I mean is at the end of this list.)
  5. Thanks for moving this haha. Sounds like a great idea; I look forward to seeing it. I realize a lot of people use DrivePool for physical storage, but it would definitely be nice to have some behind-the-scenes cooperation between CloudDrive and DrivePool, especially with the rising popularity of using cheap unlimited cloud storage to host files.
  6. This is totally in the wrong forum lol. Hmm
  7. Christopher explains it best, but basically CloudDrive has no way of knowing whether anything was corrupted by the "crash," so it reuploads your cache to ensure data integrity. Hashing wouldn't exactly work, because the data in the cloud can't be "scanned" without first downloading it: if you were thinking of just hashing the files and comparing the hashes, you'd still have to download every block from the cloud to compare it to the local cache block, which would take the same amount of bandwidth as just reuploading it all (see the sketch after this list). It kind of sucks, but that's a limitation of using something in the cloud at the moment. Even if there were an option to run a repair or purge the local cache (thus preventing the reupload and just banking on redownloading content as it's needed), you wouldn't know whether a partial transfer was cut off during the crash and left a corrupted block in the cloud. Time goes on, and you eventually find out, when someone goes to retrieve a file, that it's corrupted because the block in the cloud was only partially uploaded. Overall there doesn't seem to be a quick and easy way, short of reuploading everything, to ensure there are no corrupted blocks sitting in the cloud. It's one of the reasons I've switched to pretty small local caches for my CloudDrives (since I have 1Gbps up/down on a dedicated server): the frequency of drive crashes and reuploads is worse than just having a small cache and having to pull files from the cloud more often. Which sucks, because as I get closer to a 0 cache size I could just be using ACDLI and not have any of these issues to begin with (just no cache at all). But at this point I have too much time and space invested in CloudDrive/DrivePool to start over (including switching from WinServer to Linux). In reality this is still a beta product, so it's expected that things act weird and that there are somewhat regular improvements and fixes. And with cache drives being dynamically resizable, I can just bump them up down the road if things become more stable.
  8. Say I have drive A. There is 5TB in the cloud and 100GB cached locally. I add drive B and turn on file duplication with 2 copies (one on each drive). Why does DrivePool immediately start downloading data from the cloud (slow) to duplicate to drive B? Wouldn't it make more sense to initially duplicate anything that already exists in the cache (locally) on drive A?
  9. Good luck. Rebooting my server is a game of "will the CloudDrive service stop properly?" There seems to be a lot of conflict between CloudDrive or DrivePool and the built-in Windows Disk Management / Virtual Disk service. If CD/DP isn't running 100% stable, Disk Management and diskpart will never even load. 80% of the time, if I stop the CloudDrive service it hangs on stopping. At that point... what do you do? If you reboot, there's a high chance it'll enter drive recovery and have to reupload the cache (which gets annoying if you have bandwidth caps on a dedicated server).
  10. I don't think it's CD specifically; I think it's the interaction with ACD and Google. I made a new Azure account (with the free $200 in credit you get when you make one), built a storage instance, added that to CD, and uploaded some stuff. I was getting very fast 700Mbit transfers to it. I think ACD and GCD are just not "as open" as the respective enterprise services in terms of speed and capability (number of connections, etc.). It would make sense to limit those on Amazon's and Google's side, since they're cheaper alternatives to paying for an enterprise-level tier. Which is fine by me. Stability + 150Mbit uploads to ACD for $5/mo beats stability + 700Mbit uploads for $200-300+ (and into the thousands and tens of thousands if you're crazy and have 200TB on your drive).
  11. Encryption Question

    Plex Cloud doesn't do any sort of protection on the files, sadly, so they're wide open to being seen by your provider. That kind of makes it pointless, and I'm not sure why Plex is pushing Plex Cloud when they know with 100% certainty it'll be used for pirated content and will just cause a headache of customer-service complaints from people who can no longer access their files.
  12. You can't reauthorize pre-unlock; the only options are literally Unlock or Destroy.
  13. I save the PDF of my generated key in two spots: in another encrypted CloudDrive from my desktop and on a removable USB drive. I name my drives very uniquely so there's no mistaking which key PDF goes with which drive. Today I tried reattaching a drive after detaching it, and it keeps saying the key provided is not correct. This is after using the (same) key from both sources. Is there any way drives get corrupted to the point that they no longer recognize a legitimate key? Is there any way to fix this?
  14. Encryption Question

    AFAIK some services use encryption either way; forcing encryption just means you hold the key. They also obfuscate the files, so while you could technically "look inside" a chunk, the chunk itself is still going to be a long string of random characters. But not using encryption doesn't mean that "file1.mkv" is just uploaded as "file1.mkv".
  15. At this point I just trashed it. It takes less time to spin up a new drive than to troubleshoot something with 100 gigs on it. My new stance is that if I'm using CloudDrive/DrivePool, a drive will stop working for some random reason, so I just set up a billion drives (obvious exaggeration) and keep things balanced between them so there's as little loss as possible when one does stop working. Your cache implementation is what keeps you above the other services that are much like CloudDrive, so don't screw that up.
  16. I must say, unchecking "Background I/O" has made a world of difference. Not sure if I'm on a stretch of "luck" or if Background I/O really had something to do with it, but it's been running for over 12 hours with no issues whatsoever (knock on wood), and I've had at most 3 warnings about issues uploading or reading data from the cloud drive (normally I'd probably be in the hundreds or thousands).
  17. If you're talking about copying individual files, no. Everything on ACD is encrypted, and you have no way of knowing which chunk = file.ext. You could move every chunk and then use the new ACD account to access the files, but they're not going to be replicated in real time after a copy, and you wouldn't be able to mount both in CloudDrive because they share the same UIDs. Outside of that, yes, there is software that does cloud-to-cloud copies and doesn't require you to download the file and then reupload it to another cloud. I'm not going to mention it because I don't know their stance on mentioning other software on these forums, but you can Google it.
  18. Nice. Rebooted the server because a Google Drive wasn't grabbing a drive letter (uploading/downloading fine, said Disk 5, no letter). The server comes back up, the drive mounts to a drive letter... and it still says missing in DrivePool. Interesting. Check Disk Management... Disk 5 is now a RAW partition. Cool, lost the 150GB on there I guess :/
  19. Yeah, I was looking at some of the metadata. Are the UIDs known to the OS at all (I assume not, since you can technically mount a cloud drive on any PC)? Or could you theoretically change one character of the UID in the metadata file so Windows thinks it's a "different" drive?
  20. I've done that dozens of times and it still has issues. It seems like a drive will work for a while and then just go completely "wtf." I've even had this issue with Google Drives created after the 770 upgrade.
  21. What sort of logs would help with troubleshooting this? I've used both GCD and ACD. Upload speeds are >200 Mbps, and download speeds average 70-100 Mbps with peaks at 150 Mbps. Completely at random, drives will start accumulating "cannot read data from provider" errors and eventually dismount with the Retry/Reauthorize/Destroy options. The crappy part is they'll never retry automatically, so if it happens while I'm asleep the whole system breaks down. The drive remains dead, and DrivePool completely breaks even if I have 1:1 duplication of the files on another CloudDrive (say I have drives E: and F: as CloudDrives, 100% duplicated, with DrivePool as drive G:; if drive E: dismounts, drive G: goes away and nothing works... it's a weird way of working). So obviously upload speed isn't a problem and download speed isn't a problem; I have 1Gbps upload/download on a dedicated server hosted in a pretty big datacenter in LA with peering all over. We can argue all day that the "cannot read data from provider" error is connection-based, but I honestly don't think it is, especially at those speeds. The worst part is that, seeing as it's not a real error and a retry generally fixes it for another completely random amount of time, half the time the drive doesn't remount. It'll show back up in the CloudDrive GUI, it'll continue uploading what was in "To upload" and downloading whatever it was downloading, but it'll never get a drive letter. Fine, I'll just go to Disk Management or use diskpart and assign one. The disk doesn't show up in either. It'll say "Disk 5" in the GUI, but "Disk 5" doesn't exist in either tool. So now DrivePool is broken until a server restart does some "reset" of settings and gets everything working again. Unfortunately, randomly rebooting my server is annoying and kicks off everyone who is using it. So I'd like to assist in any way I can to help narrow down this bug. It has happened to other people on here (there are forum threads for it), and I'm sure there are people dealing with it by just restarting and not posting. Just let me know what options I need to turn on and what logs I need to submit. Running CloudDrive v770 and DrivePool v734.
  22. Yeah, mine has all kinds of issues these days. My Google Drive dropped its drive letter. It's still in the GUI and still uploading/downloading files, but it doesn't show up in Disk Management and doesn't show up in diskpart even as an offline or inactive disk. It doesn't exist. And of course, since the drive letter dropped, DrivePool thinks the drive is gone and spits out errors...
  23. Is the API issue being worked on with Google at all yet? The last I read, it was "something we're going to address with them." I'm super pro-StableBit, but I'm honestly about to give up, go to a massive number of unencrypted drives pooled together with something like rclone, and risk the ToS bans, because it would be 100,000x quicker to reupload everything at 500 MB/s than encrypted at 11 MB/s :/
  24. Is this possible? Say I have an encrypted Google Drive set up with like 20TB in it, and I want to create a second Google Drive (different Google account) and use DrivePool to duplicate the two. The current method would be to create a new drive on Account2 and set up duplication: files get downloaded from Account1 and reuploaded to Account2, which takes a long time. You can rclone directly between the two and probably transfer that 20TB in an hour... but they'd be 1:1 copies of the encrypted chunks with all the same metadata. Could you do that and then mount the Account2 drive without running into any issues? The encryption key being the same wouldn't bother me; I'm not using encryption as a safeguard against cracking, just to keep Google from seeing what I'm storing.
  25. It has been stated multiple times that ACD will not be fixed until after the public release. I really don't see a point in continuing to troubleshoot it, or in posting that it doesn't work for you either. It doesn't work, unless you have an ACD drive created a while back, before things changed.
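A rough back-of-the-envelope check of the chunk-size complaint in posts 1 and 2 above. The 3GB prefetch and the 1MB minimum come from those posts; the request counts are just arithmetic, not measured CloudDrive behavior.

```python
# Approximate provider requests needed to prefetch a ~3 GB file at
# different minimum download sizes (illustrative math only).
size_bytes = 3 * 1024**3                 # the ~3GB file mentioned in post 2
for chunk_mib in (1, 10, 20):
    requests = size_bytes // (chunk_mib * 1024**2)
    print(f"{chunk_mib} MB minimum download -> ~{requests} requests")
# 1 MB  -> ~3072 requests
# 10 MB -> ~307 requests
# 20 MB -> ~153 requests
```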
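The automatic-retry idea from post 4, sketched as a minimal watchdog loop. This only illustrates the timing logic being asked for: CloudDrive has no public CLI or API that I know of, so DRIVE_LETTER, drive_is_mounted, and try_remount are hypothetical placeholders.

```python
import os
import time

DRIVE_LETTER = "X:\\"        # hypothetical CloudDrive mount point
RETRY_INTERVAL = 15 * 60     # the 15-minute automatic retry suggested in post 4

def drive_is_mounted() -> bool:
    # Simplistic health check: is the volume reachable at all?
    return os.path.exists(DRIVE_LETTER)

def try_remount() -> None:
    # Placeholder for the manual "Retry" click the post wants automated.
    print("drive missing - a remount attempt would go here")

while True:
    if drive_is_mounted():
        time.sleep(60)               # healthy; poll once a minute
    else:
        try_remount()
        time.sleep(RETRY_INTERVAL)   # back off before checking again
```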
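The bandwidth argument from post 7, as a minimal sketch. It assumes, as that post argues, that the provider stores only opaque chunks and exposes no usable server-side checksum, so verifying a block means downloading it; fetch_remote_block is a hypothetical callable standing in for that download.

```python
import hashlib

def verify_block(local_path: str, fetch_remote_block) -> bool:
    """Compare a locally cached block against its copy in the cloud.

    Because the only way to hash the remote chunk is to download it in full,
    verifying every block costs roughly the same bandwidth as simply
    re-uploading the whole cache, which is the point post 7 makes.
    """
    with open(local_path, "rb") as f:
        local_digest = hashlib.sha256(f.read()).hexdigest()
    remote_digest = hashlib.sha256(fetch_remote_block()).hexdigest()
    return local_digest == remote_digest
```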