Covecube Inc.

darkly

Members
  • Content Count: 63
  • Joined
  • Last visited
  • Days Won: 2

darkly last won the day on September 30 2017
darkly had the most liked content!

About darkly
  • Rank: Advanced Member

Recent Profile Visitors
469 profile views
  1. Thanks. I've actually tested having parts of a pool absent before, and I'm pretty sure you can still write to the remaining drives. My problem is that once the cloud drive comes online, I don't want to simply "balance" the pool; I don't want the local disk(s) in the pool to be used at all. Ideally, once the CloudDrive comes back online, all data is taken off the local disks and moved to the CloudDrive. I'm just not sure how I'd configure that behavior in the first place. If the first part of what you said is correct (that balancing would be paused), that's great: that's exactly the behavior I'd want while the CloudDrive is not mounted, since I'd still want to be able to move files onto the pool's local disks (even though those disks are configured to hold 0% of the data once "balanced"). Any idea how to set up DrivePool to balance this way?
  2. I'm currently using a CloudDrive that is partitioned into many smaller parts and pooled together with DrivePool. The CloudDrive is encrypted and NOT set up to automatically mount when the OS loads. One downside of this is that if any applications expect a directory to exist, they won't be able to find it until I unlock the CloudDrive and DrivePool picks up at least one partition. I have an idea on how to resolve this, but I'm not sure exactly how to implement it. I'm thinking that if I either add a local drive to the existing pool, or create a new pool consisting of just the local drive plus the existing pool (nested), and somehow set the balancing rules so that the local drive is always 0% utilized, then: 1) the pool would always be available to the system (at least once the DrivePool service loads); 2) once the CloudDrive is unlocked, the local drive in the pool would not be utilized at all; and 3) if the CloudDrive is NOT unlocked, writes to the pool would be forced onto the local drive, but immediately offloaded to the CloudDrive once it IS unlocked. Does this make sense? And how (if possible) could I configure DrivePool to do this? A rough sketch of the behavior I'm after is below.
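     Here's the sketch I mentioned: conceptual only, with names of my own invention rather than DrivePool's actual API. (My understanding is that the Drive Usage Limiter balancer approximates this if you uncheck both duplicated and unduplicated content for the local drive, but I'd like confirmation.)

```python
# Hypothetical sketch of the placement rule I want; these names are
# mine, not DrivePool's API.

def choose_write_target(cloud_mounted: bool) -> str:
    """New files land on the cloud side whenever it is mounted; the
    local drive is only a fallback while the CloudDrive is locked."""
    return "cloud" if cloud_mounted else "local"

def on_cloud_mounted(local_files: list[str], move) -> None:
    """The moment the CloudDrive comes back, evacuate the local drive
    so it returns to 0% utilization."""
    for path in local_files:
        move(path, "cloud")
```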
  3. Most changelogs don't go in depth into the inner workings of added functionality. They only note that the functionality now exists, or reference some bug that has been fixed. Expecting to find detailed information on how duplication works in the CloudDrive changelog feels about the same as expecting Plex server changelogs to detail how temporary files created during transcoding are handled.

     When your cache is low on space, you first get a warning that writes are being throttled. At some point you get warnings that CD is having difficulty reading data from the provider (I don't remember the specific order in which things happen or the exact wording, and I really don't care to trigger the issue to find out). That warning mentions that continued failures could lead to data loss, the drive dismounting, or something along those lines. I'm 99.99% sure CD does not read data directly from the cloud; if the cache is full, it cannot download the data to read it. I've had this happen multiple times on drives that were otherwise stable, and unless there are some cosmic-level coincidences happening, there's no way I'm misattributing the issue when it occurs every time the cache drive fills up and I accidentally leave it in that state longer than I intended (such as leaving a large file transfer unattended). Considering CloudDrive itself warns about having trouble reading data when the cache stays saturated for an extended period, I don't think I'm wrong.

     Anyway, I've personally watched my writes get throttled while my cache drive continued to fill (albeit much more slowly) until it had effectively no free space at all, except for whenever CD managed to offload another chunk to the cloud, before the drive ultimately failed because it couldn't fetch data from the cloud fast enough anyway. I've had this happen multiple times on multiple devices, and the drives never had any other problems until I carelessly left a large transfer running that saturated the cache. I've had multiple CloudDrives mounted on the same system, using the same cache drive, one sitting passively with no reads or writes and the other handling a large transfer I accidentally left running, and sure enough, that drive dismounts and/or loses data while the passive drive remains in peak condition, with no issues whatsoever when I check it after the fact. Some toy numbers below illustrate why throttling alone can't save the cache.

     EDIT: 90% sure the error in this post is the one I was seeing (with different wording for the provider and drives, of course):
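     Here are the toy numbers I mentioned (made up for illustration, not measurements): the cache only drains as fast as the upload link, so throttled writes that still outpace it grind free space down to zero.

```python
# Toy model with made-up numbers: even throttled, the cache fills
# whenever incoming writes outpace the upload that drains it.
cache_free_gb = 50.0
write_rate_gb_per_h = 40.0    # throttled ingest from a large file copy
upload_rate_gb_per_h = 25.0   # what the provider connection can drain

hours = 0
while cache_free_gb > 0:
    cache_free_gb -= (write_rate_gb_per_h - upload_rate_gb_per_h)
    hours += 1

print(f"cache saturated after ~{hours} h")
# Once free space hits zero there is no room left to download chunks,
# so reads from the provider start failing too.
```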
  4. oh THAT'S where it is. I rarely think of the changelog as a place for technical documentation of features . . . Thanks for pointing me to that anyway. This has definitely not been my experience, except in the short term. Writes are throttled for a short while, but I've found that extended operation under this throttling can cause SERIOUS issues as CloudDrive attempts to read chunks and is unable to because the cache is full. I've lost entire drives to this.
  5. I fully understand that the two functions are technically different, but my point is that, functionally, both can affect my data at a large scale. My question is about the actual step-by-step result of enabling these features on an existing data set, not the intended end result (which it might never reach). An existing large dataset, with a cache drive nowhere near its size, is the key point of my scenarios. What (down to EVERY key step) happens from the MOMENT I enable duplication in CloudDrive? All of my prior experience with CloudDrive suggests that my 40TB of data would start being downloaded, read, duplicated, and re-uploaded, and that this would max out my cache drive, causing the CloudDrive to fail and either become corrupted or dismount, if hitting the 750GB upload cap doesn't do so first. As I stated in my first comment, there is NO documentation about how existing data is handled when features with this kind of scope are enabled, and that's really concerning when you're familiar with the problems you can run into with CloudDrive if you're not careful about things like the upload cap, free space on the cache, and R/W bottlenecks on the cache drive. As someone who has lost many terabytes of data to this, I am understandably reluctant to touch a feature that could actually help me in the long run, because I don't know what it does NOW. Some back-of-the-envelope numbers are below.
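     The back-of-the-envelope math, with my numbers (and assuming duplication re-uploads a full second copy of existing data, which is exactly the part that's undocumented):

```python
# Rough arithmetic with my numbers. Assumes enabling duplication
# re-uploads a full second copy of existing data; that assumption is
# precisely what the documentation doesn't confirm or deny.
data_tb = 40
daily_upload_cap_gb = 750   # Google's per-account daily cap

days_at_cap = data_tb * 1000 / daily_upload_cap_gb
print(f"best case: ~{days_at_cap:.0f} days pinned at the upload cap")
# ~53 days of saturated uploads, with every chunk staged through the
# cache drive along the way -- the exact failure mode described above.
```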
  6. Can you shed some more light on what exactly happens when these types of features are turned on or off on existing drives with large amounts of data already saved? I'm particularly interested in the details behind a CloudDrive + DrivePool nested setup (multiple partitions on CloudDrives pooled). Some examples:

     40+TB of data on a single pool consisting of a single CloudDrive split into multiple partitions, mounted with a 50GB cache on a 1TB drive. What EXACTLY happens when duplication is turned on in CD, or when file duplication is turned on in DP?

     40+TB on a pool which itself consists of a single pool. That inner pool consists of multiple partitions from a single CloudDrive. A second CloudDrive is partitioned, and those partitions are pooled together in yet another pool. The resulting pool is added to the first pool (the one that directly contains the data), so it now consists of two pools. Balancers are enabled in DrivePool, so it begins trying to balance the 40TB of data between the two pools. Same question here: what EXACTLY would happen?

     And, more simply, what happens on a CloudDrive if duplication is disabled?

     I wish there was a bit more clarity on how these scenarios would be handled. My main concern is that my 40TB of data suddenly attempts to duplicate all at once (or, in the second scenario, DrivePool rapidly begins to migrate 20TB of data), instantly filling my 1TB cache drive, destabilizing my CloudDrive, and resulting in drive corruption/data loss. As far as I can tell from the documentation, there is nothing in the software to mitigate this. Am I wrong? Some rough numbers for the second scenario are below.
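     Quantifying that second scenario with rough numbers of my own (not measurements):

```python
# My worry in the rebalance scenario, with rough numbers of my own.
# Every migrated byte is staged through the cache drive on its way up.
migrate_tb = 20       # half of 40TB moving toward the new sub-pool
cache_tb = 1
daily_upload_cap_gb = 750

print(f"cache holds {cache_tb / migrate_tb:.0%} of the migration at a time")
print(f"at the cap, ~{migrate_tb * 1000 / daily_upload_cap_gb:.0f} days to finish")
# 5% in flight and ~27 days of exposure -- unless the balancer throttles
# itself against the cache, which is what I can't find documented.
```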
  7. I have two machines, both running CloudDrive and DrivePool, and both configured in a similar way: one CloudDrive is split into multiple partitions formatted as NTFS, those partitions are joined by DrivePool, and the resulting pool is nested in yet another pool. Only this final pool is mounted via a drive letter. One machine is running Windows Server 2012, the other Windows 10 (latest). I save a large video file (30GB+) onto the drive on the Windows Server machine. That drive is shared over the network and opened on the Windows 10 machine. I then initiate a transfer, on the Windows 10 machine, of that video file from the network-shared drive to the drive mounted on the Windows 10 machine. The transfer runs slower than expected for a while. Eventually I get connection errors in CloudDrive on the Windows Server machine, and soon after that the entire CloudDrive becomes unallocated. This has happened twice now. I have a gigabit fiber connection, as well as gigabit networking throughout my LAN. I'm also using my own API keys for Google Drive (though I wasn't the first time around, so it's happened both ways). Upload verification was on the first time, off the second time. Everything else about the drive configuration was the same. Edit: to be clear, this only affects the drive I'm copying from, not the drive on the Windows 10 machine I'm writing to (thank god, because that's about 40TB of data).
  8. Just received this email from Google regarding my G Suite account. This won't affect CloudDrive, right? " . . . we’ll be turning off access to less secure apps (LSA) — non-Google apps that can access your Google account with only a username and password, without requiring any additional verification steps." Also worth noting: I couldn't remember how CloudDrive authenticates with Google, so I tried adding another account and received this message, which I don't remember seeing before. Looks like Google flagged CloudDrive for some reason?
  9. I've been using 1165 on one of my Win10 machines for the longest time with no problems, and I mean heavy usage: I once hit the 750GB daily upload cap for a week straight. The same version ran into the "thread aborted" issue within a couple of days on my Server 2012 machine, but 1171 fixed it. I need to install 1174 now to test.
  10. I was having this issue and after trying a bunch of different things, installing this version and rebooting did the trick. Thanks!
  11. Oddly, I did another chkdsk the other day and got errors on the same partition, but a chkdsk /scan and chkdsk /spotfix got rid of them, and the next chkdsk passed with no errors. I didn't notice any issues with my data prior to this, though. Going forward, I'm running routine chkdsks on each partition to be sure. If the issue was indeed caused by problems with move/delete operations in DrivePool, it would appear to be separate from the first issue I encountered (when I was only using one large CloudDrive), but maybe the same as the second time I experienced it (the first time I used the 8-partition setup). I was using ReFS at the time, though, so I'm not sure.

     I'll try to find some time to reproduce the error. In case anyone else with some spare time wants to try, this is what I'll be doing (a script for steps 5 through 7 is below):
     1) Create a CloudDrive on Google Drive. All settings on default, except the size will be 50TB, full drive encryption will be on, and the drive will be unformatted.
     2) Create 5 NTFS partitions of 10TB each on the CloudDrive.
     3) Add all 5 partitions to a DrivePool. All settings left on default.
     4) Add the resulting pool, by itself, to yet one more pool.
     5) Create the following directory structure on this final pool: two folders at the root (let's call them A and B); one folder within each of these (call them A2 and B2); finally, one folder inside A2 (A3).
     6) Within A3, create 10 text files.
     7) Move A3 back and forth between B2 and A2, checking the contents each time, until files go missing.
     This approximates my setup at a much simpler scale and SHOULD reproduce the issue, if I'm understanding Chris correctly about what could be causing what I experienced, and if that is indeed the cause.

     I plan on getting another set of licences for all 3 products in the coming weeks as I transition from hosting my Plex on my PowerEdge 2950 to my EliteBook 8770w, which has a much better CPU for decoding and encoding video streams (only one CPU vs two, obviously, but the server CPU had 1 thread per core, and the 8770w's i7-3920XM has much better single-threaded performance for VC-1 decoding). I probably won't have much time to attempt to reproduce the issue until then, but I'll let you know once I do.

     Finally, some questions: Is there any sort of best practice for avoiding these DrivePool issues, or any workaround for the time being? Do you know the scope of the potential damage? Will it always be some messy file system corruption that chkdsk can clean up, or is there potential for more serious data corruption with the current move/delete issues?
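     Here's a rough script for steps 5 through 7 (Python for convenience; the pool mount point is a placeholder you'd adjust to wherever the final pool is mounted):

```python
# Rough automation of steps 5-7 above. POOL is a placeholder for
# wherever the final pool is mounted; adjust before running.
import os
import shutil

POOL = "P:\\"  # hypothetical mount point of the final pool

# Steps 5-6: build A\A2\A3 (with 10 text files) and B\B2.
a3 = os.path.join(POOL, "A", "A2", "A3")
os.makedirs(a3, exist_ok=True)
os.makedirs(os.path.join(POOL, "B", "B2"), exist_ok=True)
for i in range(10):
    with open(os.path.join(a3, f"file{i}.txt"), "w") as f:
        f.write(f"test file {i}\n")

# Step 7: shuttle A3 between A2 and B2, checking contents each pass.
src = os.path.join(POOL, "A", "A2")
dst = os.path.join(POOL, "B", "B2")
for round_no in range(1, 101):
    shutil.move(os.path.join(src, "A3"), dst)
    src, dst = dst, src  # A3 now lives under what was dst
    count = len(os.listdir(os.path.join(src, "A3")))
    if count != 10:
        print(f"round {round_no}: only {count}/10 files present!")
        break
else:
    print("100 rounds with no missing files")
```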
  12. I feel like this kinda defeats the purpose of CloudDrive. I will say there are other tools that already do this (most obviously something like Google Drive File Stream, but there are other third-party, paid options that work with other cloud providers). Are those not options for you? To address your issue directly, though: I did post my workaround for this on another thread, and the first response on this thread covers part of it. Network share your drive over the LAN. What I do in addition is run OpenVPN on my Asus router to access my drive away from home if need be. I also use Solid Explorer on Android to access my files from my phone (after connecting with the Android OpenVPN client).
  13. I just network share the CloudDrive from the host (a 24/7 server), and then all my other systems on the local network can read/write to it. Every machine on my network is on gigabit Ethernet, so I don't see a noticeable drop in performance doing this. I also have gigabit upload and download through my ISP. For mobile access, my home network sits behind an Asus router with built-in VPN functionality running an OpenVPN server. I connect to it with OpenVPN on my Android phone and use Solid Explorer to mount the network share (having gigabit upload really helps here). MX Player with the custom codec available on the XDA forums lets me directly play just about any media on my CloudDrive, bypassing Plex entirely.
  14. To add on to this, keep in mind that team drives have a hard limit of 400,000 files and folders, and directories can only be nested 20 levels deep. I don't think the 20-level limit would affect CloudDrive much, but I'd imagine the 400,000-item limit would cause issues as your drive grows; a rough estimate is below.
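     Rough math behind that estimate (the chunk size is my assumption; CloudDrive stores a drive as fixed-size chunk files on the provider, so every chunk counts against the item limit):

```python
# Ceiling implied by the 400,000-item limit. The 20 MB chunk size is
# an assumption; substitute whatever your drive actually uses.
max_items = 400_000
chunk_mb = 20

max_drive_tb = max_items * chunk_mb / 1_000_000
print(f"~{max_drive_tb:.0f} TB of chunks before hitting the item limit")
# ~8 TB at 20 MB chunks, before counting any metadata files -- small
# next to the drive sizes people discuss around here.
```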
  15. Got some updates on this issue. I'm still not sure what caused the original corruption, but it seems to have affected only the file system, not the data. I'm not sure if it happened for the same reasons as in the past, but I'm happy to report that I was able to fully recover. The chkdsk eventually went through after a reboot and remounting the drive a few times. As I suspected above, only one partition of the CloudDrive was corrupted, and in this case the multi-partition CloudDrive + DrivePool setup actually saved me from quite a headache, as only the files on that partition had gone missing (since DrivePool deals with entire files at a time). I took the partition offline and ran chkdsk on it a total of 5 times (the first two being repeats of what I mentioned above, stalling after a certain point; the next 3 performed after the reboot and remounting) before it finally reported no more errors. I remounted the partition and, upon checking my data, found that everything was accessible again. Just to be sure, I have StableBit Scanner running in the background until it passes the entire cloud drive, though that's going to take a while on a 256TB drive.

     One thing that maybe someone could look into: the issue seemed to happen during a move operation on the CloudDrive. I had moved a bunch of directories to a different directory on the same drive (in one operation), and found that some files had not been moved once my data was available again. Maybe that's related to what caused the corruption in the first place, or maybe it's just coincidence.