Covecube Inc.

RG9400

Members
  • Content Count: 26
  • Joined
  • Last visited
  • Days Won: 1

RG9400 last won the day on February 20

RG9400 had the most liked content!

About RG9400

  • Rank: Member

  1. RG9400

    WSL 2 support

    The DrivePool disk works in Docker containers using a WSL2 backend, but the performance is abysmal, especially compared to the CIFS mount I was using before. I've gone back to CIFS for now. A Plex library scan took almost 2 hours using the DrivePool automount. As a CIFS volume, the same folder was scanned in 5 minutes.
  2. RG9400

    WSL 2 support

    I can confirm that my DrivePool is now accessible from WSL 2, and I can mount and use it from Docker containers like any other DrvFs mount (automounted Windows volumes).
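    A minimal compose sketch of that kind of setup might look like the one below, assuming the DrivePool volume has drive letter D: on Windows (so WSL 2 automounts it at /mnt/d through DrvFs); the plex service name, image, and /media container path are just placeholders.

    services:
      plex:
        image: plexinc/pms-docker   # placeholder image; any container works the same way
        volumes:
          - /mnt/d:/media           # DrivePool drive D: as automounted by WSL 2 (DrvFs)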
  3. RG9400

    WSL 2 support

    In the meantime, if anyone wants to get this working with Docker for Windows using the WSL 2 backend, it doesn't take anything too complex. If you use Compose, you can create a named volume like the one below and then mount it into other containers. Performance is what you would expect from a CIFS mount; I'm not sure whether DrvFs is better or not, but it works at least.

    volumes:
      drivepool:
        driver_opts:
          type: cifs
          o: username=${DRIVEPOOLUSER},password=${DRIVEPOOLPASS},iocharset=utf8,rw,uid=${PUID},gid=${PGID},nounix,file_mode=0777,dir_mode=0777
          device: \\${HOSTIP}\${DRIVEPOOLSHARE}
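    A hypothetical service stanza consuming that named volume might look like the sketch below; the plex service name, image, and /media container path are placeholders, and the volumes block above has to sit in the same compose file.

    services:
      plex:
        image: plexinc/pms-docker   # placeholder image; any container works the same way
        volumes:
          - drivepool:/media        # the CIFS-backed named volume defined above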
  4. RG9400

    WSL 2 support

    +1 from my side. I've been using WSL 2 extensively, and it's been hard to work around DrivePool's lack of support. Windows is heading in a direction where it works in harmony with Linux, so it would be nice for the software to support that. I am able to cd into various directories on the drive that Windows automounts natively, but I can't actually list any files, and no software running within WSL 2 can see them either.
  5. I did some reorganizing, and I basically created a pool of pools. The pool contains an HDD pool (a bunch of local hard disks) and a CloudDrive pool (a bunch of CloudDrive partitions). Now the optimizer plugin seems to be available, and I can properly set the HDD pool as the SSD and the CloudDrive pool as the Archive. This seems fairly close to what I want, but I still don't know how the balancing would work.

     1. Does it try to move *all* files off the SSD into the Archive in the first balancing run? Does it run for a set amount of time, or once it starts, will it try to move everything? The reason I ask is that I have a ton of data on the HDD pool that cannot all move to the CloudDrive pool right away. I only see a way to balance immediately or to balance at a set time every day, which maybe indicates there is no way to move the files gradually over time.
     2. Are the files that are being moved inaccessible during that time? Can this create issues if applications like Plex are running?
     3. If the balancing is running and the files being moved are too large for my CloudDrive cache, I assume writes will be slowed and the whole balancing task will still run as data is uploaded from the CloudDrive. If, during this time, I add a file, will it get placed on the HDD pool since that is still the SSD? Basically, could I theoretically be running balancing 24/7, where files are added to the HDD pool as others are moved to the CloudDrive pool? The concern would be that those files may be inaccessible while being moved, and I guess my cache drive would perpetually be at 99% capacity, with slowed writes and heavy wear.
  6. Yeah, I knew about the cache drive limitation, which is unfortunate. I was actually thinking of doing it the other way around: the CloudDrive remains on a single SSD cache, and I add a local hard drive to a pool with the CloudDrive. I set DrivePool to download to the local HDD first, and then use the SSD Optimizer or some other balancing mechanism to move files from the HDD to the CloudDrive. In this manner, the CloudDrive cache remains on a single SSD, but I have an upload buffer via the HDDs. I am not sure if this is feasible; I think the biggest concern would be how to move the files from local to CloudDrive, given that the pool will not see the space of the underlying cache.

     EDIT: It does not seem like my pool with CloudDrives in it allows the balancing plugins to actually work. The options seem disabled, though I felt the SSD Optimizer plugin was close to what I wanted in theory, where the HDD acted as an "SSD" and the CloudDrive acted as the Archive.
  7. I've been thinking about a new setup, and I wanted to float an idea to see if it works (pros/cons). Right now, I have my CloudDrive mounted to a single Optane SSD (C:). This works great for speed, but the drive is limited to 1TB, and with slow upload it is almost always full, so it's hard to copy new data over. For this reason, I have a bunch of HDDs that I added to a DrivePool pool along with the CloudDrive, and I copy new data to those HDDs before manually moving it into the PoolPart folder on C: once the most recent data has uploaded. It is manual and cumbersome. Could I do something like this instead? I create a pool of my HDDs, then add that to the CloudDrive pool. I set up the CloudDrive pool to download new data to the HDD pool, and then use the plugin to move data from the HDD pool to the CloudDrives. This way, my pool sees all the data, but the underlying locations are managed automatically. If this scenario is feasible, I do have a few questions.

     1. If I have a file on the HDD portion and I do a "move" via my pool, will it remain on the HDD portion? Will it remain on the same disk it was downloaded to initially?
     2. Can I control the plugin so it moves data to the CloudDrives based on the cache drive's free space? The CloudDrives will always look empty/have tons of space, but my underlying cache may not. I am concerned that the plugin will constantly be trying to move data to the cache drive, resulting in it being full 100% of the time and slowing writes down.
  8. You should be able to Re-Authorize the drive, which would start using the new API key.
  9. I just "Re-Authorized" the drive. You can check by going to the API console; you should see queries for the Google Drive API. The Quotas tab is what I am using to monitor it.
  10. The main reason is that Windows cannot run various fixes and maintenance tasks (e.g. chkdsk) on drives larger than 64TB. It also cannot mount a partition larger than 128TB. Some people are also concerned that, historically, various partitions got corrupted due to outages at Google, so by limiting the size of any individual partition, you also limit the potential losses.
  11. Appreciate it, and love the vision for this cloud dashboard. Very excited to see future new features.
  12. That makes sense, and it was also my experience after enabling this new feature. One addition that could really help would be to display a changelog alongside the release notification. It helps to know which new features/bugfixes are being pushed in each release, and it's a bit cumbersome to find the various changelogs for each application (it seems beta and final releases for each app have separate txt files).
  13. I saw that the latest dev beta version of CloudDrive allows for automated updates via StableBit Cloud. This seems like a useful feature, but I wanted to know exactly what that means before I turn it on. Right now, after updating CloudDrive, I have to restart the computer due to something with the drivers. If an automated update is downloaded, would that mean my computer restarts automatically, would it need to be restarted manually, or can the various applications now update without a restart at all? My main worry is that by enabling automatic updates, I end up with something like Windows updates, where an update is pushed that forces the computer to restart (either immediately or on a delay).
  14. RG9400

    Stability issues

    Do you have pinned data duplication enabled? In my case, I found that when it ran weekly cleanups, memory usage increased dramatically. I am on .1184 and still get that if it runs cleanup, so I turned off pinned data duplication for now. I do notice my system is a bit unstable, and I am not necessarily sure if CloudDrive is the culprit, but with CloudDrive no longer doing weekly cleanups, things are running more or less smoothly.

    I had found .1145 to be very stable, and if you are convinced the BSOD issue is due to CloudDrive, it may be worth downgrading to that version to see how it works. Just uninstall CloudDrive via the Windows Add/Remove Programs feature, restart your computer, download and install .1145 from http://dl.covecube.com/CloudDriveWindows/beta/download/, and run cleanup. If you had pinned data duplication enabled, you would also want to turn that off now. That should work, and please report back if it helps; it would give me some evidence as to whether my issues are similar to yours or separate.
  15. Current: I have a CloudDrive with multiple partitions, and I have pooled them together using DrivePool. This is mounted on my SSD, which has around 1TB of space, with 200GB set for the CloudDrive cache (expandable). I also have a bunch of HDDs storing local data that has yet to be uploaded to the CloudDrive. I added these HDDs to the aforementioned pool so that applications like Plex can access the files before they are uploaded to the CloudDrive. Since space is limited on my cache drive, the upload process has been a bit of a pain. I currently check whether there is space on the cache drive as data is constantly being uploaded, manually move files out of an HDD's PoolPart folder into that drive's own directory, and then copy them back onto my pool, where they are now placed onto the CloudDrive partitions because those have a lot more free space than the HDDs. Then I wait a day until the data is uploaded and repeat.

      Goal: Would there be a way to use DrivePool to automate the above? Basically, as my cache drive empties via uploading data (based on my config, my cache drive itself is not in any pool), the local HDDs move data over to the CloudDrive partitions. Ideally this would happen in the background, and for all application purposes, the pool itself remains the same; only the files shift from HDD -> Cache -> Cloud. Then, when I download future data, it could be placed on the local HDDs and added to the queue. Basically, my goal is to let the multiple HDDs serve as an extension of the upload queue while keeping the cache portion on the SSD itself. I feel like the SSD Optimizer is close to doing this, but as I understand it, the problem is that it uses free space from the CloudDrive partitions instead of from the cache drive itself, so the partitions might only be 5% full while the cache drive is 95% full.