Covecube Inc.

All Activity


  1. Today
  2. I run DrivePool on a Windows Home Server 2011 HP machine. The pool is nearing capacity and I want to replace all 4 drives (1 TB) with 2 TB drives. If I do it one at a time, will the pool replicate across? Would it be better just to replace one 1 TB drive with a much bigger one, say 4 TB, which would give me ample space for the future? Should I upgrade to the latest DrivePool version?
  3. Thanks! Sounds like this is handled without needing a reupload of these files, right? I'm still on the fence because I don't know that I like the idea of losing control of the volume on which specific files/folders end up being stored.
  4. AFAIK, DP can use RAID arrays as disks in a pool. DP does not, however, provide RAID functionality itself (other than duplication, which is sort of a file-based RAID-1).
  5. Hi Stuart, What type/speed of disks are you using, what controller are you using and what are the ambient room temps like? I have a 24-bay LogicCase SC-4324S in a rack in my study, which is quite a small space, currently running with 18 drives, and even in the recent hot UK weather none of my drives got above about 34-35 degrees C. Right now (Saturday AM, ambient room temp of 22 degrees) they are mostly running at 25-26 degrees (as reported by StableBit Scanner). These are mainly 5400rpm drives and set to spin down when not in use, but even the few 7200rpm drives I have are only 2-3 degrees warmer. I did have an issue in the past where the HBA I was using wasn't properly supported in Windows Server 2016 and wouldn't allow the disks to spin down regardless of any power management settings, even though it had worked perfectly under Server 2012.
  6. So I have built a new Plex box out of a combination of spare parts and some new drives; the box is running Windows Server 2019. What has occurred over the past week has been a journey into discovering how much of a mess RAID support is in the Windows world right now.

     What I originally wanted was three 4 TB IronWolf drives in RAID-5 with a single 120 GB SSD as a write cache. I have a 10 TB Barracuda Pro as a backup drive for that array. My original plan was to use Intel IRST to manage this. Well, it turns out Intel IRST only supports drives on the Intel motherboard controller; it won't support other PCIe SATA controllers. So no go there.

     Next up, RAID-5 with Windows Disk Management. This works, but I can't easily expand the RAID with new drives and there is no way to add an SSD cache. Okay, let's try Windows Storage Spaces. This does work, but for a home user it is kind of a half-baked mess and the parity support is beyond awful. It actually has fantastic read speed, but the write speed is just ridiculous. Since the Storage Spaces GUI shoehorns you into a very specific type of setup, the only way to actually set up a storage pool with the configuration I wanted was to build the entire thing in PowerShell, which involved a lot of trial and error with a bit of head banging. It was possible, though.

     So I keep getting recommended to use DrivePool. My issue with DrivePool is that it means I lose parity (which is not the end of the world), but worse, I lose the benefit of RAID-5 combined drive read speed (this is a deal breaker). So my question comes down to: is there any way for DrivePool to incorporate RAID-5/parity arrays from Windows Disk Management or Windows Storage Spaces into its pools? Thanks
  7. Sorry to revive the thread, but I've seen something related to what I want to do, and it may add up to a lot of useful information for the future. In my case, is there a way to erase a folder that is corrupt, has read errors, and can't be deleted in the normal way? How should I proceed to delete it permanently? My Google Drive is 250 TB, fully encrypted (NTFS).
  8. Thank you for your informative post. I appreciate the response. I eventually worked out that I could see the file segments that I uploaded and that I couldn't see the actual content. I finished up going with Mountain Duck. (Albeit after purchasing a copy of Air File Drive at a good discount and then finding that I couldn't see the content that way either because I ignorantly had a checkbox unselected in the options dialog.) I thus achieved my ultimate goal of being able to transfer my files (photos) from OneDrive to SmugMug without having them sit on my hard drive (other than in the cache). Thanks again, Jeff
  9. You will never see the 1 for 1 files that you upload via the browser interface. CloudDrive does not provide a frontend for the provider APIs, and it does not store data in a format that can be accessed from outside of the CloudDrive service. If you are looking for a tool to simply upload files to your cloud provider, something like rClone or Netdrive might be a better fit. Both of those tools use the standard upload APIs and will store 1 for 1 files on your provider. See the following thread for a more in-depth explanation of what is going on here:
  10. You're just moving data at the file system level to the PoolPart folder on that volume. Do not touch anything in the CloudPart folders on your cloud storage. Everything you need to move can be moved with Windows Explorer or any other file manager. Once you create a pool, it will create a PoolPart folder on that volume, and you just move the data from that volume into that folder.
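To make the seeding step above concrete, here is a minimal Python sketch. The `seed_pool` helper and its PoolPart-matching logic are illustrative assumptions, not part of DrivePool itself; the same moves can just as easily be done by hand in Windows Explorer, and the real PoolPart folder name carries a GUID suffix that DrivePool generates.

```python
import shutil
from pathlib import Path

def seed_pool(volume_root: str) -> list:
    """Move a volume's existing contents into its hidden PoolPart folder.

    Hypothetical helper: assumes DrivePool has already created exactly one
    PoolPart.* folder on the volume (the suffix is whatever DrivePool chose).
    """
    root = Path(volume_root)
    poolparts = [p for p in root.iterdir() if p.name.startswith("PoolPart.")]
    if len(poolparts) != 1:
        raise RuntimeError("expected exactly one PoolPart folder on the volume")
    target = poolparts[0]

    moved = []
    # Materialize and sort the listing before moving, so we don't mutate
    # the directory while iterating over it.
    for item in sorted(root.iterdir()):
        # Skip the PoolPart folder itself and Windows system folders.
        if item == target or item.name in ("System Volume Information", "$RECYCLE.BIN"):
            continue
        shutil.move(str(item), str(target / item.name))
        moved.append(item.name)
    return moved
```

A same-volume move like this is a rename at the file system level, so even large media libraries relocate near-instantly; nothing is re-uploaded or re-copied.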
  11. Yesterday
  12. Has this happened to anyone else before? I hit "Update License" and it said it was successful, but I'm just sort of confused as to what happened and whether I need to check anything else.
  13. Last week
  14. That does not seem unreasonable to me. Having said that, and having done some programming myself, I can also easily see how that would not be implemented.
  15. It seems to me that if I tell DrivePool I want to remove three drives, then even in sequential operation it would have the 'sense' not to copy data to one of the drives queued for removal.
  16. Well, not incredibly... But it would have been more efficient if, for instance, you had set drive limits in the Drive Usage Limiter. Removing drives is a serial / single-"thread" operation, I think. I don't know how one could have known this easily.
  17. Am I just incredibly dumb or am I missing something in regards to removing multiple drives? I selected to remove three drives from the pool and now DrivePool is copying data from the first disk to one of the disks I also wish to remove. That seems incredibly dumb and redundant. Now I have to move all of the data again.
  18. Sorry to revive an older post, but I came across this one after a quick search... I also need an 8-port SATA card for my server; I'm using Windows 10. I've been running a HighPoint card I wound up with for several years, but it appears to be throwing some errors and it's probably time to replace it... Same as the OP, I also just need a simple JBOD solution, no RAID necessary. It just needs to be reliable (of course, right? haha), handle large (10+ TB) disks, and have decent/recent driver support. Any suggestions? Is the Supermicro card recommended a few years ago still a reasonable bet?
  19. @srcrist Thanks for your very detailed post. I'm reaching the limit of my first 16 TB (Google Drive) volume, and your setup looks more streamlined than mounting more and more separate CloudDrive volumes. I'm uncertain about the part I quoted above and am very afraid of making a mistake that might nuke my existing CloudDrive volume. I have slow upload speed; it took me months to upload 16 TB. How do you accomplish the above? Can I move the CloudPart directory of my initial CloudDrive volume to the DrivePool folder?
  20. Hi, maybe you can use Bitwar Data Recovery to scan the hard drive to find and recover the corrupted data. It is safe and free to use.
  21. Heya, I know there is an earlier thread with this question, but I didn't want to respond in an old thread. So here's my question: I'm running DrivePool with encryption via the DiskCryptor fork. DiskCryptor doesn't auto-mount, as I haven't encrypted the system drive. So when I log in to the PC I need to type my password manually, but DrivePool doesn't recognize the drives, so I manually restart the DrivePool service and it works. The relevant advanced settings:

      "BitLocker_PoolPartUnlockDetect": { "Default": true, "Override": null }
      CoveFs_BypassNtfsFilters (Default: "False")

      SOLVED: Managed to solve the DiskCryptor issue with a startup script. See my example:

      if exist \\?\Volume{70946c90-a592-11ea-91e8-b42e99cbce4b}\keyfile "C:\Program Files\dcrypt\dccon.exe" -mountall -p "YOURPASSWORD" -kf \\?\Volume{70946c90-a592-11ea-91e8-b42e99cbce4b}\keyfile

      I created a mount.cmd file with the line above. Then: Windows Start > Run > gpedit.msc, go to Computer Configuration > Windows Settings > Scripts > Startup, click Show Files, paste mount.cmd there, click OK, then click Add and choose mount.cmd. I had to create a keyfile for every drive in addition to a password (same password and keyfile for all drives). The volume ID can be found in DiskCryptor. My keyfile is on a USB drive. Couldn't get a symlink from the USB drive to the C drive to work, but no biggie.
  22. Is there a user guide for SSD Optimizer so I can set it up correctly ?
  23. nwc

    Duplication speed slow

    Roughly how long would it take to duplicate 10 TB of data? Mostly video files. I'm only at 500 GB duplicated after around 12 hours.
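For what it's worth, the figures in the question already imply an estimate by simple arithmetic. The sketch below just restates the poster's numbers (500 GB in roughly 12 hours, 10 TB total, taking 1 TB = 1000 GB); it is a back-of-the-envelope projection that assumes the duplication rate stays constant.

```python
duplicated_gb = 500           # duplicated so far (from the post)
elapsed_hours = 12            # time taken so far (from the post)
total_gb = 10 * 1000          # 10 TB target, using 1 TB = 1000 GB

rate_gb_per_hour = duplicated_gb / elapsed_hours              # ~41.7 GB/hour
remaining_hours = (total_gb - duplicated_gb) / rate_gb_per_hour
print(f"~{rate_gb_per_hour:.1f} GB/hour, ~{remaining_hours / 24:.1f} days to go")
# → ~41.7 GB/hour, ~9.5 days to go
```

At that observed rate, the remaining 9.5 TB would take on the order of a week and a half, so the 500 GB/12 h figure is not obviously out of line for 10 TB of data.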
  24. A very newbie question. I am trialing CloudDrive. I have been testing the upload, via the virtual drive, of some miscellaneous files to a Microsoft OneDrive account. When I do so, should I expect to see the uploaded files when I log in to my account in my browser? Thank you, Jeff
  25. Do I need to deactivate the license before I reinstall Windows? I'm using the same computer.
  26. Resolved, but it took a while to find the issue because nothing in the DrivePool logs helped point to the source of the problem, and the behaviour of DrivePool mentioned above was a complete misdirection. It turned out that a folder on one of the existing drives in the pool had broken permissions. For some reason this did not create any problems opening the file duplication settings unless I added the new drive; even removing the new drive allowed it to work again. So the symptom appeared exclusively when the new drive was in the pool, but the error was caused by a folder on an existing drive.

      No logs told me much of anything. The logs don't mention the file or folder that caused the error. This seems like a massive oversight, as it should be really easy to log, would have made the cause immediately apparent, and would have saved a lot of time. Likewise, the log let me down when removing drives from the pool yesterday: when the same issue occurred, the only message I got when attempting the removal was 'access denied', and the drive failed to remove. I ended up removing it with the "damaged" option and then moving the affected files back into the pool manually. It turned out that files with changed permissions, untouched by yesterday's drive removal and already sitting on a disk that stayed in the pool, caused the issues after adding the drive back in, despite the fact that they weren't on the drive being added... This could really do with better logging that actually tells us which file caused an error to be thrown. At the very least, that would make tracking down the causes simple.
  27. A freshly formatted drive with no contents at all is giving me an "Unable to enumerate folder" error when I try to edit folder duplication options, preventing me from changing them *at all*. What I did: I removed a drive from my pool that had a read error, formatted the drive (non-quick), scanned the whole drive with Scanner to see if the error still occurred (it does not; it now reports healthy in Scanner), and added the drive back to the pool. Then I went to Manage Pool -> File Protection -> File duplication on the pool and got an error popup saying, "Unable to enumerate folder. Access is denied". No folders are listed in the file duplication box now (it works when the drive is not in the pool), so it is totally unusable. This drive is totally empty; it was literally just formatted, and it was not a quick format, so the format took many hours and destroyed anything that even theoretically was on the drive! I checked the ownership of the drive, and it is owned by Administrators; there should be no access problem as far as I can tell. The drive was previously in the pool before I reformatted and rescanned it to ensure the error was smushed, and I did not have this problem accessing file duplication - I have several folders configured to duplicate!
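On the "Unable to enumerate folder" reports above: since the error popup doesn't name the offending folder, a small script can walk the pool and list every directory that refuses enumeration. This is an illustrative sketch, not a DrivePool tool; the `find_denied_folders` helper is hypothetical, and on Windows you would point it at a drive root such as `D:\`.

```python
import os

def find_denied_folders(root: str) -> list:
    """Walk a directory tree and collect every folder whose listing raises
    PermissionError - a quick way to locate the source of an
    'Access is denied' enumeration failure."""
    denied = []

    def on_error(err: OSError) -> None:
        # os.walk passes the OSError from a failed scandir/listdir here;
        # err.filename is the directory that could not be enumerated.
        if isinstance(err, PermissionError):
            denied.append(err.filename)

    for _ in os.walk(root, onerror=on_error):
        pass
    return denied
```

An empty result means every folder under the root could be enumerated by the account running the script; note that a service running under a different account (as DrivePool does) may still be denied where an interactive administrator is not.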