Covecube Inc.

stuart475898

  1. I'd like to have real-time parity stored on a dedicated parity disk. SnapRAID exists, but it doesn't do real-time parity, and if a disk goes you lose availability of those files until you can get a new disk and rebuild from parity. DrivePool with built-in parity would beat RAID: if a disk fails along with the parity disk, you have still only lost the files on the original disk, and while the rebuild goes on you retain availability of the pool and of the files on the faulty disk.
  2. Hello, I'm having trouble removing a CloudDrive disk from DrivePool, which I believe is largely due to a ~215GB file that DrivePool keeps failing to copy to a local disk. I figure I may have more success with Robocopy, and was wondering whether it's possible to Robocopy the file directly from the CloudDrive PoolPart folder to the PoolPart folder on one of the local disks? If not, I'll Robocopy it to a non-pooled disk. Thanks Stuart
  3. Hello, I need to remove all files from my CloudDrive, as some backups have ended up on there, which is slaughtering my internet allowance. I'm trying to remove the CloudDrive from DrivePool, but I keep hitting errors about missing files, and CloudDrive is complaining about issues with the cache. I'd like to move the CloudDrive cache off the spinning disks it's on now and put it onto an SSD. When I tell CloudDrive to detach the disk, however, it states that access is denied once everything queued for upload is complete. Is there a solution to this problem? Thanks Stuart
  4. Having tried a few different types of storage 'systems' for my VMs, the most recent being SS, I've decided to go back to straight-up SSDs + dedupe, as that's given me the best performance and capacity. Normally I would go with a RAID 1 or a RAID 5, but some people appear to have had success with storing VMs on DP. I've done some searches and found some older posts about VMs + DP and dedupe + DP, but wanted to get an idea of where things stand with it all now. The plan is to start off with 2x SSDs added to a pool, both with NTFS dedupe enabled. I will then selectively duplicate VMs, although as the storage grows I will likely duplicate the whole pool. When I want to grow, I'll add another SSD and let re-balancing do its job. Is this viable? Has anyone had any experience with this? Ultimately I'd like to see if I can extend it to a failover cluster using StarWind VSAN, but I'm a while off that yet. Thanks for the help
  5. Thanks for the reply. Thinking about it, you can more or less ignore the Virtual SAN bit, actually. I am just asking: can I present 4 normal file shares to a VM, have CloudDrive 'mount' them as disks and then add them into DrivePool? Will CloudDrive/DrivePool gracefully handle losing 2 of the file shares in the event a host has to go down? Stuart
  6. Hello, I have recently been playing with a pair of microservers, and have put Server Core 2016 on them plus StarWind Virtual SAN, giving me a nice Hyper-V failover cluster. If testing this turns out well, I'd like to use it for both the home lab and the normal home server. The issue is that all of our films, TV series, files etc. come in at around 8TB, which won't be part of the virtual SAN. This means the 4x 3TB disks I currently have will be distributed between the microservers. I'd like to continue using DrivePool, and it looks like the best way to achieve what I want is to share out each individual hard disk and add the lot into CloudDrive running on a highly available VM that also runs DrivePool. I don't think I will want to use any CloudDrive features such as pre-fetching or local caching; I only want the shares presented as physical disks for DrivePool. What are everyone's thoughts on this? When one of the microservers is shut down, the VM running DrivePool should fail over and continue to provide a service. I take it CloudDrive will gracefully handle the shares going missing from the shut-down server?
I have included a diagram below to try and articulate what I am getting at better:

[Diagram: each of the three disks (SSD, HDD, HDD) in both microservers is SMB-shared individually to a VM running DrivePool and CloudDrive; the resulting pool is then shared out via SMB. The two microservers also provide the Virtual SAN CSV that hosts the VMs.]

Hope that makes sense. Thanks for the help! Stuart
  7. Hello, so I finally snapped up some 4TB external drives in today's sales, and I want to use them for off-site backups. To do this, I would like to bring them home, plug them into the storage server and then simply robocopy everything on. If I create a new pool solely for these disks, will DrivePool 'gracefully' handle them being disconnected when I take them off site, and then reconnected every couple of weeks when they are brought home? Thanks for the help Stuart
  8. Thank you for the reply. Not a problem. For now, I am using the following PowerShell script to move everything. It won't copy files at the top level, only folders, however:

     Get-ChildItem P:\ -Directory | ForEach-Object {
         robocopy "P:\$_" "\\tv-server\p$\$_" /E /W:5 /R:10 /MT:32 >> P:\backup.log
     }
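     (A possible variant, untested here and assuming the pool is still mounted at P: with the same destination share: robocopy invoked without /E or /S copies only the files sitting directly in the source folder, so a single extra pass can pick up the top-level files the loop misses.)

     # Sketch: first copy the files directly in P:\ (no recursion without /E or /S),
     # then loop over the top-level folders recursively, as in the original script.
     robocopy P:\ \\tv-server\p$ /W:5 /R:10 /MT:32 >> P:\backup.log
     Get-ChildItem P:\ -Directory | ForEach-Object {
         robocopy "P:\$_" "\\tv-server\p$\$_" /E /W:5 /R:10 /MT:32 >> P:\backup.log
     }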
  9. Hello, I'm having exactly the same issue as described here. I take it this still isn't fixed in the current stable release of DrivePool, and I need to use the beta to robocopy from the pool root? Thanks Stuart
  10. Never mind. I ran the Re-Measure function on the pool and it is now reporting correct usage.
  11. Hello, I've recently chucked 3 disks into a new pool, 2 of which had a load of data on them. I moved the data into the pool by cutting and pasting it into the hidden PoolPart.xxx folders, which I understand should 'work' instead of moving it into the logical drive. However, DrivePool is now reporting 2.21TB of unduplicated data and 3.13TB of other data. When I go to the pool's logical drive and select all files/folders to check their size in Properties, it gives the total size as 5.32TB. I thought Other represented files not part of the pool, and yet they appear to be part of it in the logical drive. Does anybody know why this is happening? Thanks for the help. Stuart
  12. Hello, I'm about to start a trial of DrivePool when I rebuild the home server, and was wondering what happens at the end of the trial period if I haven't bought it by then? Will the pool still work normally, but I won't be able to make any changes via the program? Thanks Stuart