Covecube Inc.

XAlpha

Members
  • Content Count

    21
  • Joined

  • Last visited

  • Days Won

    1

Everything posted by XAlpha

  1. Measuring and rebalancing work with 701, but the GUI has some glitches. And for whatever reason, when I reboot the server, any volumes without drive letters lose their labels and won't show up in the pool again until I assign a drive letter and restart the DrivePool service.
  2. I may be missing the issue - what's the downside of having both types in the pool?
  3. The issue also seems to involve file paths longer than 255 characters... Win10 is using the pool for File History, and some of those directory structures get pretty long. That also caused the measurements to hang. I'm waiting for the pool to "heal" and rebalance after removing the long file/directory names, along with re-populating the dedup'd drives. That will probably take a week or so... so stay tuned. A quick script for finding the offending paths is below.
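
    For anyone hitting the same wall, this is a minimal Python sketch for listing paths over the limit, assuming the pool is mounted at P:\ (substitute your own pool drive). The \\?\ prefix lets the walk traverse directories deeper than the classic limit instead of erroring out.

    ```python
    import os

    POOL_ROOT = "P:\\"   # placeholder pool root; substitute your own drive letter
    LIMIT = 255          # the classic MAX_PATH territory that trips up tools

    # The \\?\ prefix opts into extended-length paths so os.walk can
    # descend below the limit and report the full offending path.
    for dirpath, _, filenames in os.walk("\\\\?\\" + POOL_ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) > LIMIT:
                print(len(full), full)
    ```
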
  4. For the future, can we make sure both types work within the pool? I have data-aware SANs with tiered data on them that has to stay in the NTFS format to maintain compatibility with the SANs, but those SANs are pooled with stand-alone drives that are ReFS. That's working as it is, but if the pool had to be of a homogeneous file system, it would prevent me from operating the pool as it is, and I could never move to ReFS in the foreseeable future...
  5. Ditto. I had to undo all deduplication (that took about a week). DP does not like dedup yet, so it would seem.
  6. With build 692 I seem to be having issues with dedup - it's set up on a couple of the larger disks, and direct drive access through Windows is working fine. However, DP will not successfully finish measuring the pool. It will measure all the drives, except the drives with dedup enabled.
  7. XAlpha

    Measuring Speed

    I was using StableBit.DrivePool_2.2.0.652_x64_BETA.exe on WHS2011. I just upgraded to 655.
  8. XAlpha

    Measuring Speed

    "Measuring" is taking a very, very long time (days). Is there a setting that would allow it to measure multiple drives at once or to increase the threads to get this done faster? DP Pool is ~90 TB with 30-32 drives.
  9. XAlpha

    Duplication Warnings

    Remeasuring with 561 didn't fix it. And the problem originally occurred while using 634. I've re-installed 634; the pool has been measuring for over 24 hours. I'll let you know...
  10. XAlpha

    Duplication Warnings

    Yes - even after the file and folder were removed, and I checked all the individual drives to confirm they were gone from every one of them, the duplication warning still persisted.
  11. XAlpha

    Duplication Warnings

    Load cycle count, and yes, I'm using Scanner. No bad sectors; Scanner reports the drive healthy other than the SMART warning. And maybe? I'm not sure, as I successfully moved the folder off the pool onto a Drobo, and even with the folder gone from the pool, DP continued giving me the duplication error for that folder and file.
  12. XAlpha

    Duplication Warnings

    Kind of - one of the drives threw a SMART warning, and I have been having issues getting the pool to complete measuring, checking, and duplicating. Assuming the two might be related, I increased the duplication from 2x to 3x until the problem could be isolated and to ensure duplication was re-checked for all folders. And then I got that warning. I'm currently on Windows 8.1 Pro. I had previously used WHS 2011 and saved the directory in case I needed one of the backups. So no, the files were not in use by the OS at the time of this warning.
  13. XAlpha

    Duplication Warnings

    Never mind - I recreated the directory with a dummy text file with the same file name, and that resolved the discrepancy. It's still odd that it happened to begin with. The workaround, in script form, is below.
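
    In case it helps anyone else chasing the same ghost warning, the workaround amounts to the following Python sketch. The pool root P:\ is a placeholder; the file name is whatever path the duplication warning is complaining about.

    ```python
    import os

    # Placeholder path - recreate whatever path the warning names on your pool.
    ghost = r"P:\client computer backups\data.4096.127.dat"

    os.makedirs(os.path.dirname(ghost), exist_ok=True)
    with open(ghost, "w") as f:
        f.write("placeholder")  # dummy contents; only the name needs to exist

    # Once DrivePool re-checks duplication and the warning clears,
    # the dummy file and directory can be deleted again.
    ```
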
  14. XAlpha

    Duplication Warnings

    I'm trying to resolve a "duplication warning" that has come up: "There were problems checking, duplicating or cleaning up one or more files on the pool." The file is from the WHS client computer backup folder on the pool, "\client computer backups\data.4096.127.dat." DP couldn't resolve it, so I moved the folder out of the pool onto an independent drive. The folder and the file are no longer on the pooled drives, but I'm still getting the error, and DP can't seem to fix it. I've rebooted, uninstalled version 561 and installed the latest beta, and gone back to 561, but none of that has resolved the duplication warning. Please advise.
  15. Since you brought it up, any plans to implement a balancer by file size? That would allow DP to automatically shift large files to archive-oriented drives...
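
    To make the request concrete, the rule I have in mind is roughly the following Python sketch. This is hypothetical, not DrivePool's balancer plugin API; the drive sets and the 1 GiB threshold are made up for illustration.

    ```python
    # Hypothetical size-based placement rule - not DrivePool's plugin API.
    ARCHIVE_DRIVES = ["E:\\", "F:\\"]   # large, slow archive-oriented disks
    FAST_DRIVES = ["C:\\", "D:\\"]      # everyday working disks
    SIZE_THRESHOLD = 1 * 2**30          # 1 GiB - an arbitrary cutoff

    def target_drives(file_size):
        """Pick which set of drives a file should be balanced onto."""
        return ARCHIVE_DRIVES if file_size >= SIZE_THRESHOLD else FAST_DRIVES

    print(target_drives(5 * 2**30))    # 5 GiB file -> archive drives
    print(target_drives(10 * 2**20))   # 10 MiB file -> fast drives
    ```
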
  16. Is there a way to force a consistency check? If not, feature request please. :-)
  17. No, I mean a way to manually force a consistency check. There used to be an option for that in previous versions on WHS 2011.
  18. Is there a way to manually trigger a consistency check with DP 432? It was in the old 1.x version for WHS 2011, and if nothing else it gave a nice warm fuzzy feeling to press it from time to time.
  19. Windows only recognizes drives as fault tolerant if Windows itself is managing the dynamic volume; RAID controller volumes and Drobos both show up to Windows as not fault tolerant. So the DrivePool GUI would need a section that allows fault-tolerant disks to be designated. As for your point 2, that would be the downside. As I mentioned, a controller fault would still leave you vulnerable, but that is extremely rare, and most people would take that chance. To answer your question: no, a Drobo drive or a drive in a RAID pool cannot be pulled out and read as a standard NTFS volume. However, Drobo drives can be put in a different Drobo as long as it's a comparable model, and RAID arrays can be moved to a different controller. I realize it's not as "clean" a solution as having a bundle of a dozen disks in a server, but for instance I have three systems at work that have three Drobos plus an internal RAID array, and it'd be nice to be able to pool them all without double duplicating.
  20. Thanks, but that designates whether duplicated and/or unduplicated data may be placed on each disk. I'm proposing that if a disk is fault tolerant (an option to designate that would have to be made available), then the data that's on it isn't duplicated even if the folder containing it is set to duplicate. I realize this wouldn't protect against controller failure, but it would still protect against HDD failure, which is what happens most of the time anyway.
  21. I have a few systems with a similar hardware configuration - several internal hard drives (up to 10), each with its own drive letter, plus at least one external Drobo disk array. My conundrum has always been: include the Drobo in the pool, or leave it out? I use duplication on all my folders... Drobos are great; they're fault tolerant using either single or dual parity. However, when coupled with a DrivePool box, adding the Drobo to a pool that has all its contents duplicated wastes drive space on data the Drobo already protects. So it would be great to be able to qualify a disk as "fault tolerant," allowing DrivePool to write files/folders that are enabled for duplication to only the fault-tolerant disk (such as a Drobo), as sketched below. I have also had RAID-6 arrays that I broke up, putting each drive into DP individually; it would be nice to be able to mix and match single drives along with RAID arrays without wasting disk space.
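
    To spell out the logic: with a user-set fault-tolerant flag, a duplicated file could be satisfied either by the usual two copies on ordinary disks or by a single copy on a flagged disk. Here is a hypothetical Python sketch of that decision; the flag and the pool layout are invented for illustration, since DrivePool has no such setting today.

    ```python
    from dataclasses import dataclass

    # Hypothetical model: a disk flagged fault tolerant (a Drobo, a RAID-6
    # array) satisfies the duplication requirement by itself.
    @dataclass
    class PoolDisk:
        name: str
        fault_tolerant: bool = False

    def copies_needed(candidates, duplication):
        """Copies required for a duplicated file, given the disks it may land on."""
        if any(d.fault_tolerant for d in candidates):
            return 1          # the disk's own redundancy stands in for a duplicate
        return duplication    # otherwise keep the usual x2 (or x3) behavior

    pool = [PoolDisk("Internal 1"), PoolDisk("Internal 2"),
            PoolDisk("Drobo", fault_tolerant=True)]

    print(copies_needed([pool[2]], 2))   # 1 - the Drobo alone is enough
    print(copies_needed(pool[:2], 2))    # 2 - plain disks still duplicate
    ```
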