Covecube Inc.




Posts posted by Umfriend

  1. Actually, if you have duplication then this procedure is, IMHO, incomplete. When you move files out of the poolpart.* folder, DP may remeasure and suddenly find duplicates are missing and re-duplicate.

    If you have duplication, you can just power off the machine, physically remove the drive, reboot and then remove the missing/faulty drive from the Pool through the GUI. Remeasure/re-duplicate and it is done.

    If you do not have duplication, then I would, with the new drive attached:

    1. Stop the DrivePool service (so DP will not interfere).
    2. Copy the data from the faulty HDD's poolpart.* folder to the new drive (to ensure you do not perform any unnecessary writes on the faulty drive).
    3. Power off, physically remove the faulty drive and reboot.
    4. Start the DrivePool service if necessary, then remove the faulty drive from the Pool through the GUI.
    5. Add the new drive to the Pool.
    6. Stop the service again and move the contents on the new drive into the poolpart.* folder on that drive.
    7. Restart the DrivePool service and remeasure.

    And it is done. I think.
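    A rough sketch of that no-duplication route from an elevated command prompt. The service name and the PoolPart.* folder names below are assumptions/placeholders, so check yours first (e.g. with `sc query` and `dir /A:H`):

```shell
REM Sketch only -- "DrivePoolService" is my guess at the service name, and
REM PoolPart.xxxx / PoolPart.yyyy are placeholders for the real folder names.
net stop DrivePoolService

REM Copy (not move) the pooled data off the faulty drive F: to the new drive G:,
REM retrying each failing file only once so the faulty disk is read as little
REM as possible.
robocopy "F:\PoolPart.xxxx" "G:\FromOldDrive" /E /R:1 /W:1

REM ...power off, swap drives, reboot, remove the old drive from the Pool and
REM add the new one through the GUI, then move the copied files back in:
net stop DrivePoolService
robocopy "G:\FromOldDrive" "G:\PoolPart.yyyy" /E /MOVE
net start DrivePoolService
```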

  2. Moving data to the Pool while retaining the data on the same drive is called seeding and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive.

    I am not sure but I would think you would first set settings, then do the seeding.

    (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will read the drives on the spot and indeed show the "sum". Duplicate filenames will be an issue, I think. I think that when DP measures the Pool it will either delete one copy (if the name, size and timestamp are the same, I believe) or otherwise inform you of some sort of file conflict. This is something you could actually test before you do the real move (stop the service, create a spreadsheet "Test.xlsx", save it directly to a PoolPart.*\some folder on one of the drives, edit the file, save it directly to a PoolPart.*\some folder on another drive, start the service and see what it does?).
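    That duplicate-filename test could look something like this from a command prompt (the drive letters, PoolPart.* names and the service name here are all assumptions, not the real names on your system):

```shell
REM Sketch of the conflict test described above; adjust names to your setup.
net stop DrivePoolService

REM Create the same subfolder inside the hidden PoolPart.* folder on two drives.
mkdir "D:\PoolPart.aaaa\SomeFolder"
mkdir "E:\PoolPart.bbbb\SomeFolder"

REM Put a copy of the test file on the first drive...
copy "Test.xlsx" "D:\PoolPart.aaaa\SomeFolder\"
REM ...edit Test.xlsx so the two copies differ, then put it on the second drive.
copy "Test.xlsx" "E:\PoolPart.bbbb\SomeFolder\"

net start DrivePoolService
REM Now remeasure in the DrivePool GUI and see how it handles the duplicate.
```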

    DP does not go mad with same folder names, some empty, some containing data. In fact, as a result of balancing, it can cause this to occur itself.

    I have no clue about snapraid. I would speculate that you first create and populate the Pool, let DP measure and rebalance and then implement snapraid. Not sure though. You may have to read up on this a bit and there is plenty to find, e.g. https://community.covecube.com/index.php?/topic/1579-best-practice-for-drivepool-and-snapraid/.

  3. So I don't really know. I have a SAS HBA, an IBM M1015, also called a Dell H310 or somesuch. I never understood the LSI chip/card numbering.

    In any case, I did have to get the latest BIOS for my card, P20* or something. Also, with Scanner, I had to go to Settings -> Advanced Settings and Troubleshooting -> Advanced Settings -> DirectIO and check Unsafe. Never had an issue with that and SMART is read. I would try the Scanner setting first, reboot and see how it goes.

  4. Not a dev either.

    Another problem might be that once one starts writing a file, it may not be yet known how large that file will be. Sure, a simple copy should not have that issue but what if you save a stream of as yet unknown length? Writing to the disk with most space free seems to be the safest option (and personally, I like safe). Also, I find it hard to think of a scenario where a subsequent reshuffle of files would measurably impact performance.

  5. So many have no such issues at all. Could it be that something somewhat specific to your situation is going on? Some here will be trying to help, but it helps if you provide a bit of sensible information. So, again, have you checked Event Viewer (for IDE/ATAPI/DISK errors specifically, but any other errors might be relevant) and what model HDD is this exactly?

    Also, can you try to copy some representative files to that drive F:\ directly instead of to the Pool?

  6. I know there is an SSD Optimizer plugin, but AFAIK it only serves as a write cache; you can't dictate which files should be on the SSDs for fast reads.

    There are also File Placement Rules with which you can map certain folders to specific SSDs/HDDs, but I don't think you can use a single SSD for both functions there (not sure, maybe you can).

  7. Don't panic. If the drives do not show up in BIOS, chances are you made some sort of silly mistake that will be easily correctable. Just take the drives out, connect just one and see if you can get that one recognised. If that works, try it on a different port and with a different power cable. Add drives back one by one as you go.

    I had a PSU with detachable cables. None of my HDDs worked. Turned out the cables were not connected to the PSU. A brief scare for a few minutes, but only something silly. Probably something like that in your case.

  8. Have you tried reading https://stablebit.com/?

    Anyway, DP combines volumes into one large NTFS volume. It can have redundancy if you want, either for the entire "Pool" or just certain folders therein. If one drive fails, the most you can lose is the data on that one drive; the data on the other drives will still be available. If everything was redundant (we call that x2 duplication), then the Pool will re-duplicate the lost files, provided you have enough space.

    Each drive you add to the Pool is not exclusively for the Pool. DP will create a hidden PoolPart.* folder on it, within which the Pooled files will be stored. You can use the rest of the drive outside of the Pool as well. DP simulates an NTFS drive, but it also uses plain NTFS on the drives that are part of the Pool, so any recovery tool that works on NTFS will work.
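    You can see that hidden PoolPart.* folder yourself from a command prompt (D: and PoolPart.xxxx below are just example names, not the actual ones on your drive):

```shell
REM The PoolPart.* folder is hidden, so a plain dir will not show it;
REM /A:DH lists hidden directories only.
dir /A:DH D:\

REM The files inside it are ordinary NTFS files; PoolPart.xxxx is a
REM placeholder for the actual folder name on your drive.
dir /S "D:\PoolPart.xxxx"
```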

    x2 duplication is a bit like RAID-1 in the sense that it requires twice the space to store data. Parity/checksums are not supported by DP.

    You might be able to learn a lot actually from just reading a few threads here on the forum.
