Posts posted by Shane

  1. If you're manually moving new files into the pool via the hidden poolpart folders as per Q4142489, it is up to you to ensure they do not overlap existing folders/files in the pool.

    This is because DrivePool's duplication works via having the same file exist in the same path on multiple drives in the pool.

    For example, say you have a pool P consisting of drives D and E, whose contents are as follows:

    d:\poolpart.1\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    d:\poolpart.1\folder1\file2 --> p:\folder1\file2
    d:\poolpart.1\folder1\file3 --> p:\folder1\file3
    e:\poolpart.2\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    e:\poolpart.2\folder1\file4 --> p:\folder1\file4
    e:\poolpart.2\folder2\file1 --> p:\folder2\file1
    e:\poolpart.2\folder2\file2 --> p:\folder2\file2

    If you then had a new drive F you wanted to manually seed into the pool as per Q4142489, with new (i.e. different to the above) content as follows:

    f:\folder1\file2 - - -> f:\poolpart.3\folder1\file2
    f:\folder2\file3 - - -> f:\poolpart.3\folder2\file3

You would first have to rename F's folder1, folder2, file2 and/or file3 before moving them into any hidden poolpart, as otherwise they would overlap with the existing \folder1\ and \folder2\ as follows:

    d:\poolpart.1\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    d:\poolpart.1\folder1\file2 --> p:\folder1\file2 <-- this existing file is in conflict with a new file
    d:\poolpart.1\folder1\file3 --> p:\folder1\file3
    e:\poolpart.2\folder1\file1 --> p:\folder1\file1 <-- this is a duplicated file
    e:\poolpart.2\folder1\file4 --> p:\folder1\file4
    e:\poolpart.2\folder2\file1 --> p:\folder2\file1
    e:\poolpart.2\folder2\file2 --> p:\folder2\file2
    f:\poolpart.3\folder1\file2 --> p:\folder1\file2 <-- this new file is now in conflict with an existing file
    f:\poolpart.3\folder2\file3 --> p:\folder2\file3 <-- this new file is now in the same folder as two existing files

    @Christopher (Drashna) I recommend that the Q4142489 wiki entry should mention this explicitly; e.g. by instructing the user in step 4 to "First, check that the folder structure you intend to move into the pool does not already exist in the pool, unless your goal is to merge the content of those folder structures together."
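The overlap check described above can be automated before seeding. Here's a minimal sketch (my own helper, not a DrivePool tool) that compares relative file paths under the existing poolpart folders against the drive you intend to seed:

```python
import os

def find_overlaps(existing_poolparts, new_root):
    """Return relative paths under new_root that already exist under any
    of the existing poolpart folders (case-insensitive, like NTFS)."""
    def rel_files(root):
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                yield os.path.relpath(os.path.join(dirpath, name), root).lower()

    existing = set()
    for root in existing_poolparts:
        existing.update(rel_files(root))
    return sorted(p for p in rel_files(new_root) if p in existing)
```

Any paths it reports would need renaming first, unless your goal really is to merge those folder structures together.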

2. The method would work, though to avoid any potential issues should the old drive(s) be later reconnected to the drivepool computer, you'd want to either:

    • stop the DrivePool service and rename the hidden poolpart folder (e.g. just put an 'x' in front of 'poolpart') on the drive you plan to swap out, before shutting the computer down to physically swap out that drive
    • or rename/remove the hidden poolpart folder on the old drive via another computer after the drive is taken out of the drivepool computer
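The rename step above is just prefixing the hidden poolpart folder's name. As a hedged sketch (my own helper; on a live pool drive, stop the DrivePool service first), it amounts to:

```python
import os

def hide_poolpart(drive_root):
    """Rename the hidden poolpart.* folder to xpoolpart.* so DrivePool
    no longer recognises the drive as part of the pool.
    Returns the new folder name, or None if no poolpart folder is found."""
    for name in os.listdir(drive_root):
        if name.lower().startswith("poolpart."):
            os.rename(os.path.join(drive_root, name),
                      os.path.join(drive_root, "x" + name))
            return "x" + name
    return None
```

Reversing it (removing the 'x') would let DrivePool see the drive as a pool member again, which is exactly what you're trying to avoid here.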

  3. This thread by MitchC goes into the file indexing issues with DrivePool.

TLDR: DrivePool currently generates its own file index table for files opened from the pool, held in RAM, with each file assigned an ID that increments from zero as files are opened (edit: the first time after each boot), renamed or moved within the pool. This is not how NTFS does it (a file should retain its FileID when renamed or moved within a volume, and across reboots), which means that until this is fixed any program that assumes normal FileID behaviour from a pool drive (because DrivePool presents the pool as an NTFS volume) may behave in an unplanned manner.
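To make the difference concrete, here's a toy model (my own illustration, not DrivePool's actual code) contrasting NTFS-style stable FileIDs with the reported pool behaviour:

```python
import itertools

class NtfsLikeIndex:
    """NTFS-style: a file's ID stays the same when it is renamed or
    moved within the volume, and persists across reboots."""
    def __init__(self):
        self._ids = {}
        self._next = itertools.count(1)

    def file_id(self, path):
        return self._ids.setdefault(path, next(self._next))

    def rename(self, old, new):
        self._ids[new] = self._ids.pop(old)  # the ID travels with the file

class PoolLikeIndex(NtfsLikeIndex):
    """Sketch of the reported pool behaviour: the table lives in RAM,
    and a rename/move hands out a fresh ID."""
    def rename(self, old, new):
        del self._ids[old]
        self._ids[new] = next(self._next)    # ID changes on rename

    def reboot(self):
        self._ids.clear()                    # in-RAM table is lost
```

A program caching FileIDs (e.g. a file indexer or sync tool) would see the second behaviour as the file having been deleted and a new one created.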

  4. 4 hours ago, red said:

    Does anyone know how exactly "Duplicate files later" works in conjunction with a CloudPool?

    It's the same for both local and cloud drives being removed from a pool: "Normally when you remove a drive from the pool the removal process will duplicate protected files before completion. But this can be time consuming so you can instruct it to duplicate your files later in the background."

So normally: for each file on the drive being removed it ensures the duplication level is maintained on the remaining drives by making copies as necessary and only then deleting the file from the drive being removed. E.g. if you've got 2x duplication normally, any file that was on the removed drive will still have 2x duplication on your remaining drives (assuming you have at least 2 remaining drives).

    With duplicate files later: for each file on the drive being removed it only makes a copy on the remaining drives if none already exist, then deletes the file from the drive being removed. DrivePool will later perform a background duplication pass after removal completes. E.g. if you've got 2x duplication normally, any file that was on the removed drive will only be on one of your remaining drives until the background pass happens later.

    In short, DFL means "if at least one copy exists on the remaining drives, don't spend any time making more before deleting the file from the drive being removed."
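The difference between the two removal modes can be sketched as follows (my own illustration of the logic described above, not DrivePool's actual implementation):

```python
def copies_needed_before_delete(files_on_drive, copies_elsewhere,
                                duplication_level, duplicate_later=False):
    """For each file on the drive being removed, how many extra copies
    must be made on the remaining drives before the file is deleted.
    copies_elsewhere maps file -> count of copies on remaining drives."""
    work = {}
    for f in files_on_drive:
        existing = copies_elsewhere.get(f, 0)
        if duplicate_later:
            # only ensure at least one copy survives; top up in a
            # background duplication pass after removal completes
            work[f] = max(0, 1 - existing)
        else:
            # restore the full duplication level before deleting
            work[f] = max(0, duplication_level - existing)
    return work
```

With 2x duplication, a file that already has one copy elsewhere costs one more copy in normal mode but zero copies in "duplicate files later" mode, which is where the time saving comes from.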

    Note #1: DFL will have no effect if files are not duplicated in the first place.

    Note #2: if you don't have enough time to Remove a drive from your pool normally (even with "duplicate files later" ticked), it is possible to manually 'split' the drive off from your pool (by stopping the DrivePool service, renaming the hidden poolpart folder in the drive to be removed - e.g. from poolpart.identifier to xpoolpart.identifier - then restarting the DrivePool service) so that you should then be able to set a cloud drive read-only. This will have the side-effect of making your pool read-only as well, as the cloud drive becomes "missing" from the pool, but you could then manually copy the remaining files in the cloud poolpart into a remaining connected poolpart and then - once you're sure you've gotten everything - fix the pool by forcing removal of the missing drive. Ugly but doable if you're careful.

  5. The default DrivePool balancing setting when Scanner is also installed is to evacuate any drive that Scanner detects as damaged, as a precaution to prevent possible/further loss of data.

    Scanner may be providing information about the reason for the damage, e.g. SMART warnings, and/or it may offer to attempt a fix, e.g. for file system issues.

    If it's not offering a SMART warning and it's not offering a fix, you could indeed try removing, formatting, scanning and re-adding the drives.

    You may find that DrivePool encounters problems removing the drives from the pool depending on the type of damage, at which point it'll tell you and you can decide how to proceed.

    If the problem is fixed and the drives are viewed as healthy again according to Scanner, they will be used again by DrivePool.

  6. Likely very little difference versus plugged directly into the motherboard.

    It is possible to evacuate/move files much faster if you don't mind stopping the DrivePool service while you manually move them yourself (from hidden poolpart to hidden poolpart) then restarting the service and requesting a re-measure. Basically a tradeoff between speed and comfort, if that makes sense.

  7. Honestly don't know; I was hoping to find some time to create a testbed and experiment using Local Disk providers as a stand-in (since I can toggle the VM volume read-only and see what happens) but this past week has been hectic and I needed sleep more - and I still couldn't be sure that Local Disk == Google Drive in terms of provider behaviour. If I had to guess, the potential outcome would be similar to unexpectedly losing the connection. Frankly I'd suggest finishing moving away before the deadline arrives or ensuring you have duplication/backups elsewhere; at the very least I'd suggest not having anything in the upload queue when the date arrives, and marking the drive read-only yourself beforehand.

  8. If you're simply wanting to keep the data that's already in your pool, just Add the new drive first then Remove the old drive second; DrivePool will automatically move the data to whatever drive(s) have the most free space (so most likely your new drive) as part of the process. If it turns out there isn't enough room on the remaining drive(s), it'll just stop at that point; you won't lose anything.

    VapechiK's method above is for if you want to move files from outside your pool to inside your pool as absolutely quickly as possible when the drive(s) those files are on are going to be added to the pool, and can also be used (with some changes) if you want to move data from an old pool drive to a new pool drive for the same reason (the Remove function is slower since it has to keep all of the pool's features going while moving the data in the background).

    TLDR: if you're not in a rush, Add the new drive first then Remove the old drive second, DrivePool will take care of moving your pooled data and you can keep using your pool for other things while it does it.

  9. While a SATA cable plugged directly into a SATA controller can in theory support a 6 Gb/s transfer rate (before overhead), the transfer rate on a mechanical drive isn't likely to come even close - e.g. a 16TB WD Red Pro's ideal sequential read rate benches a little over 2 Gb/s - and that can drop way down if the drive is being asked to pull many small files non-sequentially (as little as 0.1 Gb/s or less in certain scenarios). If you've an external enclosure doing SATA to USB, the USB connection can be a bottleneck as well.
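To put those rates in perspective, here's a rough back-of-envelope conversion (my own helper; decimal units, ignoring protocol overhead):

```python
def transfer_hours(data_tb, rate_gbps):
    """Approximate hours to move data_tb terabytes at a sustained
    rate of rate_gbps gigabits per second (decimal units)."""
    bytes_total = data_tb * 1e12          # TB -> bytes
    bytes_per_sec = rate_gbps * 1e9 / 8   # Gb/s -> bytes/s
    return bytes_total / bytes_per_sec / 3600
```

For example, fully evacuating a 16TB drive at an ideal 2 Gb/s sequential rate is already the better part of a day, and at 0.1 Gb/s (many small non-sequential files) it stretches to over two weeks.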

    That said, DrivePool is sluggish in evacuating drives; AFAICT it methodically removes each file before starting the next with no queueing optimisation, so it taking noticeably longer than it would take for you to do a bulk move of the contents yourself wouldn't surprise me.

  10. Could it be a lack of contiguous free space? You may have 800GB free in total, but any single incoming file still can't be bigger than the largest amount of free space on any single volume in the pool (e.g. with eight drives in the pool, that 800GB might be made up of 50+50+50+50+100+100+100+300GB).

    Could it be running into the error because it's attempting to pre-allocate space for multiple incoming files at the same time? E.g. if this Sonarr is telling DrivePool to allocate room for twenty incoming files of 7GB each simultaneously and the largest contiguous free space available is less than 140GB then there'd be an error because DrivePool defaults to choosing the poolpart with the most free space at the time but it won't be able to simultaneously fit all twenty there.

    Could the Sonarr software be mistakenly doing some sort of percentage checking? 800GB of 83TB is less than 1% free...

    (note too that if you're using duplication, 800GB free is more like 400GB free or less)
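The first two free-space possibilities above can be sketched as a simple check (my own illustration of the placement rule described, not DrivePool's actual allocator):

```python
def placement_check(free_per_volume, incoming_sizes):
    """Sketch: with most-free-space placement, no single file can exceed
    the most-free volume, and a batch of simultaneous pre-allocations
    all targeting that same volume must fit on it together."""
    largest = max(free_per_volume)
    if any(size > largest for size in incoming_sizes):
        return "file larger than any single volume's free space"
    if sum(incoming_sizes) > largest:
        return "simultaneous batch exceeds the most-free volume"
    return "ok"
```

So twenty simultaneous 7GB files (140GB) can fail even when the pool reports far more than 140GB free in total, if no single volume has 140GB available.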

  11. Hmm. I've just installed Solid Explorer on my android phone and tested adding a LAN/SMB cloud connection with username/password access to my pool machine; I was able to access the pool and non-pool shares on it (including a share of the pool drive itself) without issues. Do you have a Windows PC you can use to test re accessing the pool and non-pool shares to see if that has the same problem?

  12. Maybe check in Windows Disk Management just to make sure (if you haven't already)? Also, what do you see when expanding the drive's entry in Scanner (via the plus sign and then via the drop arrow for File system health)?
