Posts posted by santacruzskim

  1. Curious if there is a way to put the CloudDrive folder, which defaults to: (root)/StableBit CloudDrive, in a subfolder?

    Not only for my OCD, but in a lot of instances, Google Drive will start pulling down data, and you would have to manually de-select that folder per machine, after it was created, to prevent that from happening. When your CloudDrive has terabytes of data in it, this can bring a network and machine to their knees.

    For example, I'd love to do /NoSync/StableBit CloudDrive.  That way, when I install anything that is going to touch my Google Drive storage, I can disable that folder for syncing, and any subfolder I create down the road (such as the CloudDrive folder) would automatically not sync as well.

    Given the nature of the product and how CloudDrive stores its files (used as mountable storage pool, separate from the other data on the cloud service hosting the storage pool AND not readable by any means outside of CloudDrive), it seems natural and advantageous to have a choice of where to place that CloudDrive data.

    Thanks,

    Eric

  2. @ikon - I'm basically just monitoring disk activity in StableBit Scanner (and the disk "Performance" section of DrivePool), which shows pretty clearly what's going on - the active drive(s) are pinned while the inactive ones sit at or around zero, not to mention the fairly large difference in transfer speed shown in File Explorer and TeraCopy, depending on whether it's writing solely to the SSD or not.  I'm also being methodical about knowing which files I'm transferring and checking afterwards which drive(s) they ended up on, in the hidden PoolPart folders, just to make sure there's no funny business going on.

    Also, unlike my archive / offline pool, currently sitting at 22 drives of all different speeds and manufacturers, this "online" pool I'm testing and trying to configure is just 3 drives: 2 are identical in performance and the 3rd is the SSD, which performs totally differently. You really can just change settings in DP, start a transfer, and in about 20 seconds, after things level off, know with pretty good certainty what's going on.

    Hope that helps

     

     

  3. Thanks for the responses.  To be clear, I am trying to intentionally NOT have duplication on write to the pool for the benefit of performance, and I'm aware of the inherent risk involved - I'm a big boy, I can handle it :)

    The only drive I have set to "SSD" is the SSD.  If I have real-time duplication off, it doesn't touch the SSD. If I have it on, it writes to 1 SSD and 1 HDD.  What I want is for it to write solely to the SSD, then upon hitting a threshold or becoming idle, the pool balances, sending the data from the SSD to both spinners, in the background, without my intervention.

    FYI, splitting the SSD into 2 partitions creates some interesting results - each partition shows up independently as a drive I can add to the pool. If I add both partitions to the pool, the SSD Optimizer is not fooled - they're listed as one option in the plugin settings, reading …"E:/SSD_01, F:/SSD_02." However, the result is different - with RTD off it writes to the SSD almost exclusively (!), alternating between the 2 SSD partitions as files are copied onto the pool. But it will still occasionally choose an HDD as the target, making me think those decisions are being made by other balancers and not the SSD Optimizer (the SSD is always the least-full disk, for example, so maybe that's why DP is sending files there first?). With RTD on, it copies to the SSD and 1 HDD in parallel as it did before, with the performance hit, indicating DP itself is likely not fooled by splitting a single drive into partitions (as it shouldn't be!).

    So, good and bad - it appears to be working as I was originally hoping via this split-partition workaround, but not reliably, and only through this weird hack. Maybe I'm misunderstanding the purpose of this plugin to begin with?  Personally, I feel like my use case is the most logical use of an "SSD cache" in a pooled storage environment, but maybe my whole premise is off here, hence the confusion. However, what continues to perplex me is the fact that the "new file placement limit" (shown with red triangles on the pooled disks' horizontal bars) indicates no files should touch the HDDs until the SSD is 75% full, yet DP is still choosing the HDDs exclusively when RTD is off, avoiding the empty, speedy SSD altogether.

    Thanks for any help here. I'm closer to my goal, but just as confused!

  4. I've read through this whole thread and I've still come up a little confused with what I'm seeing in my (now extensive) testing of this plugin.

    MY SETUP:  Local Storage Over USB3 in 4-BAY Enclosure.  (2) 10TB Spinners + (1) 2TB SSD

    MY GOALS: Use the SSD as a write cache for the pool. Write data to the pool super fast; when the pool becomes idle or a threshold / trigger is reached, data on the SSD would be moved to both HDDs (in duplicate).

    MY OBSERVATIONS:

    "Real-time Duplication" OFF - Data does not touch the SSD.

    "Real-time Duplication" ON - Data gets written to the SSD AND one of the HDDs to maintain RTD. Consequently, performance drops to about equal to when I'm completely avoiding the SSD and writing directly to the HDDs in parallel, because at that point I'm saturating the USB3 Gen1 bus (and DP will still have to eventually empty the SSD out to the other HDD to balance the pool).

    This seems to be the inverse logic of a "Landing Zone." If I am explicitly choosing NOT to duplicate in real time, why would that cause the SSD to be ignored entirely, when that is precisely the use case where writing exclusively to the SSD would make the most sense, and the balancer rules are telling it to do exactly that (new file placement limits are at 0.0% on both HDDs)?  Is there a way to force the behavior I'm looking for, without having to micromanage my pool (looking for a solution, not a hack/workaround)?

    I've tried A LOT of different settings here, rebuilt the pool, reset DP, etc. I'm hoping there's a checkbox somewhere that I'm overlooking. I very much love DrivePool, BTW!

  5. Got it, thanks for the follow-up, Drashna.  I think this wraps back around to: given these limitations, what do I do now?

     

    In a backup scenario, I'd just pull the good file from the backup, confirm it's good, delete the corrupted file, and copy over the healthy one from the backup.  In this case we're dealing with live, or scheduled (though my pools duplicate live), file duplication, and I don't want to irritate or confuse that system.  Should I...

    1. find out which drive in the pool has the healthy copy and hunt that file down. (I'm guessing this is a manual task of looking in the folder hierarchy where the file should be located on each drive in the pool until you find it).
    2. then, copy the healthy file to a location outside of the pool
    3. then, access the file in the pool through the volume created by DP representing the pool and delete the file
    4. then, copy the good copy of the file back into the pool and let drivepool re-duplicate it
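    Step 1 (hunting the healthy copy down across the pool's drives) can be scripted. Here's a rough sketch, assuming only that each pooled drive holds a hidden folder named "PoolPart.<id>" mirroring the pool's folder hierarchy; the function name and the drive-root list are made up for illustration, nothing here ships with DrivePool:

```python
import os

def find_pool_copies(relative_path, drive_roots):
    """Search each drive root's hidden PoolPart.* folder for copies of a
    pooled file. relative_path is the path as seen inside the pool,
    e.g. os.path.join("Videos", "clip.mkv")."""
    copies = []
    for root in drive_roots:
        try:
            entries = os.listdir(root)
        except OSError:
            continue  # drive not present or not readable, skip it
        for entry in entries:
            if entry.startswith("PoolPart."):
                candidate = os.path.join(root, entry, relative_path)
                if os.path.isfile(candidate):
                    copies.append(candidate)
    return copies
```

    Running it over all pooled drive letters would list every copy, so you can tell the healthy one from the corrupted one by hash or size before doing steps 2-4 by hand.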

    Thanks Drashna.  I sing the praises of both products regularly (and will continue to do so), but I never thought about this part of the puzzle until I was forced to and I'm finding it less than intuitive.

  6. I recently had Scanner flag a disk as containing "unreadable" sectors.  I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified.  Turns out all 48 sectors were part of the same (1) ~1.5GB video file, which had become corrupted.

     

    As Scanner spent the following hours scrubbing all over the platters of this fairly new WD RED spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool.  Meaning, a perfectly good copy of the file was sitting 1 disk over.

     

    SO...

    1. Is Scanner not aware of this file?
    2. What is the best way to handle this manually if the file cannot be recovered?  Should I manually delete the file and let DrivePool figure out the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool?  Should I overwrite the bad file with the good one???

    IN A PERFECT WORLD, I WOULD LOVE TO SEE...

    1. Scanner identifies the bad sectors, checks to see if any files were damaged, and presents that information to the user. (Currently, I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and only then was presented with the list of damaged files.)
    2. At this point, the user can take action with a list of options which, in one way or another, allow the user to:
      1. Flag the sectors-in-question as bad so no future data is written to them (remapped).
      2. Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
      3. Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).

    Thanks for your ears and some really great software.  Would love to see what the developers and community think about this, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.

  7. Ya, I get that. The majority of my confusion actually comes from that same document, located here, which states, "SSD disks will receive all new files created on the pool.  It will be the balancer's job to move all the files from the SSD disks to the Archive disks in the background." That's definitely not what I'm seeing.

     

    (Drashna - you are also implying that if a folder is flagged for duplication, any file copied to it will be instantly copied to 2 drives, outside of any balancing tasks or balancer rules, correct? ...DP won't wait for the next scheduled re-balance to duplicate.)

     

    Also (UPDATE): I was seeing my SSD being completely ignored when transferring a folder of files (a few GB, multiple file sizes) to / from both duplicated and unduplicated folders in the pool.  However, I just got back from a trip, fired up the server, and things are working much more in line with the balancer's documentation. I had already tried a restart when I was troubleshooting, so hmmm... I guess as long as it's working properly now, I shouldn't care why it was acting up before :) !

  8. Just to be clear, if you have 1 "SSD" drive and multiple "archive" drives, the following should happen?

    1. New writes to a duplicated folder should go to the SSD; then, when DrivePool next initiates a balancing of the pool, a copy of the new files will be transferred to 2 archive drives, and the new files originally copied to the SSD will then be removed. OR are new writes simultaneously written to the SSD and 1 archive disk, with a second copy then made to another archive disk and the SSD's copy deleted upon balancing?
    2. New writes to a non-duplicated folder should go to the SSD; then, when DrivePool next initiates a balancing of the pool, a copy of the new files will be transferred to 1 of the archive drives (priority given based on other balancer settings), and the new files originally copied to the SSD will then be removed.
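    To pin down exactly what I'm expecting in those two cases, here's a toy model of that flow (purely my mental model of the documented behavior, not StableBit's actual code; the class, drive names, and emptiest-drive target choice are all made up):

```python
# Toy model: new writes land on the "SSD", then a background balance pass
# moves them to the archive drives -- two copies for duplicated folders,
# one copy for unduplicated ones -- and empties the SSD.

class Pool:
    def __init__(self, archive_drives):
        self.ssd = []                                  # files waiting on the SSD
        self.archives = {d: [] for d in archive_drives}

    def write(self, name, duplicated):
        self.ssd.append((name, duplicated))            # ALL new writes hit the SSD

    def balance(self):
        for name, duplicated in self.ssd:
            copies = 2 if duplicated else 1
            # send copies to the emptiest archive drive(s)
            targets = sorted(self.archives, key=lambda d: len(self.archives[d]))[:copies]
            for d in targets:
                self.archives[d].append(name)
        self.ssd.clear()                               # SSD emptied after the move

pool = Pool(["HDD1", "HDD2"])
pool.write("movie.mkv", duplicated=True)    # duplicated folder
pool.write("scratch.tmp", duplicated=False) # unduplicated folder
pool.balance()
```

    After balance(), the duplicated file sits on both HDDs, the unduplicated one on exactly one, and the SSD is empty - that's the behavior I keep expecting and not seeing.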

    I ask because I have a very simple test setup including 1 SSD and 2 archive drives, and while monitoring writes in both Scanner and DrivePool, the SSD is rarely, if ever, used (tested copying files to both duplicated and unduplicated folders). Sometimes the SSD will be used to copy the very 1st file in the transfer and nothing more. For the sake of identifying the issue I turned all other balancers off and have the following settings (see image). I've also tried turning off all of the "ordered placement" options.

     

    Am I misunderstanding this tool?

     

    thanks!

     

    SSD%20Optimizer.jpg <https://www.dropbox.com/s/d56zbf3x8g3nxm4/SSD%20Optimizer.jpg>
