Everything posted by santacruzskim

  1. Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CD with the power of DrivePool and making pools of pools! Thanks for following up. -eric
  2. Further, it would be great if there were an indicator of which sub-folder in Google Drive corresponds to which "drive" in CloudDrive. I'm sure the CloudDrive folder names are crazy for a reason, so maybe there's another way to quickly identify which is which? Thanks, Eric
  3. Curious if there is a way to put the CloudDrive folder, which defaults to (root)/StableBit CloudDrive, in a subfolder? Not only for my OCD, but in a lot of instances Google Drive will start pulling down that data, and you would have to manually de-select that folder per machine, after it was created, to prevent that from happening. When your CloudDrive has terabytes of data in it, this can bring a network and a machine to their knees.
     For example, I'd love to use /NoSync/StableBit CloudDrive. That way, when I install anything that is going to touch my Google Drive storage, I can disable that folder for syncing, and any subfolder I create down the road (such as the CloudDrive folder) would automatically not sync either.
     Given the nature of the product and how CloudDrive stores its files (used as a mountable storage pool, separate from the other data on the cloud service hosting it, AND not readable by any means outside of CloudDrive), it seems natural and advantageous to have a choice of where to place that CloudDrive data. Thanks, Eric
  4. Got it. Thanks for the follow-up, Drashna. I think this wraps back around to: given these limitations, what do I do now? In a backup scenario, I'd just pull the good file from the backup, confirm it's good, delete the corrupted file, and copy over the healthy one from the backup. In this case we're dealing with live, or scheduled (though my pools duplicate live), file duplication, and I don't want to irritate or confuse that system. Should I...
     - find out which drive in the pool has the healthy copy and hunt that file down (I'm guessing this is a manual task of looking in the folder hierarchy where the file should be located on each drive in the pool until you find it; a rough sketch of that search is included after these posts);
     - then copy the healthy file to a location outside of the pool;
     - then access the file in the pool through the volume created by DP representing the pool and delete the file;
     - then copy the good copy back into the pool and let DrivePool re-duplicate it?
     Thanks, Drashna. I sing the praises of both products regularly (and will continue to do so), but I never thought about this part of the puzzle until I was forced to, and I'm finding it less than intuitive.
  5. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified (a crude read check along those lines is sketched after these posts). It turns out all 48 sectors were part of the same (1) ~1.5 GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD Red spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitting one disk over. SO...
     Is Scanner not aware of this file? What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool figure out the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one?
     IN A PERFECT WORLD, I WOULD LOVE TO SEE... Scanner identify the bad sectors, check whether any files were damaged, and present that information to the user. (Currently I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and only then was presented with the list of damaged files.) At this point, the user could take action with a list of options which, in one way or another, allow the user to:
     - Flag the sectors in question as bad so no future data is written to them (remapped).
     - Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
     - Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).
     Thanks for your ears and some really great software. Would love to see what the developers and community think about this, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
  6. Ya, I get that. The majority of my confusion actually comes from that same document, located here, which states, "SSD disks will receive all new files created on the pool. It will be the balancer's job to move all the files from the SSD disks to the Archive disks in the background." That's definitely not what I'm seeing. (Drashna, you are also implying that if a folder is flagged for duplication, any file copied to it will be instantly copied to 2 drives, outside of any balancing tasks or balancer rules, correct? ...DP won't wait for the next scheduled re-balance to duplicate.)
     Also, an UPDATE: I was seeing my SSD being completely ignored when transferring a folder of files (a few GB, multiple file sizes) to/from both duplicated and unduplicated folders in the pool. However, I just got back from a trip, fired up the server, and things are working much more in line with the balancer's documentation. I had already tried a restart when I was troubleshooting, so hmmm... I guess as long as it's working properly now, I shouldn't care why it was acting up before!
  7. Would love a follow-up on this and any other experiences getting parity + drive pooling!
  8. Just to be clear, if you have 1 "SSD" drive and multiple "Archive" drives, should the following happen?
     - New writes to a duplicated folder should go to the SSD; then, when DrivePool next initiates a balancing of the pool, a copy of the new files will be transferred to 2 Archive drives, and the files originally copied to the SSD will be removed. OR are new writes simultaneously written to the SSD and 1 Archive disk, with a second copy then made to another Archive disk and the SSD's copy deleted upon balancing?
     - New writes to a non-duplicated folder should go to the SSD; then, when DrivePool next initiates a balancing of the pool, a copy of the new files will be transferred to 1 of the Archive drives (priority given based on other balancer settings), and the files originally copied to the SSD will be removed.
     I ask because I have a very simple test setup including 1 SSD and 2 Archive drives, and while monitoring writes in both Scanner and DrivePool, the SSD is rarely, if ever, used (tested copying files to both duplicated and unduplicated folders). Sometimes the SSD will be used to copy the very first file in the transfer and nothing more. For the sake of identifying the issue I turned all other balancers off and have the following settings (see image). I've also tried turning off all of the "ordered placement" options. Am I misunderstanding this tool? (A small check for which member drive actually holds newly written files is sketched after these posts.) Thanks! <https://www.dropbox.com/s/d56zbf3x8g3nxm4/SSD%20Optimizer.jpg>
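
A minimal sketch for the search step in post 4: locate every on-disk copy of a pooled file across the member drives and hash each one, so the healthy duplicate can be told apart from the corrupted one before anything is deleted. It assumes DrivePool's usual layout, where each member drive carries a hidden PoolPart.* folder mirroring the pool's directory tree; the drive letters and file path are placeholders, and this is just one possible approach, not a StableBit tool.

    import glob
    import hashlib

    MEMBER_DRIVES = ["D:", "E:", "F:"]           # physical disks in the pool (placeholders)
    RELATIVE_PATH = r"Videos\damaged-file.mkv"   # the damaged file's path inside the pool (placeholder)

    def sha256_of(path, chunk=1024 * 1024):
        # Hash the file in chunks; an OSError part-way through usually means unreadable sectors.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while True:
                block = f.read(chunk)
                if not block:
                    break
                h.update(block)
        return h.hexdigest()

    for drive in MEMBER_DRIVES:
        # Each member drive mirrors the pool's folder tree inside a hidden PoolPart.* folder.
        for copy in glob.glob(rf"{drive}\PoolPart.*\{RELATIVE_PATH}"):
            try:
                print(copy, sha256_of(copy))
            except OSError as err:
                print(copy, "READ ERROR:", err)   # likely the corrupted copy

The copy that reads cleanly (or whose hash matches a known-good backup) is the one to preserve; the rest of the manual steps in post 4 can then go through the pool's own drive letter.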
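
And a crude version of the read check described in post 5, assuming only that unreadable sectors surface as I/O errors when a file is read: walk a folder and try to read every file end-to-end, reporting any file that fails. This is not how Scanner works internally, just a quick manual way to confirm which files are affected; the root path is a placeholder.

    import os

    ROOT = r"E:\PoolPart.xxxxxxxx\Videos"   # placeholder; point this at the suspect disk or folder

    def is_fully_readable(path, chunk=1024 * 1024):
        # Try to read the whole file; bad sectors typically raise an OSError part-way through.
        try:
            with open(path, "rb") as f:
                while f.read(chunk):
                    pass
            return True
        except OSError:
            return False

    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            full = os.path.join(dirpath, name)
            if not is_fully_readable(full):
                print("Possibly damaged:", full)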
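
Finally, for the SSD Optimizer test in post 8: a small sketch that reports which member drive currently holds the files from a test copy, again by peeking inside the hidden PoolPart.* folders. Run it right after copying into the pool and again after a balancing pass; it should show whether new writes really land on the "SSD" disk first. Drive letters and the folder name are placeholders.

    import glob
    import os

    POOL_MEMBER_DRIVES = ["D:", "E:", "F:"]   # e.g. D: = "SSD" feeder disk, E:/F: = "Archive" disks (placeholders)
    TEST_FOLDER = "SSD-test"                  # pool folder the test files were copied into (placeholder)

    for drive in POOL_MEMBER_DRIVES:
        # List everything under this drive's PoolPart copy of the test folder.
        matches = glob.glob(rf"{drive}\PoolPart.*\{TEST_FOLDER}\**\*", recursive=True)
        files = [m for m in matches if os.path.isfile(m)]
        print(f"{drive}: {len(files)} file(s) from {TEST_FOLDER}")
        for f in files:
            print("   ", f)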