Covecube Inc.

santacruzskim

Members
  • Content Count: 11
  • Days Won: 2
  • Rank: Member

santacruzskim last won the day on April 12, 2019 and had the most liked content!

Profile Information
  • Gender: Male
  • Location: San Francisco, CA
  1. Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CloudDrive with the power of DrivePool and making pools of pools! Thanks for following up. -eric
  2. Further, it would be great if there were an indicator of which sub-folder in Google Drive corresponded to which "drive" in CloudDrive. I'm sure the CloudDrive folder names are crazy for a reason, so maybe there's another way to quickly identify which is which? Thanks, Eric
  3. Curious if there is a way to put the CloudDrive folder, which defaults to (root)/StableBit CloudDrive, in a subfolder? Not only for my OCD, but in a lot of instances Google Drive will start pulling down data, and you would have to manually de-select that folder per machine, after it was created, in order to prevent that from happening. When your CloudDrive has terabytes of data in it, this can bring a network and machine to their knees. For example, I'd love to do /NoSync/StableBit CloudDrive. That way, when I install anything that is going to touch my Google Drive storage, I can di
  4. Got it, thanks for the follow-up Drashna. I think this wraps back around to: given these limitations, what do I do now? In a backup scenario, I'd just pull the good file from the backup, confirm it's good, delete the corrupted file, and copy over the healthy one from the backup. In this case we're dealing with live, or scheduled (though my pools duplicate live), file duplication, and I don't want to irritate or confuse that system. Should I... find out which drive in the pool has the healthy copy and hunt that file down? (I'm guessing this is a manual task of looking in the folder hierar
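For what it's worth, the manual hunt described above can be scripted. Here's a minimal sketch: it assumes (as DrivePool documents) that each pool member disk holds its copies as plain files under a hidden PoolPart.* folder, and hashes each duplicate so a mismatch reveals the corrupted copy. The drive letters and file path below are hypothetical placeholders, not anything DrivePool prescribes.

```python
import hashlib
from pathlib import Path

# Hypothetical pool member drives; adjust to your own mount points.
POOL_DRIVES = [Path(r"D:\\"), Path(r"E:\\"), Path(r"F:\\")]
# Path of the damaged file relative to the pool root (placeholder).
RELATIVE_PATH = Path("Videos/clip.mp4")

def sha256(path, chunk=1 << 20):
    """Hash a file in chunks so multi-GB video files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_copies(drives, rel_path):
    """Locate every duplicate of rel_path inside each drive's hidden PoolPart folder."""
    copies = []
    for drive in drives:
        for poolpart in drive.glob("PoolPart.*"):
            candidate = poolpart / rel_path
            if candidate.is_file():
                copies.append(candidate)
    return copies

for copy in find_copies(POOL_DRIVES, RELATIVE_PATH):
    # The copy whose hash differs from the others is the damaged one.
    print(copy, sha256(copy))
```

Once the healthy copy is identified, you could copy it somewhere safe before letting DrivePool's duplication re-check run.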
  5. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified. It turns out all 48 sectors were part of the same ~1.5 GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD Red spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitt
  6. Yeah, I get that. The majority of my confusion actually comes from that same document, located here, which states, "SSD disks will receive all new files created on the pool. It will be the balancer's job to move all the files from the SSD disks to the Archive disks in the background." That's definitely not what I'm seeing. (Drashna, you're also implying that if a folder is flagged for duplication, any file copied to it will be instantly copied to 2 drives, outside of any balancing tasks or balancer rules, correct? ...DP won't wait for the next scheduled re-balance to duplicate.) Also (UPDATE)
  7. Would love a follow-up on this and any other experiences getting parity + drive pooling!
  8. Just to be clear, if you have 1 "SSD" drive and multiple "archive" drives, should the following happen? New writes to a duplicated folder go to the SSD; then, when DrivePool next initiates a balancing of the pool, a copy of the new files is transferred to 2 archive drives, and the new files originally copied to the SSD are removed. OR are new writes simultaneously written to the SSD and 1 archive disk, with a second copy then made to another archive disk and the SSD's copy deleted upon balancing? New writes to a non-duplicated folder should go to the SSD, then when DrivePool next i
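To pin down what I'm asking, here's a toy model of the first interpretation above (new writes land on the SSD only, and a later balancing pass moves them to the archive disks, one copy per duplication level). This is a sketch of my reading of the documentation, not Covecube's actual implementation, and all the disk names are made up.

```python
# Toy model: "SSD Optimizer" landing disk + background balancing pass.
class Pool:
    def __init__(self, ssd, archives):
        self.ssd = ssd              # name of the landing (SSD) disk
        self.archives = archives    # names of the archive disks
        self.files = {}             # file name -> set of disks holding a copy

    def write(self, name):
        """New writes go only to the SSD until the next balancing pass."""
        self.files[name] = {self.ssd}

    def balance(self, duplication=2):
        """Move each file off the SSD onto `duplication` archive disks."""
        for disks in self.files.values():
            if self.ssd in disks:
                disks.discard(self.ssd)
                disks.update(self.archives[:duplication])

pool = Pool(ssd="ssd0", archives=["arc0", "arc1", "arc2"])
pool.write("movie.mkv")
print(pool.files["movie.mkv"])   # only the SSD holds the file before balancing
pool.balance(duplication=2)
print(pool.files["movie.mkv"])   # two archive copies; the SSD copy is gone
```

Under the second interpretation, write() would instead place the file on the SSD and one archive disk at once, and balance() would only add the second archive copy. Which model matches DrivePool's actual behavior is exactly my question.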