Covecube Inc.


Showing results for tags 'duplication'.

  1. Hello everyone, I just optimized my DrivePool a bit and set a 128 GB SSD as a write cache. I had another 1 GB SSD left over, which I put into the pool, and I set DrivePool to write to it only the folders I use most and that should be quickly available. These folders are also duplicated (the other archive drives are HDDs). I'm not sure if this is working or if the SSD is useless for extra speed. How does DrivePool work: does DrivePool read from both the original and the duplicated folders if I use them from the client? Will DrivePool recognize which drive is faster and read from this fir [read-striping sketch below]
  2. I am a complete noob and forgetful on top of everything. I've had one pool for a long time, and last week I tried to add a new pool using CloudDrive. I added my six 1TB OneDrive accounts in CloudDrive, and I may or may not have set them up in DrivePool (that's my memory for you). But whatever I did, it did not use DrivePool to combine all six CloudDrives into a single 6TB drive, and I ran out of space; something was duplicating data from one specific folder in my pool to the CloudDrive. So I removed the CloudDrive accounts, added them back, and did use DrivePool to combine the six accou
  3. I posted about this last year or earlier this year and never really got an answer, then it went away, and now it is back again... At seemingly random intervals (I have been unable to figure out what triggers it) my drive pool will decide that I didn't *really* want those folders to be duplicated and will turn off folder duplication. It then takes me literally days of waiting for it to reduplicate after I tell it to do so. It does not turn off all folder duplication - just certain folders, and not even the same ones consistently. For example, today it decided that I didn't need my TV
  4. Hello! I'm fairly new to StableBit, but liking the setup so far! I've got a few pools for various resources in my server. In case it's important, here's a quick idea of what I've got running: Dell R730 running Windows Server 2019 Datacenter, connected to a 24-disk shelf via SAS. Each drive shows up individually, so I've used DrivePool to create larger buckets for my data. I'd like to have them redundant against a single drive failure, but I know that means duplicating everything. I will eventually have a pool dedicated to my VMs, and VMs deduplicate very well since each one req
  5. Is it possible to schedule duplication so that it is not real time?
  6. 1.) Is there any way to override the default 3x duplication for reparse points? They're stored on my C Drive (I know, I know) which is backed up once a week. 2.) Are files verified when they're balanced? Is that why my poor drives are pegged at 100% utilization? Again, any way to turn this off?
  7. I suspect the answer is 'No', but have to ask to know: I have multiple gsuite accounts and would like to use duplication across, say, three gdrives. The first gdrive is populated already by CloudDrive. Normally you would just add two more CloudDrives, create a new DrivePool pool with all three, turn on 3X duplication and DrivePool would download from the first and reupload to the second and third CloudDrives. No problem. If I wanted to do this more quickly, and avoid the local upload/download, would it be possible to simply copy the existing CloudDrive folder from gdrive1 to gdrive2
  8. Hey everyone, new to the DrivePool game here, absolutely loving this software. It has made storage and backup so much easier. Sorry if these are stupid questions. 1. I seem to be a bit confused about how the "Folder duplication" process works. I have one of my movie folders set to duplicate; it's around 2.5 TB. However, when I turn on duplication, the duplicated total across all my drives is around ~5.7 TB. I don't have any other folders set for duplication, so why would this figure be so high? I guess I was expecting duplication to be around the same size as the folder I was duplicating (2.5 TB [worked arithmetic below]
  9. Not sure if this is possible or easy... Right now I have my pool 3x duplicated (upgraded from 2x duplicated) and I have 1.51 TB marked in dark blue, which is keyed as "unduplicated." Does that mean it's only 1x on the pool, or that it's 2x but not yet 3x duplicated? If you had colors for how many copies of a file are on the pool, that would be great! (i.e., red for a 1x copy, blue for 2x, green for 3x, etc.) * Of course, I'm not sure how this would work if folks are doing a full pool duplication like I am, instead of duplicating some folders only.
  10. I am a newbie to StableBit. My son has been using it for over a year and introduced it to me after I nearly lost 48 TB of data at home due to the poor Storage Spaces in Windows Server 2016 / Windows 10 [I'd never make that mistake again and pity anyone who does; they're asking for problems!]. Thank goodness for tape backups! Anyway, I installed it and added twenty-three 4 TB drives to create my pool, then began restoring data to the new pool. This afternoon I saw a warning that one of my 4 TB drives went from 100% health to 9% in 3 hours. I guess the stress of writing data was too much. I keep the server room at
  11. I recently had to remove a drive from a pool when I realized it was failing during a duplication check. The drive was emptied OK, but I was left with 700MB of "unusable for duplication" even after repairing duplication and remeasuring the pool. Then I decided to move a 3GB file off of the pool, and now it's saying it has 3.05GB of "unusable for duplication." But 700MB + 3GB != 3.05 GB... I'm lost. What is going on? How can I see what is unusable for duplication? Did I lose data? How do I fix this? EDIT: I also have 17.9GB marked "Other" but even after mounting each component drive
  12. Started a duplication of my 40 TB DrivePool and read this answer, which confused me and seems to contradict how I'm going about this. I made a 40 TB CloudDrive and thought that if I selected the original drives as unduplicated and the cloud drive as duplicated, and selected duplication for the entire array, it would put a copy of every file in the original pool on the cloud drive. If a drive failed I could easily rebuild from the cloud drive. Here are some settings. In that other example you told the user to make another pool with the original pool and the cl
  13. Just added two new 8TB HDD to my pool. All seemed OK before the addition. After the addition, balancing will not start and I have a "Duplication inconsistent" message. Here is what I have tried so far: 1) Settings -> Troubleshooting -> Reset all settings 2) chkdsk /f on all pool drives (everything was OK) 3) Ran duplication check within drivepool. 4) Multiple reboots None of these have helped. What can I try next?
  14. I've read Drashna's post here: http://community.covecube.com/index.php?/topic/2596-drivepool-and-refs/&do=findComment&comment=17810 However, I have a few questions about ReFS support and DrivePool behavior in general: 1) If Integrity Streams are enabled on a DrivePooled ReFS partition and corruption occurs, doesn't the kernel emit an error when checksums don't match? 2) As I understand it, DrivePool automatically chooses the least busy disk to support read striping. Suppose an error occurs reading a file. Regardless of the underlying filesystem, would DrivePool automatically s [read-fallback sketch below]
  15. Hello, I recently started upgrading my 3 TB & 4 TB disks to 8 TB disks and started 'removing' the smaller disks in the interface. It shows a popup: Duplicate later & force removal -> I checked yes on both and hit Continue. After 2 days it showed 46% as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage' -> no luck. Attempted to disable Pool Duplication -> infinite loading bar. Changed file placement rules to populate other disks first -> no luck. Google Drive uploads at 463 Mbps, so it goes d
  16. I'd like to pool 1 SSD and 1 HDD with pool duplication [a], and have the files written & read from the SSD [b], then balanced to the HDD with the files remaining on the SSD (for the purposes of [a] and [b]). The SSD Optimizer balancing plug-in seems to require that the files be removed from the SSD, which seems to prevent [a] and [b] (given only 1 HDD). Can the above be accomplished using DrivePool (without 3rd-party tools)? Thanks, Tom
  17. Maybe this is a feature request... I migrated (happily) from Drive Bender, and for the past year I've been very satisfied with the DrivePool/Scanner combo. However, one tool that Drive Bender had (that I miss) is the ability to run a duplication check. It would identify any files marked for duplication that it can't find a duplicate of. After that, I could force duplication and it would begin creating those duplicates. Of course, this is supposed to be automatic. I have my entire pool duplicated. But when one lone 3TB drive fails, I'd like to run a duplicate check to see if all the fil
  18. To clarify what I mean here: this is for when you want to ensure that one copy of a duplicated file is on a CloudDrive, but you don't care where the other copy is (it can be on any local disk). This can be confusing and a bit counterintuitive. Additionally, this will cause your system to want to continually rebalance, as the balancing ratio will always be off. To start off with, you'll need to micromanage the pool a bit. You'll need to create a rule for the folders that you want duplicated (or use "\*" for the entire pool). In this rule, select ONLY the CloudDrive disk. Hit "Ok". [placement sketch below]
  19. Hi, I am running DP 2.x with WHS 2011. I have 2 Pools of 2x2TB HDDs, duplication is set to x2 for everything. I backup everything using WHS 2011 Server Backup. One Pool (392 GB net) contains all the shares except for the Client Backups. The other (1.16 TB net) contains only the Client Backups. Of each Pool, I backup one underlying HDD. Given that the Client Backup database changes, the Server Backup .vhd fills up tracking changes and reaches its limit of 2TB. At that time, all Server Backups are deleted and WHS Server Backup starts over again with a full backup. This is fine in pri
  20. Hello, I moved most of my hard drives from my main PC to my storage / HTPC server. Currently I am running: a Windows RAID 1 (duplication) of two 3 TB drives (personal media up to 2011); 1 single 3 TB hard drive (personal other files); 1 single 2 TB drive which seems to be slowly failing (personal media 2012+); and I am adding two new 3 TB drives next week. All of those are backed up to the cloud with CrashPlan, which is re-synchronizing right now. I've heard a lot of good things about DrivePool, BUT I am really confused why I can hardly find any YouTube videos, like reviews or hands-on video
  21. I have a drive, C:, the system drive, which has suddenly decided to dump all its duplicated files on the other drives. Why? It doesn't seem very efficient. I've attached a screen capture of what I see. Richard
  22. Hi, I'm currently evaluating DrivePool to figure out if it's for me. So far, things are looking very positive. The flexibility compared to Storage Spaces and the newly added folder placement feature are exactly what I need. One thing I would like to do, however, is to place my user accounts on a pool for redundancy. I know I can redirect Documents and things easily within Windows, but this doesn't include AppData and all the other special folders. My plan is to keep the administrator account on my root install so that if the pool's not available I can still log in. I created a user acc
  23. The topic of adding RAID-style parity to DrivePool was raised several times on the old forum. I've posted this FAQ because (1) this is a new forum and (2) a new user asked about adding folder-level parity, which - to mangle a phrase - is the same fish but a different kettle. Since folks have varying levels of familiarity with parity I'm going to divide this post into three sections: (1) how parity works and the difference between parity and duplication [parity example below], (2) the difference between drive-level and folder-level parity, (3) the TLDR conclusion for parity in DrivePool. I intend to update the post
  24. So I recently updated my HTPC drivepool from 2x 320GB, 1x 1TB, and 1x 3TB hard drives by swapping the 320's and 1TB for 3x 3TB's. Decided to enable duplication for some sanity checking (would hate to have to redownload everything) and everything went smoothly. Old drives were removed from the pool quickly enough, new drives added, everything is duplicated and working fine. But I've noticed that DrivePool seems to be filling the original 3TB more than the other drives, instead of an equal distribution of files as I would expect. So, is this correct? I haven't changed any default balan
  25. DrivePool quit duplicating even though there is plenty of space remaining. I have 20 drives comprising a 42.3 TB pool. I have approximately 17 TB of data stored on the pool. Per the attached snapshot of the interface, there are 9.40 TB of duplicated and 12.5 TB unduplicated (and 10.4 GB of Other), even though duplication for the entire pool is selected. I started the duplication process by first going folder by folder, but it quit duplicating with less than half duplicated. I then turned on duplication for the entire pool to see if that would restart the duplicating, but no luck. I have tried
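A conceptual sketch for item 1's read question, in Python: the hypothetical pick_read_disk() below illustrates the general idea behind read striping (serving a read from whichever duplicate's disk currently looks cheapest). The disk names and stats are made up for illustration; this is not DrivePool's actual selection code.

    # Hypothetical per-disk stats; a real system would sample OS performance
    # counters (recent latency, queue depth) instead of using constants.
    DISK_STATS = {
        "ssd_cache":   {"avg_read_ms": 0.2, "queue_depth": 1},
        "archive_hdd": {"avg_read_ms": 8.0, "queue_depth": 3},
    }

    def pick_read_disk(duplicate_locations):
        """Pick the duplicate whose disk looks cheapest to read from:
        recent average latency weighted by current queue depth."""
        def score(disk):
            s = DISK_STATS[disk]
            return s["avg_read_ms"] * (1 + s["queue_depth"])
        return min(duplicate_locations, key=score)

    # A file duplicated on both the SSD and an archive HDD:
    print(pick_read_disk(["ssd_cache", "archive_hdd"]))  # -> ssd_cache

Under this model the SSD copy would win nearly every read, which is the behavior the poster is hoping for.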
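For item 8, the arithmetic under 2x duplication is worth writing out. A minimal sketch using the numbers from the post; the interpretation of the leftover ~0.7 TB is deliberately left open, since the snippet doesn't say what else lives on the pool:

    folder_size_tb = 2.5       # the one folder set to duplicate (from the post)
    duplication_level = 2      # "duplication on" normally means 2 copies

    expected_tb = folder_size_tb * duplication_level
    print(f"expected duplicated total: {expected_tb:.1f} TB")   # 5.0 TB

    reported_tb = 5.7          # what the pool UI actually shows
    print(f"unexplained remainder: {reported_tb - expected_tb:.1f} TB")  # 0.7 TB

So roughly 5.0 TB of the reported 5.7 TB is explained by two copies of the folder itself; only the remaining ~0.7 TB needs another explanation.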
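Item 14's second question describes a fallback pattern worth sketching: on a failed or corrupt read, retry from another duplicate. The code below expresses only the pattern the poster is asking about; it is not a statement of what DrivePool actually does, and the checksum parameter is a stand-in for whatever integrity information the filesystem provides.

    import hashlib

    def read_with_fallback(duplicates, expected_sha256):
        """Try each duplicate path in turn; return the first copy whose
        checksum matches, mimicking integrity-aware read fallback."""
        for path in duplicates:
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue  # unreadable copy: fall through to the next one
            if hashlib.sha256(data).hexdigest() == expected_sha256:
                return data
        raise IOError("no intact duplicate found")

Whether the kernel surfaces an ReFS integrity-stream error to the caller, and whether DrivePool then retries the other duplicate, is exactly the open question in the post.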
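A conceptual sketch of the placement goal in item 18: one copy of each duplicated file pinned to the CloudDrive disk, the other copy on whichever local disk has the most free space. This only illustrates the rule's intent; the disk names and free-space figures are invented, and DrivePool's real balancer is more involved.

    # Free space per disk in GB (hypothetical values).
    disks = {
        "CloudDrive": 4000,
        "Local_D":    1200,
        "Local_E":     800,
    }

    def place_duplicates():
        """Return the two disks a 2x-duplicated file should land on:
        copy 1 is forced onto the CloudDrive by the file placement rule,
        copy 2 goes to the local disk with the most free space."""
        pinned = "CloudDrive"
        local = max((d for d in disks if d != pinned), key=disks.get)
        return pinned, local

    print(place_duplicates())  # ('CloudDrive', 'Local_D')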
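To make item 23's parity-versus-duplication distinction concrete, here is a toy single-parity example in Python: the parity block is the bytewise XOR of the data blocks, so any one lost block can be rebuilt from the survivors, at the cost of one extra block rather than a full copy of everything.

    # Two data blocks plus one parity block (bytewise XOR of the data).
    a = bytes([0b10101010, 0x42])
    b = bytes([0b01010111, 0x99])
    parity = bytes(x ^ y for x, y in zip(a, b))

    # Simulate losing block b: rebuild it from a and the parity block.
    recovered_b = bytes(x ^ p for x, p in zip(a, parity))
    assert recovered_b == b

    # Storage cost for n equal-size data blocks:
    #   2x duplication -> n extra blocks; single parity -> 1 extra block.
    print("recovered b:", recovered_b == b)  # True

The trade-off the FAQ goes on to describe follows from this: parity is cheaper in space but must be recomputed on writes, while duplication costs more space and simply keeps full copies.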