Everything posted by beatle

  1. This pool is mostly used as a Steam library, and anything not Steam related is already backed up. Would not be a problem to redownload the library, just kind of a pain. Chkdsk /scan /v doesn't work on the pool drive since it's considered raw, but no errors were found on either of the underlying drives.
  2. Tried moving files to a newly created folder and setting duplication on the new folder. Same error. I can't even set folder level duplication on an empty, newly created folder.
  3. I'm attempting to make some changes on my pool on a Win11 machine (two NVMe partitions, separate drives), and I'm running into this error every time I try to enable or disable duplication on a folder. The pool seems to be behaving itself and duplicating data in folders where I have it set, but I'm not sure why I'm getting this error. All the underlying drives are healthy, and I've rebooted, disabled Windows antivirus, disabled read striping, and disabled real-time duplication. Not sure what the problem is.
  4. iDrive e2 error

    Maybe it works now, but my trial expired last week. What a shame.
  5. iDrive e2 error

    Well, I haven't tried it with Amazon S3, just iDrive. I basically gave up on this.
  6. iDrive e2 error

    I'm trying to create a new drive and connect it to an iDrive e2 bucket. I started by creating an access key on iDrive, then went into CloudDrive, created a new Amazon S3 drive, selected "Use Amazon S3 compatible third party service", and punched in the keys and service URL. I had to add https:// to the front of the service URL that iDrive provided. I hit Save and was greeted with this message: Not sure what this means, but it's like it's expecting this to be for Amazon and not a third-party provider that uses S3. I found this reference to making some changes in a json file to get it to work, but I think it might be outdated since it's over 2 years old: https://stablebit.com/Admin/IssueAnalysis/28540
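     For what it's worth, a quick way to sanity-check the keys and endpoint outside of CloudDrive is any S3-compatible client. Here's a rough Python/boto3 sketch; the endpoint URL and key values are placeholders, not the real ones iDrive hands out:

        # Rough sketch: verify an S3-compatible endpoint and keys outside of CloudDrive.
        # The endpoint URL and credentials below are placeholders.
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="https://example-region.idrivee2.example.com",  # placeholder; use the URL iDrive provides, with https:// in front
            aws_access_key_id="ACCESS_KEY_PLACEHOLDER",
            aws_secret_access_key="SECRET_KEY_PLACEHOLDER",
        )

        # If this lists the buckets, the keys and endpoint are fine and the problem
        # is on the CloudDrive side rather than with the credentials.
        for bucket in s3.list_buckets()["Buckets"]:
            print(bucket["Name"])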
  7. Right, as I mentioned, I would be a touch short. Though with everything duplicated I wouldn't lose data if a drive fails; I just wouldn't be able to automatically recover from the hazcon state of having some of my data on only one drive. I think of this like running RAID 5 without a hot spare. If I lose a drive from a RAID 5 group, the data is still there and accessible by reconstructing blocks from parity, but the rebuild to full redundancy can't happen automatically; a spare drive would have to be installed. With DrivePool, the data at risk before a full re-duplication would be whatever the remaining disks couldn't duplicate from the failed drive. To save a few bucks on electricity, I'm leaning towards having a few cold spares in drive caddies that I could pop into the box while a full-size replacement is sourced.
  8. I currently have 7 drives in my pool (3, 5, 8, 8, 8, 14, and 14TB) and I have just shy of 12TB free. My most full drive (a 14TB) has 8.26TB of data on it; if it were to crash, the other drives would need to pick up the duplication slack, and I think I could be just a touch short if I lose a big drive. Of course I wouldn't lose data, but I'd need another drive pronto to get out of hazcon state. How full does everyone else let their pools get?
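     To put numbers to "a touch short": if the fullest drive dies, its own free space disappears with it, and everything that was on it needs a new duplicate on the survivors. A rough back-of-the-envelope in Python using the figures above:

        # Back-of-the-envelope: can the pool re-duplicate after losing its fullest drive?
        # Figures (in TB) are the approximate ones from the post above.
        pool_free    = 12.0     # free space across the whole pool
        fullest_size = 14.0
        fullest_used = 8.26     # data currently on the fullest drive

        # The failed drive's own free space disappears along with it.
        free_after_loss = pool_free - (fullest_size - fullest_used)

        print(f"Free space left after the loss: {free_after_loss:.2f} TB")
        print(f"Data needing a new duplicate:   {fullest_used:.2f} TB")
        verdict = "short by" if free_after_loss < fullest_used else "covered, with"
        print(f"Pool is {verdict} {abs(free_after_loss - fullest_used):.2f} TB")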
  9. Hiding Drives

    Try this: https://winaero.com/blog/hide-drive-windows-10-file-explorer/
  10. I use DrivePool for my servers and love it, but performance there isn't that important. On my desktop, I want the fastest performance possible, and I'd also like to make better use of my two 500GB SSDs. I'd like to carve my OS off onto a separate partition and then pool the remaining space with the other drive. I'd then install my software, VMs, etc. on the pool. No need for duplication here; all my important stuff goes on my server. I suppose I could test this myself, but I'm sure I'm not the first person to do this. Is there a performance penalty to this configuration? If so, how much?
  11. Thanks, I figured that might be the case. No big deal. I use SnapRAID for protection, so there's no duplication in my pool. Now, what's the best way for me to remove the drive from the pool without rehydrating all of the data? I'd like to just keep the contents of this drive as they are and simply remove it from the pool. I'm thinking I should power down, pull it out, then remove the drive from DrivePool when the server reboots. I can then move the files out of the "poolpart" folder and back into the root of that drive.
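     In case it helps anyone else doing the same, here's a rough Python sketch of that last step: moving the contents of the drive's hidden PoolPart folder back to the drive root once the disk is out of the pool. The drive letter is a placeholder, and I'd make sure the DrivePool service isn't touching the disk before running anything like this:

        # Rough sketch: after the disk is out of the pool, move everything from its
        # hidden PoolPart folder back to the drive root.
        # The drive letter is a placeholder; the folder is normally named PoolPart.<guid>.
        import shutil
        from pathlib import Path

        drive_root = Path("E:/")                          # placeholder for the removed disk
        poolpart = next(drive_root.glob("PoolPart.*"))    # the hidden PoolPart.<guid> folder

        for item in poolpart.iterdir():
            target = drive_root / item.name
            if target.exists():
                print(f"Skipping {item.name}: already exists at the drive root")
                continue
            shutil.move(str(item), str(target))
            print(f"Moved {item.name}")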
  12. I might have a niche case here, but maybe others would benefit as well. I run Server 2016 with deduplication on one of my volumes. All of my backup jobs go there, and I get pretty good deduplication savings from all of the fulls and diffs. For ease of management, this drive is a part of my pool. I also run Scanner, and I know it will evacuate a drive if it detects SMART errors (a great feature). However, in this case if I try to move all of my backups off the deduplicated volume, it will need to rehydrate them all. Due to the savings, I'll need an extra TB to place them all elsewhere in the pooled drives. I'd like to be able to exclude this drive from this particular plugin. If I lose my backups due to a SMART error, no big deal. I guess I could also exclude the drive from the pool altogether and just run my backups to a separate drive... but I figured I'd ask if this were already possible with Drivepool.
  13. Scan resets?

    I have a couple of the popular 8TB WD external drives pooled together as the offline backup for my NAS. I only bring them online every few weeks to sync my NAS data to them, so they're off most of the time and, as a result, never get scanned by Scanner. On top of that, whenever a scan actually starts, they overheat and get throttled anyway. I decided to put them "through the wringer" and cool them with a fan so I could complete a scan and see how they do. Unfortunately, after a few hours the scan on a drive will restart and start over at 0%. At an average read speed of ~130MB/sec, one will take ~18 hours to scan. Any idea why the scan resets? The drive doesn't seem to drop off or lose USB connectivity.
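     For reference, the ~18 hours is just the capacity divided by the average read speed:

        # Rough scan-time estimate: one 8 TB drive at an average of ~130 MB/s.
        capacity_bytes = 8e12    # 8 TB
        read_speed     = 130e6   # ~130 MB/s
        print(f"~{capacity_bytes / read_speed / 3600:.0f} hours per full scan")  # ~17 hours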
  14. I did a restore of most of the files on one of the drives that was deduplicated. Seems like I got most of them restored. Then I went to do a sync with SnapRAID and noticed that it would not read a file and was basically hung up on it. I could not read the file through the pool or on the disk itself. Corrupted. I did a search for all video files and noticed that a lot of them would not build thumbnails in Explorer. All of those are corrupted. Since I did a backup after I enabled deduplication, all of my backup files are also corrupted. I'm not sure if there's anything that can be done at this point. It's not a total loss, but I'm thinking this is probably several terabytes' worth. Some files will be very difficult or impossible to replace. Any ideas?
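     For anyone else cleaning up after something like this: rather than relying on Explorer thumbnails, a more thorough check would be to try to read every file end to end and log whatever fails. A rough Python sketch (the root path is a placeholder):

        # Rough sketch: walk a tree and try to read every file end to end,
        # logging anything that fails. The root path is a placeholder.
        import os

        root = r"D:\Pool"    # placeholder: pool drive or a single PoolPart folder
        bad = []

        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        while f.read(8 * 1024 * 1024):   # read in 8 MB chunks
                            pass
                except OSError as err:
                    bad.append(path)
                    print(f"UNREADABLE: {path} ({err})")

        print(f"{len(bad)} unreadable files found")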
  15. So I'm looking at my files, and nearly all of the ones on the drive that was undergoing deduplication are inaccessible, even on the drive itself. This is pretty bad, but I think I have nearly all of them on my backups. Since I'm no longer running deduplication, and stopping and uninstalling the deduplication role went off without a hitch, I figured these issues would have resolved themselves. I'm guessing that's not the case. A handful of files on some of my other drives are also showing as "other." Is there a way to identify which files those are? I don't have any files outside of the poolpart folders on those drives. Just want to make sure I don't have more damage.
  16. Ok, I've been doing some digging and trying things out. I tried stopping the service and moving my files out of the poolpart directory, then starting it back up. I was then able to quickly remove the drive, then add it back by doing the reverse. I still had the same missing files when I restarted the service. Then I read about another person with similar "other" issues, and the suggestion was to install the latest beta. I did that and it got most of my files back in the pool, but not all. There are also some "other" files on some of my other drives. Thinking back, I tried out Windows deduplication on this box early Saturday. I disabled it since it basically got me nothing (mostly video files) and went about my business. I've since disabled deduplication on all drives and then removed the deduplication role. I think the two may be related, but I'm not sure what to do about reincorporating the "other" files into the pool.
  17. I have a pool on my R510 that spans 7 drives. They're mostly a mix of 2-3TB drives, with a small partition carved off my OS drive for some extra redundancy. I protect it with SnapRAID, and it has been running pretty well for the past 9 months. I recently upgraded my backup solution to a couple of 8TB external USB drives that I attach to my desktop, pulling files across with FreeFileSync. It seemed to be working well, but today when I went to watch a couple of Christmas specials, I noticed I could not read them from my server. I could read them fine from my backup drives (just sync'd last night), so I copied them back to my server from the backup drives and all seemed well. Then I spot-checked a few more movies on my server and noticed they were also "corrupt." I could not read them across the network or locally on my server through the pool. I attempted to read them at the file level by snooping inside the poolpart folder on the drives and could read them fine, so this seems to be related to the pool itself, and my data is actually intact. I've remeasured the pool, and it seems like one drive has a lot of data that is just considered "other" and is not actually incorporated into the pool, despite everything being in the poolpart folder. Any ideas on this one other than removing the drive and adding it back?
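     A quick way to confirm the data really is intact on the underlying disk while the pooled copy won't read is to hash the same file both ways. Rough Python sketch; both paths are placeholders for my layout:

        # Rough sketch: hash the same file through the pool and directly from the
        # underlying disk's PoolPart folder. Both paths are placeholders.
        import hashlib

        def sha256(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8 * 1024 * 1024), b""):
                    h.update(chunk)
            return h.hexdigest()

        pool_copy = r"P:\Movies\special.mkv"                 # as seen through the pool
        disk_copy = r"E:\PoolPart.xxxx\Movies\special.mkv"   # directly on the underlying disk

        for label, path in (("pool", pool_copy), ("disk", disk_copy)):
            try:
                print(label, sha256(path))
            except OSError as err:
                print(label, "failed to read:", err)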
  18. I'm planning to start pooling all my old disks and turning on duplication, moving on from SnapRAID so I can more easily add disks of any size to the pool. Anyway, I'm banking on basically halving the capacity of all my disks to determine usable capacity, but I vaguely understand that it's not that simple. FWIW, I'm planning to pool 3, 3, 3, and 5TB drives. I'd also like to be able to see the "usable" capacity of the pool instead of the raw capacity when using duplication, i.e., if I have 14TB of raw disk and duplication is set for the whole pool, can it show me 7TB of usable space?
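     From what I understand (my own rule of thumb, nothing official), the usable figure with x2 duplication isn't always half the raw total: the two copies of a file can never sit on the same disk, so one oversized drive can't be fully used for duplicates. A quick way to estimate it in Python:

        # Rough rule of thumb (my own reasoning, not official): with x2 duplication,
        # usable space is capped by half the raw total AND by the fact that a file's
        # two copies must live on different disks, so one oversized drive can't be
        # fully used for duplicates.
        def usable_x2(drive_sizes_tb):
            total = sum(drive_sizes_tb)
            largest = max(drive_sizes_tb)
            return min(total / 2, total - largest)

        print(usable_x2([3, 3, 3, 5]))   # 7.0: halving works for this mix
        print(usable_x2([3, 3, 14]))     # 6.0, not 10: the 14 TB drive can't pair with itself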
  19. Can you explain this one a bit more? I'm planning on adding an SSD to my pool which is all duplicated data. I don't want files to "hang out" on the SSD beyond just being used as a write cache. I'd like to have them offloaded to the pool pretty much immediately so they're all duplicated. Are you saying this is not possible based on the way the optimizer works? I think this is the same goal as the OP.