Covecube Inc.


About Nerva


  1. I keep having a problem with my home server -- after starting a download of a large (20+ GB) new torrent in my "Downloads" pool (just a pair of 1TB drives mirrored to each other), the entire server becomes unresponsive on the network. When I check Stablebit Scanner, I see heavy disk activity between the mirrored drives. I use the "preallocate torrents" option, so the instant a large torrent is created, it preallocates the file on the pool, and I think the particular way that is done causes DrivePool to "hog" the system when it tries to immediately duplicate the preallocated space. I'm wondering if maybe the torrent software, when it does the preallocation, is actually doing a ton of discrete allocations, and this "hogs" the system.
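To illustrate the guess above: if a torrent client preallocates by actually writing zeros in many small chunks, rather than with a single metadata-only file extension, every chunk becomes real I/O that a duplication layer then has to mirror. A minimal Python sketch of the two approaches (the function names and chunk size are mine for illustration, not anything from DrivePool or any particular torrent client):

```python
import os

# Fast preallocation: extend the file in one metadata-only operation.
# On NTFS this just moves end-of-file; no data blocks are written yet.
def preallocate_fast(path: str, size: int) -> None:
    with open(path, "wb") as f:
        f.truncate(size)

# Slow preallocation: explicitly write zeros in small chunks.
# Every chunk is a real write that a mirroring/duplication layer must
# copy, which could explain the burst of disk activity described above.
def preallocate_zeroed(path: str, size: int, chunk: int = 1 << 20) -> None:
    zeros = bytes(chunk)
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(zeros[:n])
            remaining -= n
```

Both end up with a file of the requested size, but the second one generates the full file's worth of writes up front, which a 1TB mirrored pair would feel immediately.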
  2. Since I posted that, I ran the file system scan a few times and it found some marginal files that were recovered, so now the file system checks out fine. However, it keeps saying the disk is damaged because there's a bad sector -- shouldn't it be able to just mark that as bad, remove it from use, and clear the error flag?
  3. So, I got a message saying that one of my older disks is damaged and I may have lost data. Fortunately, I use DrivePool with duplication. I had Scanner scan the disk and the file system, but it kept showing the error saying it was unable to read one cluster and that I may have lost data. So I ran Windows chkdsk with the option to scan for bad sectors -- it found some problems -- but Scanner continued to say the disk was damaged. I reran the Scanner scan of the disk and file system, but it still showed the error. I got fed up, removed the drive from the pool, and reformatted it (not using the quick format option). Scanner continued to claim the empty disk was damaged and scanned the file system. I've since re-added it to the pool, but Scanner just won't stop saying the disk is damaged.
  4. Is there a way to check what version I'm currently running? I can't seem to find it anywhere in the UI...
  5. I'm not sure what happened exactly, but I booted up my home server and noticed in My Computer that my pooled drives were half empty. I soon realized that much of the duplication had somehow disappeared. In theory the entire pool should have duplication, but currently (DrivePool is only 65% rechecked as I'm writing this) the legend under the pie chart shows ~8TB duplicated, ~13TB unduplicated, and ~17TB "other", but there's no "other" shown on the pie chart itself. Also, My Computer says the pooled drive is 35.4TB with 14.4TB free. The fact that My Computer is reporting that much free space (there should be less than 1TB free) is what really has me concerned...
  6. I ended up moving all the drives onto other (faster PCIe) controllers, but if/when I add more drives and need to use the Supermicro PCI controllers again, I will give that a shot.
  7. Scanner says it has over 6 years of use on it, so it is wayyy out of warranty and has lived a full life so to speak.
  8. I guess I might as well run the drive until it dies. In the past, with my main archive pool, when one of the drives goes offline, the whole pool becomes read-only -- but is that still the case if there is full redundancy? If the bad drive fails and the DVR pool becomes read-only, then I won't be able to record shows until I remove the bad drive from the pool.
  9. Well, I guess it is a good thing I am using the drives for DVR -- that's the least-critical data on my server -- it is why I use my oldest drives for today's rerun of Seinfeld and the local news report. The problem is the DVR workload, not the use of pool duplication. I record up to 200 GB of shows every day on a 2TB disk that is extremely fragmented, and obviously the 2TB drives are not handling it well. Basically it is a write many times, read occasionally application.
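Back-of-envelope arithmetic on those figures: recording 200 GB per day onto a 2 TB drive means the disk's entire capacity gets rewritten roughly every ten days, or about 36 times a year -- a punishing workload for an aging drive. A quick check of the numbers (decimal GB, figures taken from the post above):

```python
# Rough turnover arithmetic for the DVR workload described above.
daily_writes_gb = 200      # shows recorded per day
disk_capacity_gb = 2000    # a 2 TB drive, decimal GB

days_per_full_overwrite = disk_capacity_gb / daily_writes_gb
overwrites_per_year = 365 / days_per_full_overwrite

print(days_per_full_overwrite)  # 10.0
print(overwrites_per_year)      # 36.5
```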
  10. So, I've had a home server for over 10 years, and over the years what used to be six 1TB and four 2TB drives have steadily failed until I now have only three of each, but I have of course added newer, larger drives that now store the bulk of my data. The older, smaller drives of course use more watts/TB and take up SATA slots, but small drives still have their niche uses. I decided some months ago to take the remaining three 2TB drives, which had been used for long-term storage of "stuff I want to keep", and repurpose them for SageTV DVR storage -- using two of the drives in parallel for redundancy, and pulling the third drive out of the server to keep in reserve as a spare, so that I can expect to have a redundant 2TB DVR system for years to come. But since the reshuffle, one of the 2TB drives has steadily reported bad sectors in Stablebit Scanner. It probably had less than a dozen bad sectors when I switched it from archive duty to DVR, but now it has over 100 -- I eventually started logging the trend:

      Date         Bad sectors
      2016-10-25   32
      2016-10-29   37
      2016-11-01   40
      2016-11-03   43
      2016-11-06   45
      2016-11-08   47
      2016-11-09   48
      2016-11-14   57
      2016-11-17   69
      2016-11-20   72
      2016-11-21   80
      2016-11-24   84
      2016-11-25   98
      2016-11-28   101
      2016-11-30   114

      So, it is typically about 1 new bad sector per day, but occasionally there are bigger jumps. The other 2TB drive has 58 bad sectors, but it already had around 55 when I switched it over to being a DVR drive. At first I thought the other drive was just "catching up", but now it has blown past its brother and has twice the bad sectors. I'm wondering what the significance really is of this trend in bad sectors, and whether anyone wants to speculate on the underlying cause.
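For what it's worth, the logged figures can be averaged with a quick script (dates and counts copied from the log above). Because of the later jumps, the overall average actually comes out well above one new bad sector per day:

```python
from datetime import date

# The bad-sector log from the post: (date, cumulative bad-sector count).
log = [
    (date(2016, 10, 25), 32), (date(2016, 10, 29), 37),
    (date(2016, 11, 1), 40),  (date(2016, 11, 3), 43),
    (date(2016, 11, 6), 45),  (date(2016, 11, 8), 47),
    (date(2016, 11, 9), 48),  (date(2016, 11, 14), 57),
    (date(2016, 11, 17), 69), (date(2016, 11, 20), 72),
    (date(2016, 11, 21), 80), (date(2016, 11, 24), 84),
    (date(2016, 11, 25), 98), (date(2016, 11, 28), 101),
    (date(2016, 11, 30), 114),
]

days = (log[-1][0] - log[0][0]).days   # 36 days of logging
growth = log[-1][1] - log[0][1]        # 82 new bad sectors over the period
rate = growth / days                   # average new bad sectors per day

print(round(rate, 2))  # 2.28
```

An accelerating rate like that (rather than a steady one) is usually taken as a sign the drive is actively degrading, not just remapping a fixed patch of weak sectors.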
  11. Any update on this? I had to suspend automatic scanning of drives because Scanner thinks the imaginary extra space is bad sectors.
  12. Seems to me that a "Stablebit Defrag" would be a great idea for a new product -- it would complement both Scanner and Drivepool rather well -- and if they're all developed by the same company, in theory there shouldn't be any fear of bad interactions between them.
  13. I've read that, with files being as large as they are these days and with HDDs having NCQ, defragmentation offers little to no benefit and just puts more wear on the HDD. Is this true? If not, what are the best defragmentation programs these days? I forget if it was the old Norton or Central Point, but back in DOS times, there was one that actually had a "prioritize programs and folders with full file reorder" option -- it would stick all your directory info and .exe's at the front of the drive, and literally reorder every file and every cluster on the drive from end to end, in one long session. Probably overkill, but it was cool to watch.