
ezelkow1

Members
  • Posts

    14
  • Joined

  • Last visited

Everything posted by ezelkow1

  1. I brought it up via support and they gave me some options, so for future reference:

     1) Run chkdsk on each volume
     2) Disable BitLocker detection (https://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings)
     3) For SMART queries via Scanner, enable SMART throttling

     In my instance I had already run chkdsk on all of my volumes, so I moved on to #2. When I checked the settings.json it was actually just full of NULs, so I first stopped the pool, deleted the file, and restarted the pool to let it generate a valid one; then I disabled the BitLocker detection. One of those two things, either having an invalid settings.json or disabling the BitLocker detection, appears to have fixed it for me.
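A quick way to confirm the kind of corruption I hit, before deleting anything, is to look at the file's raw bytes: mine was non-empty but contained nothing except NUL bytes. This is just an illustrative sketch; check where your own DrivePool install keeps its settings file, since the path varies.

```python
from pathlib import Path

def looks_nul_corrupted(data: bytes) -> bool:
    """True if the content is non-empty but entirely NUL bytes,
    which is how my settings.json looked after the corruption."""
    return len(data) > 0 and all(b == 0 for b in data)

# Demo on in-memory examples:
corrupted = looks_nul_corrupted(b"\x00" * 64)        # what my file looked like
healthy = looks_nul_corrupted(b'{"Setting": 1}')     # a normal JSON file

# To check a real file (path is hypothetical -- locate your own settings.json):
# looks_nul_corrupted(Path(r"C:\path\to\Settings.json").read_bytes())
```

If it comes back True, the stop-service / delete / restart sequence above regenerates a valid file.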
  2. Also, to add to this: I have disabled my network adapter and completely stopped the Search service, so I see no open file handles whatsoever on the DrivePool drive, yet I still get this 5 s access light going off and can hear the heads move. Just so I know I'm not going crazy, Task Manager does show the periodic activity; the one shown below is the DrivePool 'drive', which I think is just a 2 TB sized placeholder. If I go under the individual USB disks in the pool that have no drive letter, I also see the same accesses at the same time. So it's like the entire pool is being pinged every 5 s.
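If you export access timestamps (e.g. from a Process Monitor capture), a few lines can confirm whether the activity really is a fixed heartbeat rather than random I/O. The 5 s period is from my observation; the sample timestamps below are made up for illustration.

```python
def access_intervals(timestamps):
    """Gaps (in seconds) between consecutive access timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def is_periodic(timestamps, period=5.0, tolerance=0.5):
    """True if every gap is within +/- tolerance of the expected period."""
    gaps = access_intervals(timestamps)
    return bool(gaps) and all(abs(g - period) <= tolerance for g in gaps)

# Sample timestamps (seconds) resembling the ~5 s heartbeat seen in Task Manager:
sample = [0.0, 5.1, 10.0, 14.9, 20.1]
periodic = is_periodic(sample)
```

A steady True here across all disks in the pool points at one polling source rather than normal file activity.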
  3. I have a pool of 5 drives, all attached via a USB enclosure. Up until recently the drives would sleep after ~15 min per the enclosure's firmware setting. However, something recently seems to be doing some activity about every 5 s, keeping the drives permanently awake (it started around the time I upgraded to the DrivePool beta, though I'm definitely not saying it's related; it could be something else entirely). I've tried investigating via Resource Monitor and all it shows is explorer monitoring a handful of directories on there, as well as SearchIndexer (even though I have disabled searching on the pool, and either way it's all been indexed already). So I wasn't sure if anyone had other ideas for tracking down what is keeping my drives awake. I did also recently update to the latest WSL preview, but just in case I tried turning off WSL automounting and that made no difference. Any help is appreciated.
  4. Did some more debugging today and got it narrowed down to a single drive in my pool (the largest, so with balancing it had apparently become the default target for a while). That one drive was showing the slow writes while all the other drives were not. It then dawned on me that I had probably not actually done a cold reboot of my drive enclosure since doing this new build, only swapped the connection. I did a cold reboot on it after the machine had come up, and now it appears to be at full speed on all drives. So something with either the enclosure, or the enclosure<->new machine being confused? Who knows. Either way, derp: don't forget to check the simple things.
  5. I'm good with just the write speed of my normal HDDs, since all the ones I have can top out at around 180-190 MB/s; I just need them to be able to keep up with gigabit downloads, which they can do easily. But when their write speed is cut down to 1/3, it becomes noticeable, especially when it seems tied to DrivePool and not the hardware/OS/anything else, since I can get their full write speeds just by assigning a drive letter to them directly.
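The arithmetic behind "keeping up with gigabit" is easy to sketch. Numbers are from my setup; 1 Gb/s works out to about 125 MB/s before protocol overhead.

```python
# Gigabit line rate in MB/s (1000 Mb/s divided by 8 bits per byte), ignoring overhead.
GIGABIT_MBPS = 1000 / 8            # 125 MB/s

drive_write = 180                  # MB/s, typical of my 7200 rpm HDDs
throttled_write = drive_write / 3  # roughly what I was seeing through the pool

keeps_up_normally = drive_write >= GIGABIT_MBPS       # 180 >= 125
keeps_up_throttled = throttled_write >= GIGABIT_MBPS  # 60 < 125
```

So at full speed the drives have headroom over a gigabit download, but at a third of their write speed they become the bottleneck.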
  6. I was fully updated on my last build, but to double-check today I also tried reverting to an older build I had stashed on a backup, and it showed the same issue. I opened a ticket today and submitted logs.
  7. Also, to add to this: if I assign a drive letter directly to one of the drives in my pool and then attempt writes to that drive, they also run at full speed, ~180 MB/s. So it really seems to be pointing to something in the DrivePool driver/software causing this write slowness.
  8. I had been using DrivePool fine on my previous build, which was an Intel-based machine. Its read/write speeds to all the drives in my storage box were acceptable for USB 3.0. However, I recently did a new build, AMD this time. The pool got recognized correctly and works on the new build, but write speeds are much lower, and also lower than another external drive I have as just a backup drive (that one is USB 3.0 and a 5400 rpm drive, whereas my storage box is all 7200 rpm drives). On this new build the pool will read at the usual full speed, ~150 MB/s, but write speeds hover around 48-50 MB/s, much lower than the nearly symmetrical speeds I get on my other drive that is not part of the pool. I have gone through and set performance mode, disabled quick removal, and enabled write caching on all the drives in the pool. Just seeing if anyone else has seen this? Any ideas as to what might be happening? I've probed it with various USB utilities and it appears that my storage box itself is running at USB 3.0 fine; it's just these low write speeds I can't account for.
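To compare the pooled drive against the same disk with a direct letter, a crude sequential-write timing is enough to reproduce the numbers above. This is a rough sketch, not a proper benchmark (no unbuffered I/O, single run); the paths in the example are placeholders for whatever drives you want to test.

```python
import os
import time

def write_speed_mb_s(path, total_mb=256, chunk_mb=4):
    """Crude sequential write benchmark: write total_mb of zeros in chunk_mb
    chunks, fsync so the data actually reaches the disk, and report MB/s."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the scratch file
    return total_mb / elapsed

# Example (placeholder paths):
# print(write_speed_mb_s(r"P:\bench.tmp"))  # the pool drive
# print(write_speed_mb_s(r"E:\bench.tmp"))  # same physical disk, direct letter
```

A large gap between the two numbers on the same physical disk is what pointed me at the pool driver rather than the hardware.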
  9. It managed to get cleared after I did a full reinstall of Scanner, wiping out all the settings and letting it start a new scan. So something must have happened during my first install, maybe a scan started/stopped at a bad time? I'm not sure, but after that it showed the disk as being fine.
  10. ^ This. I had this for a while on my system and just didn't bother to fix it, e.g. a Samsung SSD reported as a WD HDD; I had about 3 drives that were wrong. All you have to do is go into Device Manager, uninstall the drive, and reboot. Windows will pull the correct names and still maintain drive letters and whatnot.
  11. I'm also wondering why it keeps reporting that the file system is damaged. I've run chkdsk on it multiple times and it's found no errors, but Scanner keeps reporting it has issues.
  12. I think I managed to find a way using TAKEOWN and ICACLS, since anything via a GUI just kept creating all sorts of errors and was basically unusable. So I'm letting this run to see if all permissions get reset properly. What I'm more concerned about, though, is: is this a common occurrence when just hitting the repair button? That you basically lose all ownership of the drive and have to completely repair permissions? It gives me pause about hitting anything automated in the Scanner UI.
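For reference, the TAKEOWN/ICACLS pass looked roughly like the sketch below: take ownership recursively, then reset ACLs to the inherited defaults. The flags are the standard documented ones, but treat the exact invocation as an assumption and try it on a small folder first; the D: drive letter is just a placeholder, and both commands need an elevated prompt on Windows.

```python
import subprocess

def permission_repair_commands(drive="D:\\"):
    """Build the two commands used to retake ownership and reset ACLs.
    The drive letter is a placeholder -- point it at the affected volume."""
    return [
        ["takeown", "/F", drive, "/R", "/D", "Y"],  # take ownership, recursive, answer Yes to prompts
        ["icacls", drive, "/reset", "/T", "/C"],    # reset ACLs to inherited defaults, recursive, continue on errors
    ]

def run_repair(drive="D:\\", dry_run=True):
    for cmd in permission_repair_commands(drive):
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            subprocess.run(cmd, check=True)  # Windows-only; requires elevation

run_repair()  # dry run by default; pass dry_run=False to actually execute
```

The dry-run default is deliberate: resetting ACLs across a whole drive is exactly the kind of thing to review before running.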
  13. I got a notice that one of my drives had filesystem errors and needed to be repaired. I let it start, and eventually it said it failed. Now I can't access the drive at all. So what do I have to do to actually be able to use the drive again? Is all my data lost just by attempting a repair, when it was at least usable before? If I try to view it I get 'access denied'. I've tried rebooting and changing security permissions; nothing seems to work. I think the files do still exist, since when I tried changing permissions I could see it was about to go through every file on the drive and change them, but once it hit the first file it came up with a permissions error.
  14. Over the years I've thought about getting a JBOD enclosure, since I wanted something I could just add drives to as needed, and DrivePool seems like just the software to manage it without too much worry of data loss (i.e. only losing what's on a dead drive). I was wondering what happens with drives that have existing data, though. I tried the demo and added some existing drives, and saw it created an empty pool drive with whatever free space was available, but I did not see any sort of 'conversion' type of option. So is there a way to have it start transferring files off drives added to a pool so that they actually exist in the pool, letting you get rid of the existing drive letter?