SootStack

Members
  • Posts

    7
  • Joined

  • Last visited

Recent Profile Visitors

636 profile views

SootStack's Achievements

  1. I started having the same issue. I have a Node process that deletes unwanted files every hour (a sketch of such a job appears after this post list), and it leaves 'phantom' files in the folder: they've been deleted, but the pool is still reporting them as available. When I try to open them I get file-not-found errors (as expected). Version: 2.2.0.800. I'll run a check disk on each of the drives and check out the logging link. Update: when I re-measure, the phantom files are cleaned out. I ran check disk on all of the drives and they all passed. I also checked that the files had been deleted off the underlying drives, and they had.
  2. Ahah, that does make a lot more sense. Is there any way to configure global file placement? I did not see an option for it in the configuration.
  3. I had a hunch that maybe both options were the same. I double-checked that I had it set to percentage and then shrank the 8 TB drive to 6 TB; afterwards the plugin worked as expected.
  4. Hi, I'm trying to get all of my drives spinning while I restore 18 TB of data to my pool, but I can't seem to get it to use all drives for writing. I am using rclone to do the transfer, set to 10 concurrent transfers, and even stopping and restarting rclone doesn't change the file placement. I'm using only the Disk Space Equalizer plugin, equalizing by percentage; all other balancers are disabled. There are four 6 TB drives and one 8 TB drive. So far it has written ~300 GB to the 8 TB drive only. Are there other settings required for this plugin to work? My goal is simply to have it round-robin write files to all disks without regard to free space remaining. DrivePool 2.2.0.651 and Disk Space Equalizer plugin 1.0.2.4.
  5. I agree; it was something I kept telling myself couldn't happen as well. Yet 100% of the time when I shut down the service it worked, and 100% of the time I turned it back on it failed. I ran Process Monitor and I can confirm you're right: it never specifically touches the files in C:\ProgramData\Docker\. Further testing: with the DrivePool service running, if I remove the SSD cache volume from the pool I can no longer reproduce the issue. Now the weird bit: my hypothesis was that the issue would return after re-adding the SSD cache volume to the pool with the DrivePool service running, but I am still unable to replicate it. I also tried rebooting and could no longer replicate the issue.
  6. Hello, DrivePool seems to be locking files as they're created on drives that are not part of the pool, and it's causing Docker to fail to pull or load any images. Is there any way to stop DrivePool from reading/locking files outside of the pool? When I disable the DrivePool service, Docker works correctly, and as soon as I start the "StableBit DrivePool Service" the problem immediately returns. (A rough sketch for probing which Docker files are locked appears after this post list.)

     docker version
     Client:
      Version:      17.05.0-ce-rc1
      API version:  1.29
      Go version:   go1.7.5
      Git commit:   2878a85
      Built:        Tue Apr 11 20:55:05 2017
      OS/Arch:      windows/amd64

     Server:
      Version:      17.05.0-ce-rc1
      API version:  1.29 (minimum version 1.24)
      Go version:   go1.7.5
      Git commit:   2878a85
      Built:        Tue Apr 11 20:55:05 2017
      OS/Arch:      windows/amd64
      Experimental: false

     Drive configuration (SSD Optimizer is running):
      500 GB SSD: Windows volume -> C:\
      SSD cache volume (no drive letter assigned)
      4x WD Red 6 TB disks (no drive letter assigned)
      1x WD Red 8 TB disk (no drive letter assigned)
      One StableBit DrivePool -> D:\, made up of the SSD cache volume, the 4x WD Red 6 TB disks, and the 1x WD Red 8 TB disk

     docker pull microsoft/nanoserver
     Using default tag: latest
     latest: Pulling from microsoft/nanoserver
     bce2fbc256ea: Extracting [==================================================>]  252.7MB/252.7MB
     6a43ac69611f: Download complete
     failed to register layer: re-exec error: exit status 1: output: ProcessUtilityVMImage C:\ProgramData\Docker\windowsfilter\d4d43f11aa1cc5bbd0a1369bfc1af1491ab77c8d906a89efee5186f7a6b18084\UtilityVM: The process cannot access the file because it is being used by another process.
  7. This is also my exact intended usage, so +1 for that request.
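
A minimal Node/TypeScript sketch of the kind of hourly cleanup job described in post 1 above. The folder path and the one-hour age cutoff are illustrative assumptions, not details taken from the post.

    // Hypothetical hourly cleanup job of the kind described in post 1.
    // POOL_TEMP_DIR and MAX_AGE_MS are assumptions for illustration only.
    import { readdir, stat, unlink } from "node:fs/promises";
    import { join } from "node:path";

    const POOL_TEMP_DIR = "D:\\Pool\\temp";   // folder on the pool (assumed path)
    const MAX_AGE_MS = 60 * 60 * 1000;        // files older than one hour count as unwanted (assumed rule)

    async function cleanup(): Promise<void> {
      const now = Date.now();
      for (const name of await readdir(POOL_TEMP_DIR)) {
        const path = join(POOL_TEMP_DIR, name);
        const info = await stat(path);
        if (info.isFile() && now - info.mtimeMs > MAX_AGE_MS) {
          await unlink(path);                 // delete the unwanted file
          console.log(`deleted ${path}`);
        }
      }
    }

    // Run once at start-up, then once every hour.
    cleanup().catch(console.error);
    setInterval(() => cleanup().catch(console.error), MAX_AGE_MS);

If a job like this deletes files but DrivePool still lists them, re-measuring the pool (as noted in post 1) is what brought the pool's view back in line with what was actually on the underlying disks.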
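
For the layer-registration failure in post 6, here is a rough Node/TypeScript diagnostic sketch that walks the Docker data directory and reports files that cannot be opened because another process is holding them. Only the directory path comes from the error message; the probing approach itself is an assumption, not a documented Docker or DrivePool procedure.

    // Walk C:\ProgramData\Docker\windowsfilter and report files that cannot be
    // opened for reading; on Windows, Node surfaces a sharing violation
    // (file held by another process without sharing) as the EBUSY error code.
    import { open, readdir } from "node:fs/promises";
    import { join } from "node:path";

    const DOCKER_DIR = "C:\\ProgramData\\Docker\\windowsfilter";

    async function probe(dir: string): Promise<void> {
      for (const entry of await readdir(dir, { withFileTypes: true })) {
        const path = join(dir, entry.name);
        if (entry.isDirectory()) {
          await probe(path);                   // recurse into layer directories
        } else if (entry.isFile()) {
          try {
            const handle = await open(path, "r");
            await handle.close();              // opened fine, not exclusively locked
          } catch (err: any) {
            if (err.code === "EBUSY") {
              console.log(`locked: ${path}`);  // sharing violation: held by another process
            }
          }
        }
      }
    }

    probe(DOCKER_DIR).catch(console.error);

Running this once with the DrivePool service stopped and once with it started would show whether the failures correlate with the service, complementing the Process Monitor check described in post 5.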