SootStack

Members
  • Posts

    7
  • Joined

  • Last visited

Reputation Activity

  1. Like
    SootStack got a reaction from Antoineki in Disk space equalizer not working?   
    Hi,

    I'm trying to get all of my drives spinning while I restore 18 TB of data to my pool; however, I can't seem to get it to use all drives for writing. I am using rclone to do the transfer.
    I've got rclone set to do 10 concurrent transfers, and even stopping and restarting rclone doesn't change the file placement.
    I'm using only the Disk Space Equalizer plugin, equalizing by percentage. All other plugins are disabled.
    There are four 6 TB drives and one 8 TB drive. So far it has written ~300 GB to the 8 TB drive only.
    Are there other settings required for this plugin to work?


    My goal is simply to have it round-robin write files to all disks, without regard to the free space remaining.

    DrivePool 2.2.0.651
    Disk Space Equalizer plugin 1.0.2.4
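    For what it's worth, percentage-based equalization would explain exactly this symptom. Here is a minimal sketch (hypothetical names and logic, not DrivePool's actual code) of placing each new file on the disk with the lowest used-space percentage:

    ```python
    # Hypothetical sketch of percentage-based placement, NOT DrivePool's
    # actual algorithm: each new file goes to the disk with the lowest
    # used-space percentage.
    def pick_disk(disks):
        """disks: list of dicts with 'name', 'size', and 'used' in bytes."""
        return min(disks, key=lambda d: d["used"] / d["size"])

    def place_file(disks, size):
        """Place one file of `size` bytes; returns the chosen disk's name."""
        target = pick_disk(disks)
        target["used"] += size
        return target["name"]

    TB = 10**12
    # Example pool: four 6 TB drives assumed half full, one empty 8 TB drive.
    pool = [{"name": f"6tb-{i}", "size": 6 * TB, "used": 3 * TB} for i in range(4)]
    pool.append({"name": "8tb", "size": 8 * TB, "used": 0})

    # Simulate ten 100 GB writes: all land on the 8 TB drive, because its
    # fill percentage stays lowest until it catches up with the others.
    targets = [place_file(pool, 100 * 10**9) for _ in range(10)]
    print(targets)
    ```

    If DrivePool behaves anything like this, writing only to the 8 TB drive at first is the expected outcome of "equalize by percentage" on a mixed-size pool, not a round-robin across all disks.
    
    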
  2. Like
    SootStack got a reaction from KiaraEvirm in Drive pool interfering with Docker for windows   
    Hello,

    DrivePool seems to be locking files as they're created on drives that are not part of the pool, and it's causing Docker to fail to pull or load any images. Is there any way to stop DrivePool from reading/locking files outside of the pool?

    When I disable the drive pool service then Docker works correctly, and as soon as I start the "StableBit DrivePool Service" the problem immediately returns. 
     
    docker version

    Client:
     Version:      17.05.0-ce-rc1
     API version:  1.29
     Go version:   go1.7.5
     Git commit:   2878a85
     Built:        Tue Apr 11 20:55:05 2017
     OS/Arch:      windows/amd64

    Server:
     Version:      17.05.0-ce-rc1
     API version:  1.29 (minimum version 1.24)
     Go version:   go1.7.5
     Git commit:   2878a85
     Built:        Tue Apr 11 20:55:05 2017
     OS/Arch:      windows/amd64
     Experimental: false

    Drive configuration (SSD Optimizer is running):
     500 GB SSD: Windows volume -> C:\
     SSD cache volume (no drive letter assigned)
     4x WD Red 6 TB disks (no drive letter assigned)
     1x WD Red 8 TB disk (no drive letter assigned)
     One StableBit DrivePool -> D:\ made up of the SSD cache, the 4x WD Red 6 TB disks, and the 1x WD Red 8 TB disk

    docker pull microsoft/nanoserver

    Using default tag: latest
    latest: Pulling from microsoft/nanoserver
    bce2fbc256ea: Extracting [==================================================>]  252.7MB/252.7MB
    6a43ac69611f: Download complete
    failed to register layer: re-exec error: exit status 1: output: ProcessUtilityVMImage C:\ProgramData\Docker\windowsfilter\d4d43f11aa1cc5bbd0a1369bfc1af1491ab77c8d906a89efee5186f7a6b18084\UtilityVM: The process cannot access the file because it is being used by another process.
     
  3. Like
    SootStack reacted to PsychoCheF in Need some kind of bandwidth control   
    I'm trying out your software suite as I am about to replace my old WHS1 with a new home server.
    Pool & Scanner seem to run just fine, but CloudDrive seems to mess with me :-(
     
    Problem 1:
    I set up a drive on box.com
    When the drive is set up I get a lot of error messages, in short:
    I/O Error
    CD drive h:\ having trouble uploading data
     
    Error: The request was aborted: The request was canceled
     
    The new drive gets a few MB of data marked "To Upload". What is this? The drive is still empty?
     
    Problem 2:
    I really need some way to control bandwidth.
    CD basically kills my internet connection trying to upload those MB of secret data. I just called my internet provider and yelled a bit because my up speed was crippled.
     
    I suggest an option for scheduled syncing of CD, with some extra options to:
    * Disable the schedule for X hours
    * Set max bandwidth to XXX (when not scheduled)
    * Set max bandwidth to XXX for X hours (when not scheduled)
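    A per-drive bandwidth cap like the one requested above is commonly implemented as a token bucket. A minimal sketch (illustrative only, not CloudDrive's actual throttling code):

    ```python
    import time

    class TokenBucket:
        """Minimal token-bucket throttle: allows at most `rate` bytes/sec
        on average, with bursts up to `capacity` bytes."""

        def __init__(self, rate, capacity):
            self.rate = rate          # refill rate, bytes per second
            self.capacity = capacity  # maximum burst size, bytes
            self.tokens = capacity
            self.last = time.monotonic()

        def throttle(self, nbytes):
            """Block until `nbytes` may be sent; returns seconds slept."""
            slept = 0.0
            while True:
                now = time.monotonic()
                # Refill tokens for the time elapsed, capped at capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return slept
                # Not enough tokens: sleep until enough have accrued.
                wait = (nbytes - self.tokens) / self.rate
                time.sleep(wait)
                slept += wait

    # Cap a hypothetical upload loop at 1 MiB/s:
    bucket = TokenBucket(rate=2**20, capacity=2**20)
    ```

    An uploader would call `bucket.throttle(len(chunk))` before sending each chunk; the scheduled/unscheduled modes above would just swap in different `rate` values.
    
    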
     
    Problem 3:
    A minor thing: UAC asks every time I start the CD UI whether I want to allow CD to make changes to my PC.
    Why? This doesn't happen when using the Scanner or Pool UI.
     
    And maybe a stupid question: I assumed that CD would mirror my cloud account, but it seems to reserve space for a new "virtual drive" on my PC?
  4. Like
    SootStack reacted to SootStack in Using a cloud drive and no other for duplicates   
    This is also my exact intended usage. 
    so +1 for that request. 
  5. Like
    SootStack reacted to Carlo in Using a cloud drive and no other for duplicates   
    I'm trying to follow the directions for this but am coming up at a loss.
    Any chance specific directions could be given to show how to set this up?
     
    I for example have about a dozen drives as part of my local storage and part of DrivePool.
    I have created a 50 TB drive with CloudDrive using Amazon Cloud Drive.
     
    What I want to be able to do is have it Duplicate all LOCAL media to the 50TB cloud drive.
    1) no duplicates should be local
    2) cloud drive is ONLY duplicates of the LOCAL media
     
    Essentially this would become an "automated backup" if it works.
     
    If this can be done I'd like to ask for an option to be added to the next version.
    1) The ability to set this up to NEVER delete a file in the cloud drive if it is locally deleted unless given permission.  Updates yes but no deletes.
    2) Should be configurable per cloud drive
    3) Have an option in the GUI to perform the deletes ONLY when clicked.
     
    This would help with user error (accidental deletion of a directory, for example) or if something happens to a local drive and the pool thinks the files are gone (but they're not supposed to be).
    Essentially what I'm asking for is an automated "XCOPY" operation that can create or update media but never delete it automatically.
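    The "XCOPY that never deletes" behavior described above can be sketched as a one-way mirror that copies new files and refreshes stale ones, but never removes anything from the destination. Paths and function names here are illustrative, not anything CloudDrive or DrivePool ships:

    ```python
    import shutil
    from pathlib import Path

    def mirror_no_delete(src: Path, dst: Path):
        """Copy every file under src into dst, overwriting copies that are
        older than the source, but never delete anything already in dst
        (XCOPY-style backup). Returns the list of copied targets."""
        copied = []
        for f in src.rglob("*"):
            if not f.is_file():
                continue
            target = dst / f.relative_to(src)
            # Copy if missing at the destination, or if the source is newer.
            if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy2 preserves timestamps
                copied.append(target)
        return copied
    ```

    Run against the cloud drive's mount point, this creates and updates but leaves locally-deleted files untouched in the cloud copy, which is the protection against user error Carlo is asking for.
    
    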
     
    IMHO backups (online or local) serve two purposes: they protect against hardware failure and against user screwups. I hate to admit it, but I've lost more data over time due to user goofs than hardware issues. By not automatically deleting data in the cloud, I get some protection against the second cause of data loss.
     
    Thanks,
    Carlo