Posts posted by Christopher (Drashna)

  1. getting shuffled around constantly to maintain equal free space on all disks (who cares)

    You mean other than the fact that a drive must have a certain percentage free to defragment it? And that a number of other operations (such as backups) need a certain amount of free space (for backups, it's needed for the snapshot)? Those are both good reasons. Also, there is the persistent rumor/report that NTFS suffers performance issues when it's ~90% full or more (I'm not sure about that one, though I have seen a lot more issues when a drive has less than 1GB free).

    Otherwise, you are pretty much spot on.

  2. Unfortunately, you won't be able to get SMART data.

    The issue is that ESXi (and even Hyper-V) presents a virtual disk to the guest (even for passed-through disks), and that virtual disk driver doesn't collect and pass on the SMART data.

    Also, since ESXi is based on *NIX, there isn't anything we can really do about it on that platform.

    If you were using Hyper-V, you could install StableBit Scanner on the host, but you'd want to disable the file system and surface scans for any disks passed through to the guest.

     

    I know that's not what you want to hear. Sorry.

    Well, first of all, what operating system do you plan on using? This makes a difference, because the built-in tools have some serious limitations (especially on WHS2011/Server 2008 R2 and earlier), as does the pricing for any third-party software.

     

    Also, I am guessing that you want actual backups, and not just redundancy/duplication.

     

    Also, if you do want to use robocopy, I'd recommend using the actual pool as the source path instead of a single disk, especially if you have duplication enabled. You may need to split up the tasks, but this method may be more reliable.

    Also, you could use the "dpcmd" tool to increase the duplication count to higher than 2 (so you'd have more than two copies).
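    As a rough illustration of both suggestions: the drive letters, folder names, and retry values below are assumptions for the example, and the exact dpcmd verb and arguments should be double-checked by running dpcmd with no arguments to see its built-in help.

```shell
:: Hypothetical sketch: back up from the pool itself (here P:\), not from an
:: individual pool disk, so duplicated files are only copied once.
:: Drive letters, paths, and retry/wait values are assumptions.
robocopy "P:\Shares" "E:\Backup\Shares" /E /COPY:DAT /R:2 /W:5 /LOG:"C:\Logs\pool-backup.log"

:: Raise the duplication count on a pooled folder to 3 copies.
:: Folder path is a placeholder; verify the verb against dpcmd's own help.
dpcmd set-duplication "P:\Shares\Photos" 3
```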

  4. Yes, if you are using duplication, then you absolutely need two feeder disks.

     

    Please set up two feeder disks, and then see if the Archive Optimizer still exhibits the same behavior.

     

    As for the "[Rebalance] Cannot balance. Requested critical only." message, that just means that the Service believes that the pool doesn't need balancing at this time.

  5. If it doesn't have the "PoolPart.xxx" folder, that would be why it's not part of the pool any longer. 

    The actual files on the pool are stored in that folder. It's also part of how we identify the disk as belonging to the pool (and to which pool).

     

    I'd recommend running data recovery on that drive/partition, and to not write anything to it until after you've done that.

  6. Check Disk Management to make sure the partition is there, and still "NTFS".

    If it is, check for hidden files. There should be a hidden "PoolPart.xxxx" folder there.

    Also, if you don't mind me asking, why do you have the drive partitioned into two parts? If you initialize the disk as "GPT", then you can create partitions larger than 2 TB without any issue. Both WHS2011 and DrivePool have absolutely no issue using GPT disks (as long as they're "basic" disks and formatted as NTFS).
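    To do the hidden-folder check above from a command prompt, something like this works; the drive letter is an assumption, and the GUID suffix on the folder name will differ per pool.

```shell
:: Hypothetical sketch: list hidden items in the drive root (here D:\)
:: to confirm a PoolPart.* folder is present. Drive letter is a placeholder.
dir /a:h "D:\"

:: attrib shows the Hidden/System attributes on the folder itself;
:: the wildcard stands in for the pool-specific GUID suffix.
attrib "D:\PoolPart.*"
```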

    Normal, could you send me your ticket number in a PM, and I'll double-check on your ticket for you.

    And I apologize that your ticket has gone without a reply.

     

    As for the image, it looks like it may be re-measuring the pool.

     

    Also, there is a much newer build out (2.0.0.345 as opposed to 2.0.0.320). Try updating to that, and see if you still have the issue.

    We don't usually push out notifications of new beta builds for the first day or so after they're released. We let those who are very eager about testing get them first, and then push out notifications, just in case any issues pop up.

     

    @p3x-749, it should have notified you about a couple of different updates after that. Would you mind submitting a ticket about that? (http://stablebit.com/Contact)

    And worst case, you could follow Alex's Twitter account for notifications about new builds (http://twitter.com/covecube).

  9. Yeah, it doesn't like to use existing folders. That is precisely what the "rebuild DrivePool Shares" option is for. It manually sets the folders to be on the pool (as well as re-adds other folders that you may have created).

  10. There isn't exactly an easy way to do this natively.

     

    However, there is an advanced configuration setting that creates a file while the service is "doing stuff". And there is a program called "LightsOut" that is, hands down, one of the best power management utilities available. It can watch for that file and prevent standby while the file exists.

     

    For the "Create the file" setting:

    http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings

    And the setting's name is "RunningFile_FileName". Set the value to a folder and file path (such as "C:\ProgramData\StableBit DrivePool\Service\running.bin"), and then restart the service (or computer).
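    To illustrate how a tool can use that file (this is a hedged sketch of the general mechanism, not LightsOut's actual implementation), a batch script gating standby might look like this; the file path comes from the example value above.

```shell
:: Hypothetical sketch: if the service's "running" file exists,
:: DrivePool is busy, so a standby script could bail out instead
:: of suspending the machine.
if exist "C:\ProgramData\StableBit DrivePool\Service\running.bin" (
    echo DrivePool is busy - skipping standby.
) else (
    echo Pool is idle - safe to suspend.
)
```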

     

    And here is the link for LightsOut:

    http://homeserversoftware.com/

    However, if you really don't want to get the additional software, please submit a feature request here: http://stablebit.com/Contact
