Posts posted by Jaga

  1. On 1/5/2016 at 3:04 PM, Christopher (Drashna) said:

    However, if you want, this behavior can be disabled, with the advanced config settings:

    http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings

    Set the "RemoveDrive_PoolReadOnly" value to "False" and restart the StableBit DrivePool Service, or reboot the system. 

     

    Trying to get fully up to speed on drive removal, missing drives, workarounds, read-only pools, etc.  Was this tag deprecated or retired?  I can't seem to find it or anything similar in the .json file for Drivepool 2.X.
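
    If it helps anyone else hunting through that file, a quick script like this can list any keys that look related (the settings path below is an assumption based on a default install - adjust it for yours):

        import json

        # Assumed default location of the DrivePool service settings file.
        SETTINGS = r"C:\ProgramData\StableBit DrivePool\Service\Settings.json"

        def find_keys(obj, needle, path=""):
            """Recursively yield the path of every key whose name contains `needle`."""
            if isinstance(obj, dict):
                for key, value in obj.items():
                    here = f"{path}.{key}" if path else key
                    if needle.lower() in key.lower():
                        yield here, value
                    yield from find_keys(value, needle, here)
            elif isinstance(obj, list):
                for i, item in enumerate(obj):
                    yield from find_keys(item, needle, f"{path}[{i}]")

        with open(SETTINGS, encoding="utf-8") as f:
            data = json.load(f)

        for hit, value in find_keys(data, "ReadOnly"):
            print(hit, "=", value)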

  2. Was doing some testing last night and today, adding volumes to my existing Pool, and noticed something odd.  When I used Disk Management in Windows to delete volumes off an SSD and then re-add them with a different size/letter, Drivepool doesn't show the new volume in its Non-Pooled list.

    Tried closing/opening the GUI, stopping/starting the DP service, refreshing in Disk Management, rescanning the Device list, etc.  Is a full reboot required for newly-provisioned system disks/volumes to show up in the non-pooled list, or do I maybe have a glitch in the system?  It's the same SSD I was using for DP caching from yesterday, though I've rebooted since removing that functionality for it.

     

  3. 4 hours ago, PetabytesPlease said:

    The software Icaros does not seem to do picture thumbnails but only videos which again does not  fit my needs

    Sure it will.  You have to change the Thumbnailing Preset in Icaros to "Most Known".  It will then include images and so on.  You can even add your own file types to that list, provided it supports them (or the OS does).

     

    4 hours ago, PetabytesPlease said:

    To confirm this only occurs with pooled drives so the issue clearly lies with stable bit, i should not have to get third party tools to resolve the "issue" with your software

    I think if it was an issue with Drivepool, everyone would see it and report it.  But I'll step back from the question now and let higher powers handle it.

  4. Wanted to post this here so the Stablebit team has a reference for a similar issue.

    Added an SSD Cache (a single volume on an SSD drive) to my existing pool, created a folder inside the regular pool structure, and then started a multi-part FTP download of a single file to the new folder using BitKinex.  After the download completed, BitKinex tried to re-assemble the parts in the folder and failed.

    If I then open Properties on that folder and choose the Security tab, I see the same error message BitKinex reported:  "permissions are incorrectly ordered which may cause some entries to be ineffective".  I then wiped out that folder and tried a new one for a standard file-copy test, which worked fine (as expected).  The SSD cache volume showed an increase in size proportional to the file copied to the test folder, and I fully expect it will hit the main pool drives soon.

    Whatever file-assembly mechanism BitKinex uses for multi-part FTP transfers seems to have an issue with the folder on the SSD cache volume.  It reports that "no such folder exists" and can't re-assemble the parts correctly.  I'm not sure if other multi-part FTP programs would have the same issue, but since I rely on BitKinex to saturate my Internet pipe (8MB/s vs 42MB/s is no contest), I'll have to disable the Drivepool SSD cache for now.

    If you'd like other info or tests run (or have suggestions for a workaround) let me know.

    Edit:  resolved - see post updates below.

  5. If you mean a child pool of the two 120s, then another higher-level pool of that child and the 500, with that pool added to the main pool as its SSD Cache...  the limit on the largest single file copied to the main Pool would be the smallest volume/drive in the child caches (i.e. 120GB), since you have 2x duplication enabled.  The child pool of 120+120 would still need to hold a copy of whatever file was being written to the SSD cache (and duplicated there), and it has a per-file limit of 120GB.

    If your largest file is over 120GB, getting another 500GB SSD to add as the 2nd cache child won't solve the underlying problem.  Software RAID 0 on the 120s will fix it without any issue, however.

  6. 2 hours ago, MitchellStringer said:

    Question, i don't really want to waste those SSDs or money, is it possible to pool the 2,  120gb SSDs and buy 2 more 120gb SSDs and pool those, so i can a 240gb 2x duplication SSD cache? So, (120+120) + (120+120)?

    Just tested this myself using two small partitions off a spare SSD I had in my server.  It turns out you actually can pool drives into a child pool, add that pool to your primary one, and then set up the SSD cache target to be the child pool.

    However - Christopher's statement still holds true:  you'll be limited by the size of a single volume for your largest single file copy to the main pool.  i.e. if it's a file over 120GB, then it wouldn't fit on any of your 120GB cache drives.  That's just how Drivepool works - a file must fit on a single volume/drive.

    As an alternative (and perhaps better) solution, you could use software RAID 0 to create a stripe from two 120GB SSDs, and another stripe from the two new 120GB SSDs.  You'd then have two valid 240GB SSD cache targets, which would support 2x duplication.  Performance might even go up with that implementation.
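
    To put rough numbers on it (my own illustration, not anything Drivepool itself reports):

        # Per-file ceiling: a single file must fit on one volume, so the cache's
        # limit is the size of its smallest member volume.  Figures are the
        # examples from this thread.
        cache_volumes_gb = [120, 120]            # two separate 120GB SSDs
        print(min(cache_volumes_gb))             # -> 120: largest file the cache accepts

        # RAID 0 presents the pair to Drivepool as one volume, so the ceiling doubles.
        raid0_volume_gb = sum(cache_volumes_gb)
        print(raid0_volume_gb)                   # -> 240: largest file after striping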

  7. Been using CCleaner now for ~12 years, never had an issue with it or security issues on a machine it was used with.

    Did you try stopping the Scanner service on the remote machine (the one the GUI was connecting to over the network), and then closing/re-opening Scanner on the local machine (the one that was stuck)?  That might force it to re-source a machine in its list to display, which would end up being the local instance.

    As a last resort you can reset the Stablebit Scanner settings.  Remember that it will wipe Stablebit Scanner data for drives/file systems on the computer where the reset is done, so you'll want to manually mark them as good after re-launching.

     

  8. Have you tried restarting the Scanner service?  Or a cold boot on the machine?  How about just logging out of that user account in Windows and logging back in?

    You can stop the Scanner service on the machine it was connected to, then see if the local copy reverts back to the local machine.

    You could also try a Repair on the Scanner installation through the Windows control panel/Settings area (depending on your version of Windows).

    If none of those work, both Scanner and Drivepool are very tolerant of being uninstalled and reinstalled.  Just be sure to clear temp files with something like CCleaner in-between.

  9. Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do.  I wouldn't let it go over 70°C however - Samsung, as an example, rates many of their SSDs for 0-70°C environmental conditions.  As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.

  10. Do any of the providers support FTP?  Clouddrive has an FTP layer built in that should work in that case.

    Can you mount any of those providers' spaces as a local drive (w/letter)?  If so, you could use the Local Disk feature to place an encrypted volume on them, and manage it with Clouddrive.

    And - Christopher/Alex are continually evaluating new providers for Clouddrive.  Never know when they'll add support for more.  You can use the Contact link on that page to request additional ones.

  11. The only tidbit of wisdom that I can offer is what I've been told before about how Drivepool "talks" to the pool drives.  It merely passes commands to them like Windows would to any NTFS drive (although there are some "wonky" things NTFS does that Alex had to work around).  I wouldn't think this would interfere with regular copy/move/delete commands, even on system folders.  @Christopher (Drashna) is the real WHS/WSE/Drivepool guru however, so it'd be best to wait and hear what he has to say.

    As for the rest of your criteria - even Windows 7 Pro + Drivepool can handle them, with the exception of WHS V1 style client backups.  The W7 Ultimate server I'm running on now (which I'm going to be upgrading to WSE 2016 soon) does all of them except the backup (currently using Macrium Reflect).  After I migrate to WSE, I *think* I'll be using Veeam for backups based on the research I've done so far.  If you haven't looked at it, it may be worth the time.

    And while WSE might seem to be overkill in a lot of circumstances, I value it highly for the learning experience it provides.  Some of what it does is "next level" stuff, which you don't get to see in a standard desktop operating system.  That comes in handy for me since I'm in the IT field professionally, though it may not for a lot of people.  Because of that, I feel it's worth the extra effort.  I'm going to be installing it on top of Hyper-V on bare metal...  just for the experience.  If you're into server-based installations for any reason, it's good to keep up on the current popular platforms.

  12. 1)  Yes, you would.  Drivepool doesn't care if they are Bitlocked - once they are unlocked it sees them like it would normal drives.  The key is to Bitlock the drives first, then create the pool using them after they are unlocked.

    2)  The pool won't come online until they are unlocked.  Usually you'd auto-unlock at boot time, at which point Drivepool would mount the pool and you can access it.  Either way is fine.

    3)  That's one reason I stopped using full-drive encryption (especially on the boot drive).  I had a bad experience many years ago where traditional repair tools couldn't fix issues with the disk, and I lost the build entirely.  It's less impactful on a data drive however, and I think the implementation now is a bit more stable and robust.  If memory serves, @Christopher (Drashna) also uses encrypted drives with Drivepool, so you have a good line of support for it if you choose to go that way.

    More info on Bitlocker+Drivepool here - straight from the horse's mouth:

     

    Another option is to use Stablebit Clouddrive with the Local Disk feature to create a fully encrypted volume, instead of using Bitlocker.  You could make two of these (one on each drive), then pool them using Drivepool.  Or you can create a normal 2-disk pool and then put an encrypted Clouddrive on it - the choice is yours.
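
    On the unlocking in 2) above - if you want to script a sanity check that the drives really are unlocked before the pool is used, something like this should work (manage-bde ships with Windows; the drive letters are placeholders):

        import subprocess

        # Placeholder letters for the two BitLocker'd pool members.
        for drive in ("D:", "E:"):
            # manage-bde is Windows' built-in BitLocker command-line tool.
            result = subprocess.run(["manage-bde", "-status", drive],
                                    capture_output=True, text=True)
            print(result.stdout)   # look for "Lock Status: Unlocked"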

  13. Were all of the drives empty and added to the Pool before files were placed on it?  Perhaps some of the files were manually copied into the hidden Poolpart-xxxxx folders in the root of each?  Hard to tell without a screenshot and knowing how you populated the pool.  :) 

    What you can do to even things out, is install the Disk Space Equalizer plugin for Drivepool and force a manual re-balance.  You go into "Manage Pool", then Balancing, then enable it in the Balancers tab.  Hit Save after that, and Drivepool will kick off a manual re-balance.  When it's done balancing, toggle that plugin off again.  If you have any doubts about whether or not the pool display is correct, just force a re-measure before starting the manual re-balance.
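
    If you want to see how lopsided the disks really are before re-balancing, a quick tally of each drive's hidden PoolPart folder does the trick (the paths below are placeholders - each disk's PoolPart suffix is unique):

        import os

        # Placeholder paths - point these at the hidden PoolPart.* folder
        # in the root of each pooled drive.
        pool_parts = [r"D:\PoolPart.aaaa", r"E:\PoolPart.bbbb"]

        for part in pool_parts:
            total = 0
            for root, _dirs, files in os.walk(part):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(root, name))
                    except OSError:
                        pass   # skip files that vanish or deny access mid-walk
            print(f"{part}: {total / 1024**3:.1f} GiB")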

  14. Not sure how hierarchical pools react to bad underlying physical drives where duplication/evacuation is concerned.  It's a new-ish feature that Christopher and Alex know the most about, and it can get complicated depending on your overall architecture (pools within pools, physical drives at different levels, etc). 

    Moving the Clouddrive cache to a dedicated SSD sounds like a great idea, and avoids a lot of potential space issues.  I'd still look into using the Prevent Drive Overfill plugin in DP, at least as long as you have a part of the Pool residing on C:.

  15. On 9/2/2018 at 12:59 PM, Thronic said:

    I'm a bit hesitant on this step:

    5. Move all my Files from I: to I:\PoolPart.XXXYYY

    Will it start a full cut-and-paste operation or transparently just move the index since it's on the same drivepool mount. That drivepool has 10 drives with 18TB duplicated data on it at the moment. When that's done I was thinking of moving drive letters around, so my services including Plex will be non the wiser.

    It is identical to a normal cut-and-paste on the same drive in Windows, i.e. it will happen almost immediately.  However, Drivepool won't be aware of the new files in the PoolPart.xxxxx folder until you tell it to re-measure.
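
    To illustrate why it's instant (my own sketch, using the placeholder names from your post): a move that stays on the same volume only renames the directory entry - no data is copied.

        import shutil

        # Same-volume move: shutil.move falls back to os.rename here, so only
        # the directory entry changes - the file data never moves on disk.
        src = r"I:\Media"                      # placeholder source folder
        dst = r"I:\PoolPart.XXXYYY\Media"      # XXXYYY = the per-disk suffix

        shutil.move(src, dst)
        # Afterwards, tell Drivepool to re-measure so it notices the new files.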

    As a side note for Plex:  only store media files in the Pool (movies, music, etc).  Plex uses hardlinks in its metadata folders on C: (%localappdata%\Plex Media Server), and hardlinks aren't supported in Drivepool.

     

     

    On 9/2/2018 at 12:59 PM, Thronic said:

    Also, just to be clear, if a drive goes bad, how will the master pool react to a degradation of the local part in that pool until the missing or damaged drive is swapped? Will the master pool go into read-only and indicate the local pool as a damaged drive until the drive in that pool has been replaced? I'm guessing and hoping all details of such events have been tested and thought of.

    Editing my reply here to clarify based on better info.  When a drive goes missing completely, the Pool it's in will go into a Read Only mode, because it can't determine the duplication status across all drives:

     

     

    On 9/2/2018 at 12:59 PM, Thronic said:

    And, how will the clouddrive cache behave when it suddenly needs to duplicate massive amounts of data? Will it rotate the 1 GB default I've set? or rape the entire drive (I'm using C: as cache, a 150GB available SSD which I do not want to fill up and crash the system). Do have I have to use a dedicated drive as cache for this or set it fixed for it not to run wild?

    Not sure I fully understand your question.  What level of duplication do you have set, and is it on the full Pool, or just on files/folders?  If C: is holding a pool part, it is fully available for use by the pool up to and including 100% full.  You'd have to use the Balancer plugins to prevent drive overfill on that volume to avoid that scenario.

    Pretty certain that if you have a Clouddrive volume as part of a pool holding duplication items (or space available for duplication), Drivepool will try to duplicate (or evacuate) to local drives first.  i.e. if you have a pool with 10 drives (one a Cloud drive, 9 physical) and have 6x duplication set, you would still have enough local space on the physical drives to handle all of the duplicated copies.  None would -need- to sit on the Cloud drive, though they could.  It also depends on your use of the Duplication Space Optimizer plugin.

    Complete info on your pooling architecture would be helpful.

  16. 11 hours ago, The_Saver said:

    How can I make the HDDs stop balancing among themselves? This would ensure that the SSDs can always move files to the HDDs.

    My DrivePool is constantly increasing in size so it would not be an issue if, let's say, 1 TB is deleted from a single drive because then that drive will have the lowest % of space used and new files will be put on it (I use Disk Space Equaliser).

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first.  They merely act as a temporary front-end pool that is emptied out over time.  The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on.  Try not to let those SSDs fill up.  :)

    I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use.  If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on).  When the re-balance is complete, toggle the Disk Space Equalizer back off.

  17. 12 minutes ago, fattipants2016 said:

    Wrong thread?

    Nope, not at all.  Was proposing a way for you to "delete" your empty Downloads folder by simply merging it permanently with another folder (like Documents, which is usually filled).  Any other non-empty directory would do the trick.

     

    14 minutes ago, fattipants2016 said:

    I got to test SnapRAID earlier when a wonky script deleted ~4,000 files before I could stop it, so until someone comes along with something new to try I'm done.

    And this ^^, BTW, is why anyone using snapRAID without file placement rules is absolutely nuts.

    If those files had been scattered across different drives there's 0 chance I would have been able to get them back.

    How did it work for you?  It's one reason I use 2-parity, so I can recover from multiple drive issues.  But yeah, across a bunch of different disks is rough.  Sounds like it didn't put them in the trash bin for the respective drives either.  Nasty nasty script, whatever you're running.  :blink:

  18. Just brainstorming here since I've never done it - you could use the "Location" tab in the folder's properties area, and move the folder so that it occupied the -same- folder as another, like Documents.

    I just attempted this myself after clearing the Downloads folder for my user account, and got interesting results.  I pointed its Location at Documents and approved the change, and was immediately told in a new child window:

     

    Quote

    Do you want to redirect folder "Downloads" into another system folder "Documents" located at "C:\Users\username\Documents"?  If you proceed with redirection you will not be able to separate them or restore default location.

    Do you still want to proceed with redirection?

     

    So it looks like you absolutely can merge the Downloads folder (location) with another typically non-empty folder (whatever you choose).  You can't un-merge them after the fact, but you could easily re-create the Downloads folder manually and just enter the path into the Location tab dialog.
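
    For reference, Windows tracks that redirection under the per-user shell folders key in the registry, so you can check where Downloads currently points with a read-only peek like this (the GUID is the well-known Downloads folder ID):

        import winreg

        # Known-folder GUID for Downloads under the per-user shell folders key.
        DOWNLOADS_GUID = "{374DE290-123F-4565-9164-39C4925E467B}"
        KEY = r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
            value, _type = winreg.QueryValueEx(key, DOWNLOADS_GUID)
            # The value may contain unexpanded variables like %USERPROFILE%.
            print("Downloads currently points to:", value)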

  19. 1 hour ago, Christopher (Drashna) said:

    @Jaga Step 5 would still be required, if you want to move the data into the E:\ pool, though. 

    True, if it was a single drive pool and the files were outside on the same disk.  I may have made too many assumptions on the pool(s) architecture.  :rolleyes:
