Covecube Inc.


Reputation Activity

  1. Like
    Jaga got a reaction from letsloveadrian in Drive removal from pool failed, now what?   
    Glad you got it resolved.  Pretty sure what I suggested would have worked, but dealing directly with Support (Christopher and Alex) is top-notch too.
  2. Like
    Jaga reacted to Christopher (Drashna) in Harddrive crashes with stablebit scanner   
    So, removing StableBit Scanner from the system fixed the issue with the drives? 
    If so.... well, uninstalling the drivers for the controller (or installing them, if they weren't) may help out here. 
    Otherwise, it depends on your budget.  I personally highly recommend the LSI SAS 9211 or 9207 cards, but they're on the higher end of consumer/prosumer. 
  3. Thanks
    Jaga got a reaction from lazyg408 in "Last scanned: Never" for drives in Mediasonic ProBox   
    That is also a scenario where de-registration/uninstall/re-install/re-registration can help.  If you want to try it, here is how:
    To deactivate the license: go into "Scanner Settings..", navigate to the License tab, click View license details, then deactivate. Then do a manual uninstall of Stablebit Scanner. Re-download the latest version, to be sure your first copy wasn't bad in some way. I find cleaning out Windows temp files to be helpful, especially with botched or damaged installations.  Tools like CCleaner, Wise Disk Cleaner/Wise Registry Cleaner, etc. Re-install, and then re-activate your copy with your activation key.
  4. Like
    Jaga reacted to Christopher (Drashna) in Reformatted Drives not appearing in the Non-Pooled list   
    Yup, we use volume IDs to track the disks, actually.
    However, there are some changes to how that's handled in the latest beta, so this may not be an issue in the future. 
  5. Like
    Jaga reacted to Christopher (Drashna) in Few requests   
    Actually, yes.  That's part of what we want to do with StableBit Cloud.
    Good news!  The latest beta has a concurrency limit that you can configure.  It defaults to 2 drives right now, so this should help you out greatly! 
  6. Like
    Jaga reacted to zeroibis in Correct way to apply PrimoCache write cache   
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache, because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again.
    Now the problem makes a lot more sense and is why the speeds looked great in benchmarks but did not manifest in real world file transfers.
  7. Like
    Jaga got a reaction from colibri in Migration to a new system   
    If you're asking about DrivePool and Scanner, they are very easy to migrate.
    You will need to deactivate the license in each piece of software before uninstalling it, so you can reactivate on the new system.  Click the gear icon (upper right) in each, then Manage License. For the pool: simply shut down, migrate the physical drives to the new system, boot up and install DrivePool.  It will see the prior pool drives and automatically re-create the pool using them.  Then just reactivate both pieces of software on the new system using the activation key(s) you own.
    If you are also using Clouddrive:
    Detach the drives, deactivate the license, install on new system, activate the license or trial and attach the drives.  (per a quote from Christopher)  
  8. Like
    Jaga got a reaction from colibri in Migration to a new system   
    If you're going that route, you'll want to consider your old pool architecture (how many drives, what size, etc) compared to the new one.  If you have the same number of drives, the migration still isn't too hard:
    1. Deactivate the licenses, uninstall the software.
    2. Share each hidden PoolPart-xxxxx folder (from each of the old pool's drives) on the network (e.g. OldPool-E, OldPool-F, etc).
    3. Install DrivePool on the new machine, create a new pool using your new drives, then stop the DrivePool service (optional).
    4. Access each network drive-share you created on the old machine, and copy its contents into each hidden PoolPart-xxxxx folder in the new pool.  Each drive in the pool has one on it.
    5. Start up DrivePool on the new machine (restart the service first if you stopped it), and tell it to re-measure the pool so it can see all the content you copied in.
    If you have a different number of drives, that's okay too.  You'll have to copy the files from the old network drive-shares into the new PoolPart-xxxxx folders on each new drive, and then re-measure (first) and re-balance (second) on the new machine.  If your new drives are large enough, you can copy multiple old PoolPart-xxxxx folders' contents into the same PoolPart-xxxxx folder on the new pool.
    Basically you are manually populating the new pool's drives with the old files via network copy, then telling Drivepool to "go see what's there" (re-measure), and "spread it all out evenly" (re-balance).  You may want to use the Disk Space Equalizer plugin for Drivepool, to evenly spread out the newly copied files in the new pool.  Install it, open Drivepool, toggle the plugin on once, let it run, then toggle it off.
    It won't matter if Drivepool/Scanner are deactivated/uninstalled on the old machine, since all you're doing is manually accessing the hidden Poolpart-xxxxx folder that it leaves on all pooled drives.  DP doesn't need to be running, activated, or even installed on the old machine for that, just the new one.
    One thing of note:  to keep the same folder structure your old pool had, you want to copy the folders/files from inside the old Poolpart-xxxxx folders exactly as they were.  If you put files or folders into different locations from where they used to be, the pool won't look the same.  The exception of course is copying two drives' worth of Poolpart-xxxxx contents into just one Poolpart-xxxxx on a new pool drive.  The folder/file hierarchy inside the hidden Poolpart folders is important.
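    The merge-copy described above can be sketched in a few lines of Python (this is not an official StableBit tool, just an illustration; all paths in the example call are hypothetical). It copies the contents of one or more old hidden PoolPart folders into a new pool drive's PoolPart folder while preserving the internal folder/file hierarchy:

```python
# Sketch: merge old PoolPart folder contents into a new pool drive's
# PoolPart folder, keeping the relative layout intact.
import shutil
from pathlib import Path

def merge_poolparts(old_poolparts, new_poolpart):
    """Copy each old PoolPart's contents into new_poolpart.

    dirs_exist_ok=True (Python 3.8+) lets several old PoolParts
    merge into the same destination folder, which is what you do
    when the new pool has fewer, larger drives.
    """
    new_poolpart = Path(new_poolpart)
    new_poolpart.mkdir(parents=True, exist_ok=True)
    for old in old_poolparts:
        shutil.copytree(old, new_poolpart, dirs_exist_ok=True)

# Hypothetical example (network shares from the old machine):
# merge_poolparts([r"\\OldServer\OldPool-E\PoolPart.xxxxx",
#                  r"\\OldServer\OldPool-F\PoolPart.xxxxx"],
#                 r"E:\PoolPart.yyyyy")
```

    Any copy tool (robocopy, Explorer, etc.) works just as well; the only thing that matters is that the relative hierarchy inside each PoolPart folder is preserved.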
  9. Like
    Jaga reacted to zeroibis in Volume does not show up in DrivePool   
    OK, the solution is that you need to manually create the virtual disk in PowerShell after making the pool:
    1. Create a storage pool in the GUI, but hit Cancel when it asks to create a storage space.

    2. Rename the pool to something that identifies this RAID set.

    3. Run the following command in PowerShell (run as administrator), editing as needed:
    New-VirtualDisk -FriendlyName VirtualDriveName -StoragePoolFriendlyName NameOfPoolToUse -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize

  10. Like
    Jaga reacted to PossumsDad in DrivePool and Scanner in a separate box connected with SAS cables   
    I used this adapter cable for years and never had a problem. Before I bought my server case I had a regular old case. I had (3) 4-in-3 hot swap cages next to the server. I ran the SATA cables out the back of my old case. I had a power supply sitting on the shelf by the cages, which powered them.
    The cool thing was that I ran the power cables that usually go to the motherboard inside the case from the second power supply. I had an adapter that would plug into the motherboard and the main power supply that the computer would plug into. The adapter had a couple of wires coming from it to a female connection. You would plug your second power supply into it. 
    What would happen is that when you turned on your main computer, the second power supply would come on. That way your computer will see all of your hard drives at once. Of course, when you turned off your server, both of the power supplies would turn off.
    Here is a link to that adapter. Let me know what you think.
  11. Thanks
    Jaga reacted to Christopher (Drashna) in Understanding Ordered File Placement   
    This is what it's supposed to do: 

    But if it's working now, then that's fine.
  12. Thanks
    Jaga reacted to Christopher (Drashna) in M.2 Drives - No Smart data - NVME and Sata   
    Nope, different protocol.  But trust me when I say that NVMe health is FAR superior to SMART. 
    As for protocol: 
    That has a picture of it. 
  13. Like
    Jaga got a reaction from APoolTodayKeepsTheNasAway in Balancing has drives transfering far slower than what they are capable of doing.   
    Click the double-right arrow to the right of the balancing bar while it's working.  That will increase priority on the re-balance and speed up transfers.
  14. Confused
    Jaga reacted to Umfriend in Read Striping not working at all?   
    As far as I can tell, read striping only works when you are reading LOTS of files, not a single large file. If, for instance, I browse through a folder with lots of pictures, then the thumbnails come up way quicker with read striping enabled. I *think* this is because DP is file based and opens I/O to individual files located on an individual disk. If many files are to be read concurrently, then it may initiate some I/O on one and some on the other HDD.
  15. Haha
    Jaga reacted to Umfriend in M.2 Drives - No Smart data - NVME and Sata   
    What a reboot won't solve... Seems OK now.
  16. Like
    Jaga got a reaction from Pichu0102 in Hierarchical file duplication?   
    Got it, looks like you're covered for now then.   
  17. Like
    Jaga got a reaction from Pichu0102 in Hierarchical file duplication?   
    What you might want to do instead, is make a Local Pool 1 which holds local drives A-E,  rename your cloud pool to Cloud Pool 1, and then make a Master Pool that holds Local Pool 1 and Cloud Pool 1.  It's easier if different levels have different nomenclature (numbers vs letters at each level).
    Master Pool (2x duplication)
      Local Pool 1 (any duplication you want)
        Local Drive A
        Local Drive B
        Local Drive C
        Local Drive D
        Local Drive E
      Cloud Pool 1 (no duplication)
        FTP Drive A
        B2 Drive A
        Google Drive Drive A
    Note that with this architecture, your cloud drive space needs to be equal to the size of your Local Pool 1, so that 2x duplication on the Master Pool can happen correctly.
    If the FTP Drive A goes kaput, Cloud Pool 1 can pull any files it needs from Local Pool 1, since they are all duplicated there.  Local Pool 1 doesn't need duplication, since its files are all over on Cloud Pool 1 also.  You can (if you want) give it duplication for redundancy in case one of the cloud sources isn't available - your choice.
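    The capacity requirement above is simple arithmetic; here is a quick back-of-the-envelope check (the drive names and sizes are made-up examples, not from the post):

```python
# Sketch: with 2x duplication on the Master Pool and two member pools,
# every file lives once on each side, so the cloud side must hold at
# least as much as the local side. All sizes below are hypothetical.
local_drives_tb = {"A": 4, "B": 4, "C": 8, "D": 8, "E": 12}
cloud_drives_tb = {"FTP": 10, "B2": 14, "GDrive": 12}

local_capacity = sum(local_drives_tb.values())   # 36 TB
cloud_capacity = sum(cloud_drives_tb.values())   # 36 TB

print(f"Local Pool 1: {local_capacity} TB, Cloud Pool 1: {cloud_capacity} TB")
print("OK" if cloud_capacity >= local_capacity else "Cloud side too small")
```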
    As an alternate architecture, you can leverage your separate cloud spaces to each mirror a small group of local files:
    Master Pool (no duplication)
      Pool 1 (2x duplication)
        Local Pool A (no duplication)
          Local Drive a
          Local Drive b
        Cloud Pool A (no duplication)
          FTP Drive
      Pool 2 (2x duplication)
        Local Pool B (no duplication)
          Local Drive c
          Local Drive d
        Cloud Pool B (no duplication)
          B2 Drive
      Pool 3 (2x duplication)
        Local Pool C (no duplication)
          Local Drive e
        Cloud Pool C (no duplication)
          Google Drive
    What this does is allow each separate cloud drive space to back up a pair of drives, or a single drive.  It might be more advantageous if your cloud space varies a lot, and you want to give limited cloud space to a single drive (like in Pool 3).
  18. Like
    Jaga got a reaction from Pichu0102 in Hierarchical file duplication?   
    Sorry, I think I misunderstood your goals in the first post.
    As long as your cloud pool is just a member drive of the larger pool, and you have at minimum 2x pool duplication on the main pool, then you'd be covered against any failure of the cloud pool, yes.  But if you lost 2 or more of the local drives and those were the sole holders of a file that was duplicated, the cloud drive wouldn't save you, the data would still be lost.
    It looks like you're relying on local drives for redundancy, and the cloud drives for expansion.  Normally it's the other way around.
  19. Thanks
    Jaga reacted to Thronic in Recomendations for SAS internal storage controller and configuration   
    It's just exaggerated. The URE average rates at 10^14/15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some HW controllers and older SW have a very small tolerance for it. Modern, updated RAID algorithms can continue a rebuild, reporting that particular area as reallocated to the upper FS IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive that acts slower and hangs on that area in much the same manner as a rebuild will. 
    I'd still take striped mirrors for max performance and reliability and parity only where max storage vs cost is important, albeit in small arrays striped together.
  20. Thanks
    Jaga reacted to Christopher (Drashna) in Not balancing onto new drives   
    To clarify a couple of things here (sorry, I did skim here):
    StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space.  This means the 1TB drives, first, and then once they're full enough, on the 500GB drive.    So, yes, this is normal.
    The Drive Space Equalizer doesn't change this, but just causes it to rebalance "after the fact" so that it's equal.  
    So, once the 1TB drives get to be about 470GB free/used, it should then start using the 500GB drive as well. 
    There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks, when this happens (red arrows, specifically).  If you don't see that, then it defaults to the "normal" behavior. 
  21. Thanks
    Jaga reacted to Christopher (Drashna) in Read only pools   
    No.  Missing disks cause the pool to go read only. Period. 
    There has never been an option to change that, because any data written could cause corruption, as the data may/will get out of sync.   So to prevent that, the pool is made read only until the missing disk is resolved. 
  22. Thanks
    Jaga reacted to Christopher (Drashna) in Moving from WHS V1   
    RIP Essentials.  RIP WHS. 
    2019 Essentials is a rebrand of the "Foundation" server SKU.  
    It lacks everything that makes Essentials "Essentials".  That means no: 
    • Dashboard
    • Connector
    • Client Backup and Restore
    • Remote Access website (aka Anywhere Access)
    • Office 365 integration
    • etc.
    What does it give you?  A neutered version of Standard, without the need for CALs (Client Access Licenses).  Which is a joke/slap in the face to the Home Server community.  (This is my personal opinion and not representative of Covecube.)
    If you want the WHS experience, ... that's not Server 2019, at all.  
    If you want a cheap(er) server OS, don't want to deal with all the licensing involved with servers, and are okay with some key features being removed/not present, then 2019 Essentials should be "good enough".
    Mainstream support (until 2022) gets updates to the "Essentials" code and "new features".  After that, and until 2027, it is only security patches: no feature fixes, no new features, etc. 
    Yeah, the folder structure is because it's a domain controller (the sysvol and netlogon shares).  These are non-negotiable for Essentials.  The File History and Folder Redirection stuff is optional, but nice.  
    That's why you're supposed to use the "Shared Folders" folder on Essentials. It mostly has all the folders that you want, and none of the system stuff. 
    And Veeam is a very popular choice. 
    You're very welcome!  Paul "Tinkertry" was one of the very active members of the WHS community, and he has a lot of good info on his blog.
    My recommendation would be Windows Server 2016 Essentials.   Because with the Standard version, you're supposed to buy User CALs, as well. 
    As for flexibility, there are only a couple of things missing from standard. And that's mostly data deduplication, which doesn't work on the pool, actually.  So, you really don't miss out on much. 
  23. Thanks
    Jaga reacted to Christopher (Drashna) in Read only pools   
    Yeah, I think it was deprecated, as we drastically overhauled the removal process to make it much more seamless.  
    In fact, when removing a disk now, the pool is not put into a read only mode.  Though, duplicated data may be "marked" as read only (errors out on writes). So this setting was rendered obsolete. 
  24. Haha
    Jaga reacted to Umfriend in Moving from WHS V1   
    Actually, all I want is what WHS2011 does, but with continued support, security updates, and support for way more memory. In any case, I was planning to go WSE2016 at some stage, but that will now be WSE2019. I was just trying to warn you that WS2016 support will end, I think at the end of 2022 (2027 extended support, no clue what that means), and that going WSE2019 might save you a learning curve.
    Having said that, and missing knowledge & experience with WSE 2012/2016, it may be that WSE 2019 actually misses the dashboard (if that is what the Essentials Experience role is):
    So basically, I don't know what I am talking about...
  25. Like
    Jaga reacted to Umfriend in Recomendations for SAS internal storage controller and configuration   
    I would hope, and am pretty sure, this won't work. DP checks, in case of duplication, whether files are placed on different *physical* devices.
    @OP: I recently bought, I think, an LSI 9220-8i, flashed to IT mode (the P20...07 version). The label was Dell PERC H310; it should be the same as the IBM M1015. I am not sure, but as far as I understand it, the 9xxx number also relates to the kind of BIOS it is flashed with. In any case, it works like a charm. One thing though: these controllers run HOT, and it is advisable to mount a fan on top (just use a 40mm fan and mount it by screws running into the upstanding bars or suchlike of the heatsink).