Umfriend got a reaction from Shane in 2nd request for help
Use remove. You can move files through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you restart the DP service, it may try to rebalance files back to other drives, so you need to turn off balancing to prevent that from happening. Also, if you use duplication, then you want to disable that first. Yes, it will all take some time, but it has, AFAIK, never failed. Quick and dirty though... not that failsafe sometimes. And even cutting/pasting will take quite some time.
Umfriend got a reaction from Remcroft in Possible to copy directly to poolpart folders without issues?
In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
Umfriend got a reaction from vfsrecycle_kid in Can/Does DrivePool preemptively calculate the total free space post-replication-rebalance?
No. If, and only if, the entire Pool had a fixed duplication factor then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or 0.33TB with x3 duplication, etc. However, as soon as you mix duplication factors, well, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my Pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rule in my book.
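The arithmetic above can be sketched in a couple of lines: with a fixed pool-wide duplication factor d, net writable space is just raw free space divided by d. This is a simplification, since it assumes the copies can always land on different disks:

```python
def net_free_space(free_tb: float, duplication_factor: int) -> float:
    """Net data you can still store, given raw free space and a fixed
    pool-wide duplication factor. Simplified: assumes each copy can
    always be placed on a different disk."""
    return free_tb / duplication_factor

print(net_free_space(1.0, 2))            # 0.5 TB net with x2 duplication
print(round(net_free_space(1.0, 3), 2))  # 0.33 TB net with x3 duplication
```

With mixed duplication factors per folder there is no single divisor, which is presumably why DP shows only raw free space.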
Umfriend got a reaction from TeleFragger in My Rackmount Server
Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and I have no way forward so this will be what I am running until the end of days probably....
But wow, nice setup!
With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant IIRC) but no luck.
Umfriend got a reaction from TeleFragger in 2 pools - best setup method?
I am not exactly sure what you want to accomplish. Do you want duplication and fast reads? You might want to consider using hierarchical pools, something like:
Pool A: 12 x 500GB SSD
Pool B: 2x4TB + 2x2TB + 1x 500GB SSD
Pool C: Pool A + Pool B
I would think that writes go fast (Pool A SSD only, Pool B uses the SSD Cache) and that reads go fast as well as they would read from Pool A effectively (even if the request goes out to Pool C).
The downside is that you can only store about 6TB duplicated.
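That 6TB figure follows from the pool sizes: with x2 duplication on Pool C, one copy of each file lands on Pool A and one on Pool B, so the duplicated capacity is capped by the smaller sub-pool. A rough back-of-the-envelope check (illustration only, ignores overhead):

```python
# Capacities of the two sub-pools, in TB.
pool_a = 12 * 0.5             # 12 x 500GB SSD
pool_b = 2 * 4 + 2 * 2 + 0.5  # 2x4TB + 2x2TB + 1x 500GB SSD

# With one copy on each sub-pool, the smaller pool is the bottleneck.
duplicated_capacity = min(pool_a, pool_b)
print(duplicated_capacity)  # 6.0 -> about 6TB of duplicated storage
```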
Umfriend got a reaction from Christopher (Drashna) in DrivePool not filling new empty disks immediately?
So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would be stored on that 6TB HDD indeed. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until it has 4TB free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available.
This is at default settings and without duplication.
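The fill order described above can be simulated with a simple greedy rule: each chunk of new data goes to the disk with the most free space. This is a sketch of the equal-free-space behaviour, not DP's actual balancer:

```python
import heapq

def fill_order(free_gb, chunks, chunk_gb=100):
    """Simulate equal-free-space placement: every chunk of new data is
    written to the disk with the most free space. Returns the index of
    the disk chosen for each chunk. Illustration only."""
    # Max-heap via negated free space; ties break on disk index.
    heap = [(-f, i) for i, f in enumerate(free_gb)]
    heapq.heapify(heap)
    order = []
    for _ in range(chunks):
        neg_free, i = heapq.heappop(heap)
        order.append(i)
        heapq.heappush(heap, (neg_free + chunk_gb, i))
    return order

# Disks with 6TB, 4TB and 0.5TB free, writing 100GB chunks:
order = fill_order([6000, 4000, 500], chunks=30)
print(order[:20])    # twenty 0s: the emptiest disk absorbs the first 2TB
print(order[20:30])  # then disks 0 and 1 alternate; disk 2 stays idle
```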
Umfriend got a reaction from TeleFragger in moving drives around
TL;DR but yes, DP will recognise the Pool. You could disconnect them all and plug them in on another machine and DP would see the Pool again.
One small caveat is that if you use plug-ins that are not installed on the new machine then you may have some unwanted behaviour. Other than that, it should work.
Umfriend got a reaction from TeleFragger in 10gb speeds using ssd cache?
I doubt Stablebit would want to go the RamCache route because of the risk of any system failure causing the loss of (more) data (compared to SSD Cache or normal storage).
I don't, but I know there are people here that successfully use the SSD Cache. And it really depends on what SSD you are using. If it is a SATA SSD, then you would not expect the 10G to be saturated.
In any case, @TeleFragger (OP) does use duplication so he/you will need two SSDs for this to work.
Umfriend reacted to thepregnantgod in How many hdds an evga 1600w t2 can support?
I'm late to the convo but from my personal experience...
I got a 1600W Titanium EVGA PSU that powers a Zenith ROG Extreme mobo with a 2950X OC'd to 4.1GHz. It also powers 2x Titan XP (SLI) Nvidia cards.
Attached, I have 2 SSDs, 3 NVME drives, 6 Bluray drives, and various fans.
Three HBA cards (an LSI 9201-16i and two Intel expanders).
I also have 37 Green drives from 3TB-12TB and 3 SAS drives.
My PSU is able to support all that.
Now...that being said, I can't plug the vacuum into the wall in that loft because that'll trip the switch since my system is likely pulling max from the wall.
Umfriend got a reaction from stigzler in Drivepool on new Windows install
No, don't remove in the GUI. That would cause DP to try to move files off of the HDDs. You do not want to change your Pool at all, you just want to migrate it to a new environment. So: "just do my new Windows HS install, install DrivePool and then physically re-connect the drives?" -> Exactly right.
Umfriend got a reaction from Christopher (Drashna) in Move folders quicker within drivepool
I assume you mean to say that you have data in root folders on HDDs that are already added to the Pool and you want to move the data in the root folder(s) to the Pool, quickly.
This is what you are looking for: http://wiki.covecube.com/StableBit_DrivePool_Q4142489 I think.
Umfriend got a reaction from Christopher (Drashna) in Have Scenario - Need Help
I assume you have no duplication. I would, provided I have enough ports:
1. Physically remove the faulty HDD (you have done this already)
2. Remove it through the DP UI -> this should stop DP complaining about a missing disk and unlock the Pool
3. Add the two new HDDs to the Pool
4. Remove the two old HDDs from the Pool through the UI -> this will move all files to the new HDDs
5. Remove the two old HDDs physically from the server
6. Then see what you can recover from the faulty HDD and copy that back to the Pool.
I would consider keeping the two performing old HDDs in the Pool and using x2 duplication.
Umfriend got a reaction from Stuart_75 in File duplication from local to cloud possible?
Wrt step 4, Mick is right and clarified my point very well. Thanks. So if it is the case that your files are in Pool A then they are located in:
E:\PoolPart.*, F:\PoolPart.* and G:\PoolPart.*. You could move them, HDD by HDD, using Explorer to:
E:\PoolPart.*\PoolPart.*, F:\PoolPart.*\PoolPart.* and G:\PoolPart.*\PoolPart.*
The * is for an insanely unintelligible unique name (a GUID). So in a nested path like E:\PoolPart.*\PoolPart.*, the first PoolPart.* folder is Pool A and the second, nested one is Pool B.
When I used "upper" and "lower" I meant this hierarchically. The drive looks like:
-- Other Folders
-- PoolPart.* Folder (this is Pool A, and within Pool A you can have)
---- Other Folders (only in Pool A)
---- PoolPart.* Folder (this is Pool B)
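The per-drive move described above can be sketched as a small helper. This is a hypothetical illustration, assuming the DrivePool service is stopped and each drive holds exactly one outer PoolPart.* folder (Pool A) containing one inner PoolPart.* folder (Pool B); the function name and structure are my own, not part of DrivePool:

```python
import shutil
from pathlib import Path

def seed_nested_pool(drive_root: str) -> list:
    """Move Pool A's data into the nested Pool B PoolPart folder on one
    drive. Assumes the DrivePool service is stopped and that the drive
    has one outer PoolPart.* folder containing one inner PoolPart.*
    folder. Returns the names of the entries that were moved."""
    root = Path(drive_root)
    outer = next(root.glob("PoolPart.*"))   # Pool A's hidden folder
    inner = next(outer.glob("PoolPart.*"))  # Pool B's folder inside it
    moved = []
    for entry in list(outer.iterdir()):
        if entry == inner:
            continue  # don't move Pool B's folder into itself
        shutil.move(str(entry), str(inner / entry.name))
        moved.append(entry.name)
    return sorted(moved)
```

Run per drive (E:, F:, G:), then let DP re-measure; since the moves stay on the same volume they are fast renames rather than copies.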
Umfriend got a reaction from slongmire in Create two pools on same physical disk
I was a bit too quick, as I don't know your use-case. But if you run a server that is used by more people who could cause I/O on both partitions, then HDD performance will suffer, as the head actually needs to travel from one partition to the other. But such a scenario might not be relevant for you at all. And heck, backups over performance, I say.
In any case, your 2-disk 2-partition plan will work (I had a similar setup for a while). If you have the budget and the machine is somewhat up-time critical, you might consider having a third 8TB HDD handy in case of a failure.
Umfriend got a reaction from Jaga in Read Striping not working at all?
As far as I can tell, read striping only works when you are reading LOTS of files, not a single large file. If, for instance, I browse through a folder with lots of pictures, then the thumbnails come up way quicker with read striping enabled. I *think* this is because DP is file based and opens I/O to individual files located on an individual disk. If many files are to be read concurrently, then it may initiate some I/O on one and some on the other HDD.
Umfriend got a reaction from Jaga in Moving from WHS V1
Actually, all I want is what WHS2011 does but then with continued support and security updates and support for way more memory. In any case, I was planning to go WSE2016 at some stage but that will be WSE2019. I was just trying to warn you that WS2016 support will end, I think end of 2022 (2027 extended support, no clue what that means) and that going WSE2019 might save you a learning curve.
Having said that, and missing knowledge & experience with WSE 2012/2016, it may be that WSE 2019 actually misses the dashboard (if that is what the Essentials Experience role is).
So basically, I don't know what I am talking about...
Umfriend got a reaction from Jaga in Recomendations for SAS internal storage controller and configuration
I would hope, and am pretty sure, this won't work. DP checks, in case of duplication, whether files are placed on different *physical* devices.
@OP: I recently bought an (I think) LSI 9220-8i, flashed to IT mode (the P20...07 version). The label was Dell PERC H310; it should be the same as the IBM M1015. I am not sure, but as far as I understand it, the 9xxx number also relates to the kind of BIOS it is flashed with. In any case, it works like a charm. One thing though: these controllers run HOT and it is advisable to mount a fan on top (just use a 40mm fan and mount it by screws running into the upstanding bars or somesuch of the heatsink).
Umfriend got a reaction from TAdams in Moving from WHS V1
For me, the client backups are what makes Windows Server worthwhile. File sharing, running a certain downloading client (that we'll not discuss), media server etc. are nice extras (although all that can be done on a W10 machine as well, of course). I am currently on WHS 2011 and intend to go WSE 2019 (which, as it turns out, will be a SKU, but WS Standard won't offer the Essentials role anymore). It'll be a steep learning curve for me as well, which is a shame. But that is the one reason I am still waiting a bit and going WSE2019: it'll be longer before I have to go through that experience again. My main issue is that with WHS v1 (and 2011 somewhat) there were online resources aimed at, well, SOHO. With WSE2016 and higher that is far less the case. If you find one, please let me know.
Umfriend reacted to Jaga in Drivepool - a reliability story
With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with Drivepool for others to benefit from.
There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up Drivepool threw up error messages and the GUI for it wouldn't open. I found the culprit - somehow the Drivepool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay" I thought, and did an install repair on Stablebit Drivepool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start.
So at that point I assumed something in the software was corrupted, and decided to 1) uninstall Drivepool, 2) bounce the server again, 3) run a cleaning utility and 4) re-install. I did just that, and Drivepool installed to the same location without complaint. After starting the Drivepool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened.
Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment.
That's my positive story for the day, and why I continue to recommend Stablebit products.