Everything posted by eduncan911

  1. Two people reporting the same problem above... Just to state a commonality between the two: they were both 'formatting' drives at the time they reported errors/lost data. Maybe related? Does seem scary. You shouldn't have to; but I personally install all new drives in a separate machine first to format them (at 64 KB clusters, because almost all are movies/TV). Once formatted, I move them to the server. I do this mostly out of habit from back in the NT days, when random $&@# happened when formatting one drive and accessing another. Absolute guess here... But perhaps DrivePool constantly scans for new drives to add to the pool, and during the format, something may crash in that scanning - crashing the pool. A reboot should resolve that, I think.
  2. Now that I've let it run for several hours on all disks, the HGST drives are up to 165 to 170 MB/s - all 8 of them. That's nice. The Seagate ones are at 155 MB/s. And it looks like they will all complete within about 3 hours. That's a big improvement in the "work window."
  3. Perfect, that's what I was looking for! Ah, so setting it to 0 is unlimited. Done and done! Yep, I already disabled the throttling on the same controller. A glance over at the watt meter with all 14 HDDs scanning (8x HGST 4TB, 3x Seagate Blue 4TB, and various other WD Green drives) shows 188 Watts at the wall. So, power consumption: check. The hottest drive is 37C, with most around 27 - 32C. The chassis fans are spinning at 50% PWM. Heat generation: low, check. Vibration... Eh... That's actually one of the reasons I switched to HGST, with their marketing claims about vibration isolation... Whether it's a gimmick or not, we'll see. With all HDDs spun down it idles around 85 Watts... The LSI Expander uses 12 Watts and the LSI card (as well as the IBM M1015) uses 9 Watts. One day I'll have 8x HDDs large enough that I won't need the LSI HBA nor the Expander, saving another 21 Watts. I too have 2x IBM M1015s cross-flashed to the 9211-8i (posted in another thread about them recently). I moved to the LSI 9211-8i as it was better for cable routing in my chassis. I need to sell them... They are just sitting in a box next to me. Hmm, no "For Sale" section here at StableBit?
  4. I have set ScanMaximumConcurrent to "4" for my setup. Are there any drawbacks to this setting? Too high, too low? My server has an LSI 9211-8i HBA connected to an LSI Expander hot-swap SAS2 backplane (the Supermicro SC846 chassis, for those wondering). The Expander utilizes all 4x 6 Gbps channels for a total of 24 Gbps, or about 3,000 MB/s (megabytes per second). Since each HGST 4TB HDD I have can only sustain about 130 MB/s, I was thinking "4 concurrent" at ~520 MB/s isn't an issue, since that's only ~17% of the overall bus. Or, the same speed as the SATA 6 Gbps SSD I have in the pool on the same controller. The only drawback I can see is bandwidth. But please correct me if I am wrong. I'm down to about 14 HDDs in this chassis in the pool. So, having only 1 scan at a time on the SAS2 bus was taking days to get through. I'd like to bump it to 8 if there isn't any issue? But not all drives get the same bandwidth. For example, here are the four running right now: 108 MB/s, 83 MB/s, 71 MB/s, and 133 MB/s. Should I not be running this many concurrent? Can I go higher? (Some rough math on this is sketched after the last post below.)
  5. As for file permissions, I reset all permissions and force them onto all sub-files and folders out of muscle memory. It's just a default action one has to take when moving NTFS volumes from one machine to another. And it's annoying to users who aren't familiar with the NTFS side of things. A necessary evil - and I should have mentioned that in my previous posts. Sorry about that. Good to hear it went through.
  6. The Intel i350-T4 NIC has 4 ports on it. Each port is connected to its own dedicated switch. The switches are cheap built-in router switches and a few 5-port and 8-port cheap Linksys/Belkin switches. All are Gigabit switches. Each WiFi router I have either acts as one of those dedicated switches, or is connected directly to one of the other switches. This ensures the lowest number of hops from the NIC, through a switch, to the WiFi router. @Chris (Drashna): Hmm, no drive A:\ eh? That may explain the odd "why are the drives spun up?" issues I've had with my previous install. Holy ****. Well, as I haven't completed my setup, I guess I can still change it in time...
  7. I should clarify that I am not using the Unsafe Direct IO... I am using Windows Power Management to spin down my drives; and I can access all SMART data through the LSI controllers (as long as I am on the P20 firmware/driver versions). Microsoft has an MSDN article on "IPM Configuration and Usage." Idle Power Management (IPM) can be enabled for HBAs that do not otherwise allow Windows to power down the disks, by modifying the driver's INF file to allow power management by the drivers. https://msdn.microsoft.com/en-us/library/windows/hardware/ff561476(v=vs.85).aspx I have not had any luck manually adding that entry into the registry; mostly because on a clean install with a single HBA there are usually about 2 dozen entries, and not all are for the single HBA I have. So, I go the INF route. Which, if your drivers are signed (most are these days), requires you to force Windows into Test Signing mode in order to install unsigned drivers (the hash check will fail because you've modified the INF file). From an elevated CMD prompt: C:\> bcdedit.exe -set loadoptions DISABLE_INTEGRITY_CHECKS C:\> bcdedit.exe -set TESTSIGNING ON That (after a reboot) will allow you to install the unsigned drivers with the modified INF file. When you're done, re-enable enforcement with: C:\> bcdedit.exe -set loadoptions ENABLE_INTEGRITY_CHECKS C:\> bcdedit.exe -set TESTSIGNING OFF
  8. Ah yes, I had "Fill SSD drives up to" set to 98%. I've now set that to 60%. Will monitor. But that kind of goes against what I wanted to use a 240 GB SSD for... To cache 240 GB of files before it has to divert to the drive pool, not ~144 GB (60%). Shouldn't it divert to the drive pool anyways? Even with 98% set? And yes, "out of disk space" was the error I was getting.
  9. You had me all the way to the very last sentence, when you say... Which confuses me. Doesn't that contradict your Pro above? It said a Pro is to avoid disk spin-up issues. But at the end, you said the default power settings do not spin up a disk unless you've manually enabled "Query power mode directly from disk." Which implies it does spin up a disk? I'm trying to decide if this effort is worth it to me... Right now, my setup with 20-something disks works well with the default OS power settings. I don't know the exact state of the disks, which is annoying, yes. But I know things power down because my server goes from 210 Watts of usage down to ~85 Watts - which is a tad higher than the "no drives, just 1 SSD" power reading of around 75 Watts I was getting during setup. As soon as Balancing or something starts, all drives become active and the wattage shoots up to 240 or so before settling down to around 210. So should I look into direct access mode like this? Any other benefit besides seeing the actual state? Seeing that some people have to manually copy a file to get Direct I/O working, and my Windows power settings currently work, I think the effort may not be worth it?
  10. Using an Internal virtual network adapter offloads the network traffic from your external physical NICs. I previously ran into issues streaming an MKV (say, from drive 11) while my NZB process (running on another machine) was copying a recent download directly to the pool over the network (to, say, disk 21). They were different disks, but the same NIC. I got stuttering in the movies... The new setup uses the two onboard Intel i210 NICs in a Team-1 for "common" network traffic. I purposely installed an Intel i350-T4 NIC, with all four ports set up in a single Team-2 dedicated solely to "Remote Streaming Access." In other words, if I want to copy a bunch of movies/TV shows off the server, I go through the IP address I have assigned to Team-2 (a rough sketch of the teaming commands is included after the last post below). The problem now is the bottleneck of my cheap switches throughout the house - oftentimes there's only one path for data to flow. But I have ensured that each WiFi access point (the vast majority of streaming is purely over WiFi in my household) has a dedicated path to the server. That's 4 WiFi routers connected to each port (with a switch in between as needed). Zero issues now streaming in my household (I've seen up to 7 devices streaming from the server at once). Well, my desktop uses a wired NIC connection... I cheat there.
  11. FYI, for LSI HBAs, I've been playing with several and had similar issues... IBM M1015 HBA flashed to LSI 9211-8i v.14: I got SMART to work, but only with the built-in Microsoft drivers for the LSI. When switching to LSI's v.14 drivers, it did not work. The reason to switch is to get HDD spindown by modifying the INF of the drivers. IBM M1015 HBA flashed to LSI 9211-8i v.20: same story as the above... but SMART now works with the LSI v.20 driver modified and installed. LSI 9211-8i HBA @ v.16: SMART data with the Microsoft drivers, no SMART pass-through with the v.16 drivers. Mixing and matching driver versions and firmware results in errors in the Windows Event Log (the versions don't match). The point is, for LSI cards, flash to the v.20 firmware AND use the LSI v.20 drivers, and SMART passes through everything.
  12. Ok, I adjusted the settings to: Notice the "Not more often than every" setting that I cleared. It seems the SSD gets about 50% full now and is able to buffer the 120 GB of data while balancing moves it off. I am still evaluating it; but so far, that seems to be what is happening (I copy a new file to the SSD, and Balancing copies the older file I just finished from the SSD to the pool at the same time). I determine this because Scanner shows the old 2 TB and 3 TB drives I am copying from at 60 to 80 MB/s; yet Scanner shows the SSD moving at 180 MB/s, with another drive in my pool getting 100 MB/s. E.g. 80 (write) + 100 (read) = 180 MB/s of overall SSD throughput in Scanner. Obviously, this will cause a lot more "balancing" to happen, which has me concerned - I have 20 disks in my pool (about 60 TB) and "Spin Down" the drives to conserve power (230 Watts vs 85 Watts idle). I would highly prefer "Balancing" to happen after hours, or even once a week, as I've got the room. I just want the SSD used as a cache, and then the file moved to another drive in the pool - where it waits to be "Balanced" later.
  13. Is it possible to get the C# source to this? I guess I can Reflect it and snag it if need be. From the API, is it possible to force a re-balance immediately after a file has been copied to, say, an SSD like this? There's a checkbox on the Settings tab that allows Balancers to force a re-balance. The problem I am running into is that my "SSD" drive, a 240 GB SATA 6 Gbps drive, gets "full" very quickly when I move TBs around - and doesn't get "emptied" until a full re-balance is complete. Which is annoying when trying to move 25 TB around. I'd have to completely disable this "SSD Optimizer" just so I can copy things. Thanks!
  14. I just did the same thing (completely different hardware, but the same steps). Yes, it will find and have the actual full DrivePool all set up and active, as long as the drives are "Online": using the built-in Disk Management application, you can check that they are online. I recommend these steps though: get the hardware set up first before installing the software.
      - Deactivate the license, shut down the old machine.
      - Install the HDDs in the new machine, but do NOT install DrivePool!!! Hold off installing any software.
      - Go to Disk Management and verify the drives are all "Online." If they are not, right-click on them and select "Online."
      - Then install DrivePool, Scanner, etc.
      These are the steps I took and it worked perfectly. The reason why I suggest this is that you can only bring a single HDD "online" at a time. Even though I didn't try it, I would not want DrivePool complaining that I only have 1 HDD out of my pool. It may go wacky or whatever. So, I decided to get the hardware set up first before installing the software. Note: DrivePool will go through a bit of a "Checking" phase. After it is done, it will start to balance with the DEFAULT settings. If you are like me, you may have very custom settings set up in your Balancing. Get that done first and don't let it start balancing. The initial checking that happens after installing DrivePool took several hours, so it gave me plenty of time to get things "Balanced" correctly according to my settings. Scanner will run with defaults. Mine was set to 3 AM - 6 AM for the work period (I was up until 4 AM working on mine, and wondered why throughput dropped dramatically - it was Scanner starting at 3 AM).
      As far as virtual machines and DrivePool/Scanner, be warned that SMART access does not pass through Hyper-V to the VMs. I originally tried to get DrivePool/Scanner all set up on VMs under QEMU (a good native hypervisor under Linux) and W2K12R2 using Hyper-V. But at times I ran into SMART data not being passed through, which ticked me off. It's a Windows/Linux thing. So, I advise setting up DrivePool / Scanner on the HOST. If that's Windows 10, that's fine. That will give you a single DrivePool drive. I name mine "A:\", as a throwback to the floppy disk days (if you don't know floppy disks, then enjoy your youth!). Once you have that set up, I advise setting up a "Virtual Network Adapter" in Hyper-V for "Internal" only <- if that's possible under Windows 10 (I am using Windows Server 2012 R2, which allows me to create "Internal" network adapters). Internal creates a private 10 Gbps network that all VMs can use, as well as to talk to the host - you have to set up the proper IPs and mask (a rough PowerShell sketch of this internal-switch setup is included after the last post below). The advantage is that there is no physical NIC used, so you get some great speed without interfering with remote access to the physical machine. With that, set up a share only for your internal private network (mine is 10.16.210.X). The share will be for your DrivePool drive, open to all anonymous access... And therein lies yet another issue with Windows: true "anonymous" access to a share is very difficult, with so many hoops to jump through at many different security levels. Anyways, use a Windows Share from your Windows 10 box on your "internal" network for all of the VMs you have running.
  15. Yep, exactly. I was actually more concerned about DrivePool reading the existing data. But now that I think about it, you said you are using the hidden NTFS metadata. Perhaps that's readable no matter the permissions. But yes, and for anyone else reading: if you have custom permissions on your drive pool (e.g. \\SERVER\Users\Mommy, \\SERVER\Users\Child1), then you will want to remove all of those permissions before switching to a new server. The easiest way to do that is to right-click the root of your pool, edit the permissions, and remove everyone but the defaults. Once you do that, make sure to check the box for "Replace permissions on all child objects." That will reset it all. (A command-line equivalent is sketched after the last post below.)
  16. Ok, thought of another question... What about NTFS permissions? Obviously, since I'm moving to a new system, the custom permissions I have set will no longer apply. Do I need to go through and force Everyone access on the files, move them, and fix the permissions on the new system? I recall dealing with something similar when moving from WHS's DE to DrivePool (though I didn't migrate in place; I just created a new empty pool and copied my data there).
  17. I have 4 Titans to install (at some point, when I am done gaming on a few of them)... EDIT: Actually, a mix of AMD 280X cards and Titans to start. I have some Tor projects I want to do some speed comparisons with (the 280X was far more efficient in the past than my .NET CUDA code, but with the recent Golang CUDA APIs I want to give it another shot).
  18. Isn't RemoteFX vGPUs? Not dedicated?
  19. Yeah, read up on RemoteFX... Naaaahhhhhh
  20. Ok, I want REAL-TIME. That's not a question! Hmm... I want to be careful not to disable it, then. Could you provide us with some details on how to disable it (or what to watch out for that will disable it if a certain option is set)? Sorry for all the questions...
  21. After reading one of your other replies to my threads, I'm wondering if I should focus on the Balancers related to duplication. It sounds like there are two features: * Balancing * Duplication Passes. If the default duplication plugins won't work for me (triggering a "Duplication Pass" under certain conditions), then this may be the very first Balancer plugin I create.
  22. Thanks for those links! I found the first one already, and that is what convinced me that I can break from the Windows Hyper-V host and go ESXi. I've been really wanting to set up a home CUDA lab in Linux for years, but Hyper-V doesn't pass GPUs through. And the better half would kill me if I were to run more than 1 server...
  23. Yep, I have StableBit Scanner purchased and working. I mentioned in the OP that I have it set to scan only twice a week, on Mon and Thurs mornings between 3-6 AM (my time). That is the only scheduled disk access I have that I know of (I have closely inspected all other software and packages). So with this option set to not balance automatically, when does duplication occur? What I am thinking of doing is setting it to not balance automatically and configuring one or more of the Balancer plugins to trigger when the disk is X% full. My only concern is: when does duplication occur if I copy files into a folder that needs to be duplicated? Does it duplicate ONLY during Balancing?
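
A rough worked calculation for the concurrency question in post 4, assuming each spinning drive sustains roughly 130 MB/s and taking the ~3,000 MB/s figure for the 4x 6 Gbps uplink at face value (the usable ceiling is a bit lower once SAS encoding overhead is counted):

     4 concurrent scans x ~130 MB/s ≈   520 MB/s ≈ ~17% of ~3,000 MB/s
     8 concurrent scans x ~130 MB/s ≈ 1,040 MB/s ≈ ~35% of ~3,000 MB/s
    14 concurrent scans x ~130 MB/s ≈ 1,820 MB/s ≈ ~61% of ~3,000 MB/s

On those numbers, even scanning every drive at once stays under the expander's uplink bandwidth; the uneven per-drive speeds quoted in the post may simply be drives reading different parts of the platter (outer tracks are faster than inner tracks) rather than bus contention.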
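
For the dedicated streaming team in post 10, a minimal PowerShell sketch of the built-in Windows Server 2012 R2 (LBFO) teaming commands is below. The team name, the adapter names, and the IP address are illustrative placeholders, not the actual names or addresses on that server, and the author may well have used Intel's own teaming software instead:

    # Team the four i350-T4 ports into one logical NIC (placeholder adapter names)
    New-NetLbfoTeam -Name "Team-2" -TeamMembers "Ethernet 3","Ethernet 4","Ethernet 5","Ethernet 6" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Give the team interface the address used for "Remote Streaming Access" (placeholder IP)
    New-NetIPAddress -InterfaceAlias "Team-2" -IPAddress 192.168.1.50 -PrefixLength 24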
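
For the Hyper-V "Internal" virtual network and pool share described in post 14, a minimal PowerShell sketch (run on the Server 2012 R2 host from an elevated prompt) might look like the following. The switch name, the 10.16.210.1 host address, the share name "Pool", and the VM name are assumptions for illustration; A:\ is the pool drive letter mentioned in the post, and true anonymous access still needs the extra security-policy hoops the post complains about:

    # Create the host-only (Internal) virtual switch that the VMs and the host share
    New-VMSwitch -Name "Internal" -SwitchType Internal

    # Give the host's side of that switch an address on the private 10.16.210.x network
    New-NetIPAddress -InterfaceAlias "vEthernet (Internal)" -IPAddress 10.16.210.1 -PrefixLength 24

    # Share the pool drive so VMs on the internal network can reach it
    New-SmbShare -Name "Pool" -Path "A:\" -FullAccess "Everyone"

    # Attach an existing VM's network adapter to the internal switch (placeholder VM name)
    Connect-VMNetworkAdapter -VMName "NZB-VM" -SwitchName "Internal"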
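
As a command-line alternative to the right-click "Replace permissions on all child objects" steps in posts 15 and 16, something like the following, run from an elevated prompt against the pool drive (A:\ in this setup), resets every file and folder back to the default inherited permissions. Treat it as a sketch and try it on a test folder first:

    # Replace all ACLs under A:\ with the default inherited ACLs; /T = recurse, /C = continue on errors
    icacls "A:\" /reset /T /C

If ownership left over from the old machine gets in the way, taking ownership first with takeown /F A:\ /R should clear that up.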