
gj80
Infomaniac reacted to a question: Tool to keep disks alive (prevent head parking)
-
Ginoliggime reacted to a question: OneDrive for Business maximum file limitation?
-
Ginoliggime reacted to a post in a topic: MyDigitalSSD BPX 480GB NVMe SSD - No SMART status
-
KiaraEvirm reacted to a post in a topic: MyDigitalSSD BPX 480GB NVMe SSD - No SMART status
-
I do. The speeds are amazing. I ran CrystalDiskMark and got the speeds advertised on the product page (http://www.mydigitaldiscount.com/mydigitalssd-480gb-bpx-80mm-2280-m.2-pcie-gen3-x4-nvme-ssd-mdnvme80-bpx-0512/) - ~2700MB/s read and ~1400MB/s write. The price is very competitive too. As for what Christopher mentioned - yes, it does get very hot under load. VERY hot. It thermally throttles itself, but not as aggressively as I would like - I've seen its temperature climb above 90C, though that was in an open test bench setup with no airflow. I did some research, and it appears that basically all NVMe drives get similarly hot without airflow. After I set up a quiet 120mm fan aimed in its general direction, temperatures under heavy load became more reasonable (70C or so) and it stopped thermal throttling. The airflow from a Norco case with its fan wall should be more than sufficient.
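If you want to keep an eye on temperatures without third-party tools, something like this sketch can work (built-in Storage module, Windows 8/2012 and later) - though whether the counters are populated at all depends on the drive and its NVMe driver:

```powershell
# List each physical disk with its reported temperature, if the driver
# exposes reliability counters (many NVMe drivers don't).
Get-PhysicalDisk | ForEach-Object {
    $counters = $_ | Get-StorageReliabilityCounter
    [pscustomobject]@{
        Disk  = $_.FriendlyName
        TempC = $counters.Temperature
    }
} | Format-Table -AutoSize
```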
-
gj80 reacted to a post in a topic: MyDigitalSSD BPX 480GB NVMe SSD - No SMART status
-
KiaraEvirm reacted to a question: OneDrive for Business maximum file limitation?
-
It reads SMART data fine since it's just a JBOD controller. I'm not sure about drive spin-down - I keep all my drives on.
-
Christopher (Drashna) reacted to an answer to a question: Setting up a new server - do I have to redo drive pool
-
It picks right back up. I'd attach all the disks first, make sure they're all being seen, and then install DrivePool. The point of doing the disks first is that it's easier to troubleshoot disks not showing up while DrivePool isn't busy trying to examine the attached disks, in my opinion. If you've already installed it, you can just disable the service and re-enable it later.
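For example (a sketch - the exact service name may differ on your install, so check Get-Service first):

```powershell
# Find the DrivePool service by display name (name assumed; verify locally)
$svc = Get-Service | Where-Object { $_.DisplayName -like '*DrivePool*' }

# Disable it while you attach and verify disks...
$svc | Stop-Service
$svc | Set-Service -StartupType Disabled

# ...then re-enable it once all disks are showing up
$svc | Set-Service -StartupType Automatic
$svc | Start-Service
```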
-
gj80 started following Questions before switching, Setting up a new server - do I have to redo drive pool, Server 2016 Compatibility and 4 others
-
I'm using DrivePool and StableBit Scanner on my new 2016 server with no issues, Jason. I am using the beta versions of both, however. I've also run them together on several Win10 systems.
-
gj80 reacted to an answer to a question: Tool to keep disks alive (prevent head parking)
-
Christopher (Drashna) reacted to an answer to a question: Some questions before I switch to DrivePool
-
My understanding was that Green drives park their heads regardless of any power management settings set via software - that is, that their controllers do so independently of the OS-controlled power management. I did check just now, though, and all my drives were either set to "APM -> Max Perf (no standby)" or had the APM section entirely grayed out (which was the case for a significant number of them).
-
For those who aren't already aware of the issue: some drives (particularly WD "Green" drives) aggressively park their heads (every 8 seconds) to save power, which has caused serious longevity problems with those drives. I was switching disks over to a new server I built, and I noticed the load cycle count was quite high for a lot of them. I had already used the WDIDLE3 utility on some of the drives in the past, but not all. Since using that tool is such a pain, I decided to just write a little script/service to keep the disks alive instead. Posting it here in case anyone else would like to use it.

See the "KeepDisksAlive.ps1" file for notes, how to install, how to customize it if desired, etc. Once installed, it runs as a system service. It writes to "volume paths", so there's no need to have your disks mounted to letters/folders.

I don't think it will lead to any appreciable difference in power consumption or wear and tear compared to just having disks that don't park themselves in general - I monitored my UPS power consumption and didn't see a difference. I also monitored the sum of all my drives' load cycle counts before and after to confirm it's working; I included what I used to do that in a subfolder. (Not a DrivePool issue, but I figured many running DrivePool could take advantage of this.)

Edit 12/10/2018: Re-uploaded the attachment upon request, since the old one was reporting as unavailable.

Edit 12/10/2018, part 2: It appears the forum shows a message about the attachment having been removed. Actually, you just need to be logged in to download it. If you get that message, create an account and try again and you should be good.

KeepDisksAlive-v1.0.zip
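The attachment has the real script, but the core idea is simple enough to sketch: periodically write a tiny file to every fixed volume (by volume GUID path, so no drive letters are needed) at an interval shorter than the drive's idle timer. This is an illustration of the concept, not the attached script - the interval and file name are arbitrary, and the \\?\ path syntax needs .NET 4.6.2 or later:

```powershell
# Touch a small file on every fixed volume so the heads never sit idle
# long enough to park. 5s is comfortably under the ~8s WD Green timer.
$intervalSeconds = 5
while ($true) {
    Get-Volume |
        Where-Object { $_.DriveType -eq 'Fixed' -and $_.Path } |
        ForEach-Object {
            # $_.Path is a \\?\Volume{GUID}\ path - no drive letter required
            try {
                [System.IO.File]::WriteAllText("$($_.Path).keepalive", (Get-Date).ToString())
            } catch {
                # Skip volumes we can't write to (read-only, system reserved, etc.)
            }
        }
    Start-Sleep -Seconds $intervalSeconds
}
```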
-
1) Yes, 2016 is supported.
2) Yes, that's not a problem. DrivePool works at a file level, so it isn't concerned with interfaces, block sizes, disk signatures, etc.
3) 42
-
Christopher (Drashna) reacted to an answer to a question: Non-Realtime Duplication Mechanics
-
Unsafe Direct I/O didn't work, and none of the "specific methods" did either.
-
Thanks! Sounds like everything should work fine regardless of duplication settings, then.
-
gj80 reacted to an answer to a question: Non-Realtime Duplication Mechanics
-
When realtime duplication is disabled and an already-duplicated file is updated, presumably the update is written to only one drive (out of, say, the 2 it was duplicated onto)... how does DrivePool later know which of the 2 now-different versions of the file to duplicate?

I ask because I'm writing a file auditing / change indexing / integrity-verifying program that adds a custom alternate data stream to each file with miscellaneous tracking information. When that happens, the main data stream ($DATA) doesn't change, and I'm using special overrides to avoid updating the DateModified values.

My guess is that DP looks at whichever copy has the newest date and chooses that as the new "master" file, overwriting the other one... maybe taking the file size into account as well. Since I'm not changing DateModified and I'm writing to an alternate stream (so the primary "file size" should stay the same, I think), would this end up writing to only 1 file out of a duplicated pair and never getting duplicated?

If the answer is yes: with realtime duplication *enabled* and all of the above still true, would my new data stream get duplicated to both copies of a file, in spite of the primary file size and DateModified not changing?

I'm not writing this program only for use with DrivePool, but since I obviously use it, I want to make sure it works and how I need to configure things for it to work properly. Thanks!
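For anyone unfamiliar with the technique being described, here's a rough PowerShell illustration of the write pattern - the stream name and the timestamp-restore approach are made-up stand-ins for the overrides mentioned above, not the actual program:

```powershell
$file = 'C:\data\example.bin'   # hypothetical target file
$item = Get-Item -LiteralPath $file
$modified = $item.LastWriteTimeUtc

# Write tracking info to an alternate data stream; the main $DATA
# stream (and therefore the reported file size) is untouched.
Set-Content -LiteralPath $file -Stream 'tracking' -Value '{"hash":"..."}'

# Writing a stream normally bumps LastWriteTime - put it back.
$item.Refresh()
$item.LastWriteTimeUtc = $modified

# Verify: list all streams and their sizes.
Get-Item -LiteralPath $file -Stream * | Select-Object Stream, Length
```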
-
I've switched my desktop over to Windows 10 on one of these: http://www.mydigitaldiscount.com/mydigitalssd-480gb-bpx-80mm-2280-m.2-pcie-gen3-x4-nvme-ssd-mdnvme80-bpx-0512/

CrystalDiskInfo shows the SMART status for the drive, but StableBit.Scanner_2.5.2.3103_BETA says "The on-disk SMART check is not accessible". The DirectIOTest results are attached. Thanks
-
Christopher (Drashna) reacted to an answer to a question: Questions before switching
-
There's no need to have the disks mounted at all for DrivePool's purposes, so as far as it's concerned you can just go to diskmgmt.msc and remove the letters. For snapraid, though, maybe you could mount the disks as folders? http://wiki.covecube.com/StableBit_DrivePool_Q4822624
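If you go that route, a folder mount can be done from Disk Management or with something like this (disk/partition numbers and paths are illustrative):

```powershell
# Mount a data partition into an NTFS folder instead of a drive letter
New-Item -ItemType Directory -Path 'C:\Mounts\Disk1' -Force | Out-Null
Add-PartitionAccessPath -DiskNumber 3 -PartitionNumber 1 -AccessPath 'C:\Mounts\Disk1'

# Then drop the old drive letter so only the folder mount remains
Remove-PartitionAccessPath -DiskNumber 3 -PartitionNumber 1 -AccessPath 'E:\'
```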
-
Christopher (Drashna) reacted to an answer to a question: Slow copy Speeds, 10G Lan, 2012R2E and Disk Write Cache
-
Ah, gotcha. Backups are different from RAID. If you accidentally deleted everything with a stray command, or got hit with a ransomware virus, for instance, all your data would instantly be lost with RAID. RAID's purpose is only to keep systems up in the event of hardware failure. If a copy of your data is not sitting in another system, with versioning handled, then your data is supremely vulnerable.

Also, "parity" is just shorthand for one type of RAID - RAID5 or RAID6 (or, in ZFS notation, "RAIDZ" or "RAIDZ2"). If you use DrivePool with 2x file duplication across the entire pool, it's basically functioning as RAID1 at that point. If you used no duplication, it would be equivalent to RAID0 in the sense that there's no redundancy (though DrivePool spans files across disks rather than striping them). Of course, you can customize it further at the folder level, which isn't typical with RAID, so "duplication" is probably the best way to refer to what DP does.

You wouldn't need to do anything. The disk would show up in the DrivePool UI, you'd click "add", and you would immediately have that much more space available in the pool. DrivePool balances the distribution of files across the disks automatically on its own. Because it's all file-based, there's no fixed filesystem that needs to be adjusted.
-
When you add a drive to a pool, all that happens to the drive is that a folder named "PoolPart-<IDString>" is added to the root of the drive. You can then cut and paste content from the disk into that folder (or not). Whatever you put into that folder appears in the overall merged pool. If two matching files on different disks collide, both will be added, and the second/third/etc copies will just be utilized as duplicates. Yes, everything is still a plain file - all your DrivePool data on each disk resides inside the PoolPart folder on that disk.

You should back up the pool drive, which presents the merged view of all the individual disks. You could back up each individual disk's contents, but that gets messy imo. You can use any file-based backup tool. VSS is not supported, though.

Parity isn't supported - only duplication levels. You can set duplication either globally across the entire pool or at the folder level. So you could choose not to duplicate temporary/unimportant data, set 2x on others, and 3-4x on critical document folders/etc.

A popular choice is to use DrivePool with no duplication along with "snapraid" to implement parity. That's a good choice for pools where data is rarely or never deleted/changed, but a bad choice for setups where data is frequently deleted/reworked, since snapraid is file-based RAID that isn't real-time: there's a risk of data loss in the event of a rebuild if parity hasn't been recalculated following significant data changes/deletions (additions are fine).

I would like parity too, but I much prefer the simplicity and sense of assurance I get from not having to worry about how much or how often I change my data. There really isn't a product today in which there is no compromise whatsoever to running parity - normally the compromise is performance, or complexity plus lack of flexibility. ZFS is the best parity option imo, but you don't have raw files on the disks in the event of a catastrophe, and the logistics of acquiring disks for expansion can be tricky. Personally, I prefer being able to just throw in a single drive as I need space, easily remove a single disk, and never worry about matching drive sizes. For me, sacrificing space efficiency to get those things is worth it. I understand if other people have different priorities, though.
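As a quick illustration of how transparent the layout is, you can see the (hidden) PoolPart folder on every lettered drive with a one-liner like this (a sketch; the folder-name pattern just matches what's described above):

```powershell
# Show the hidden PoolPart folder at the root of each lettered volume
Get-Volume | Where-Object DriveLetter | ForEach-Object {
    Get-ChildItem -Force -Directory -Path "$($_.DriveLetter):\" -Filter 'PoolPart*' -ErrorAction SilentlyContinue
}
```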
-
UPSs help, but power supplies still fail, and UPS units go bad as well. In a home lab that's one thing, but domain controllers are rarely in that setting. I think it's an understandable decision to force write caching off for the disk holding the AD schema - a corrupt domain can be a nightmare.