gj80

Members
  • Posts: 42
  • Joined
  • Last visited
  • Days Won: 4

Reputation Activity

  1. Thanks
    gj80 got a reaction from Infomaniac in Tool to keep disks alive (prevent head parking)   
    For those who aren't already aware of the issue - some drives (particularly WD "Green" drives) aggressively park their heads (every 8 seconds) to save power. This has caused serious longevity issues with those drives.
      I was switching disks over to a new server I built, and I noticed the load cycle count was quite high for a lot of drives. For some, I had already used the WDIDLE3 utility in the past, but not for all of them. Since using that tool is such a pain to do, I decided to just write a little script/service to keep the disks alive instead.
     
    Posting it here in case anyone else would like to use it. Edit the "KeepDisksAlive.ps1" file for notes, how to install, how to customize if desired, etc. Once installed, it runs as a system service. It writes to "volume paths", so there's no need to have your disks mounted to letters/folders.
     
    I don't think it will lead to any appreciable difference in power consumption or wear and tear compared to disks that simply don't park themselves. I monitored my UPS power consumption and didn't see a difference. I also monitored the sum of all my drives' load cycle counts before and after to confirm it's working; I included what I used to do that in a subfolder.
     
    (Not a DrivePool issue, but I figured many running DrivePool could take advantage of this)
    Edit 12/10/2018: Re-uploaded attachment upon request since the old one was reporting not being available.
    Edit 12/10/2018 Part2: It appears that the forum gives a message about the attachment having been removed. Actually though, you just need to be logged in to download it. If anyone else gets that message, just create an account and try again and you should be good.
     
    KeepDisksAlive-v1.0.zip
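    The keep-alive approach described above (small periodic writes so the drive's idle timer never reaches its head-park timeout) can be sketched roughly as follows. This is an illustrative Python sketch, not the actual KeepDisksAlive.ps1; the interval, marker file name, and function names are all assumptions.

```python
import os
import time

# Illustrative keep-alive loop (an assumption, not the real KeepDisksAlive.ps1):
# touch a small marker file on each target volume often enough that the drive's
# idle timer (e.g. the 8-second WD Green head-park timeout) never expires.
KEEPALIVE_INTERVAL = 5   # seconds; must stay below the park timeout
MARKER_NAME = ".keepalive"

def touch_keepalive(volume_path):
    """Write and flush a tiny file so the disk sees real I/O."""
    marker = os.path.join(volume_path, MARKER_NAME)
    with open(marker, "w") as f:
        f.write(str(time.time()))
        f.flush()
        os.fsync(f.fileno())  # push the write past the OS cache to the disk
    return marker

def keep_disks_alive(volume_paths, iterations=None):
    """Touch every volume, forever by default or `iterations` times."""
    count = 0
    while iterations is None or count < iterations:
        for vol in volume_paths:
            touch_keepalive(vol)
        count += 1
        if iterations is None or count < iterations:
            time.sleep(KEEPALIVE_INTERVAL)
```

    The fsync matters: a write that only lands in the OS cache wouldn't necessarily generate disk activity, which is the whole point here.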
  2. Like
    gj80 got a reaction from Ginoliggime in OneDrive for Business maximum file limitation?   
    Hi,
     
    I'm doing some research regarding possibly deploying this for some of our serverless clients that want a mobile shared document repository. I had one question though...
     
    My understanding from research I did a few months back was that OneDrive for Business has a "20,000 item sync limit", and SharePoint team libraries have a 5,000-item limit. This is an issue for a lot of our users who have more files than that. Does CloudDrive inherit the same limitation, or is the way files are stored (by chunk?) block-based, and thus doesn't run into that?

    Oh, a few more questions...

    Backups - when using clouddrive, is running a backup from a computer the most practical solution for backups, or is one of the cloud-based backup services for 365/sharepoint also viable? Does clouddrive support VSS?

    Change management - when multiple people edit a file, how is it determined which version is committed? The last one committed takes precedence?
    Edit: on second thought, I think I may have gotten the wrong idea about this - this is entirely a personal drive, right? No sharing? Still curious about the file limitation/backups though.
     
    Thanks!
  3. Like
    gj80 got a reaction from Ginoliggime in MyDigitalSSD BPX 480GB NVMe SSD - No SMART status   
    I've switched my desktop over to Windows 10 on one of these:
     
    http://www.mydigitaldiscount.com/mydigitalssd-480gb-bpx-80mm-2280-m.2-pcie-gen3-x4-nvme-ssd-mdnvme80-bpx-0512/
     
    CrystalDiskInfo shows the SMART status for the drive, but StableBit.Scanner_2.5.2.3103_BETA says "The on-disk SMART check is not accessible"
     
    The DirectIOTest results are attached. Thanks

  5. Like
    gj80 reacted to Christopher (Drashna) in MyDigitalSSD BPX 480GB NVMe SSD - No SMART status   
    We haven't fixed the issue here (e.g., there is likely a new method for querying the data that we need to research and add).
     
    That said, NVMe is much faster than SATA SSDs, so you'll get that much better performance.
    However, I've seen a few people report that they do get rather hot, so you'll want to have good airflow in the system or you may see performance throttling. 
  7. Like
    gj80 got a reaction from Christopher (Drashna) in Setting up a new server - do I have to redo drive pool   
    It picks right back up. I'd attach all the disks first, make sure they're all being seen, and then install DrivePool. The point of attaching the disks first is just that, in my opinion, it's easier to troubleshoot disks not showing up while DrivePool isn't busy trying to examine the attached disks. If you've already installed it, you can just disable the service and re-enable it later.
  8. Like
    gj80 reacted to Christopher (Drashna) in Tool to keep disks alive (prevent head parking)   
    Well, this specifically interacts with the firmware.  Pretty much everything under "disk control" does.  
     
    It's effectively the same as running "wdidle3" on the drives.
     
     
    But as you've noticed, not every drive supports it, so it varies from drive to drive.  Even the same model may have different firmware versions, and that can affect this. 
     
    As for the displayed settings, these may not be accurate, as the drive doesn't always provide enough info.  However, hitting "set" should update this anyway.
     
     
    But for the drives that don't allow you to do this or WDIDLE3, the script above isn't a bad idea.
  9. Like
    gj80 got a reaction from Christopher (Drashna) in Some questions before I switch to DrivePool   
    1) Yes, 2016 is supported
     
    2) Yes, that's not a problem. DP works at a file-level, so it isn't concerned with interfaces, block sizes, disk signatures, etc.
     
    3) 42
  10. Like
    gj80 got a reaction from Christopher (Drashna) in Non-Realtime Duplication Mechanics   
    Thanks! Sounds like everything should work fine, regardless of duplication settings then.
  11. Like
    gj80 reacted to Christopher (Drashna) in Non-Realtime Duplication Mechanics   
    If real-time duplication is disabled, then it depends on the file, and on what you're doing.
    If the file is already duplicated, then any modification to the file is done in parallel to all copies of the file. This includes writing to the file or moving it around. Newly created files are not duplicated until a duplication pass occurs (IIRC, 1 AM daily). If you're reading the file, then this is handled "normally". If read striping is enabled, then it depends on... well, more factors.
     
    Additionally, when duplicating the file, IIRC, we do set the modify time to be the same.
    Specifically, if there are multiple copies, during a duplication pass or when accessing the file, we check the modified time. If that doesn't match, we may check the CRC of the file.  If that doesn't match either, then we flag a duplication mismatch for the user... otherwise, we update the info on both files to match the newest file.
     
    (IIRC, I'm not 100% sure about that)
     
     
    As for the Alternate data stream, I'm not sure.  However, I do believe that yes, it would get duplicated to both disks (as ADS are just a special file type, essentially). 
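    A rough sketch of the reconciliation logic described above (compare modified times first, fall back to a content checksum, and only flag a mismatch when both differ) might look like this. The function names and return values are illustrative assumptions, not DrivePool's actual implementation:

```python
import zlib

def crc32_of(path):
    """CRC-32 of a file's contents, read in chunks."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def reconcile(copy_a, copy_b, mtime_a, mtime_b):
    """Decide what a duplication pass should do with two copies of a file.

    Returns "in_sync", "update_to_newest", or "mismatch".
    Illustrative only -- not DrivePool's actual algorithm.
    """
    if mtime_a == mtime_b:
        return "in_sync"
    # Times differ: check whether the contents still agree.
    if crc32_of(copy_a) == crc32_of(copy_b):
        # Same data, stale metadata: propagate the newest timestamp.
        return "update_to_newest"
    # Different data *and* different times: surface this to the user.
    return "mismatch"
```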
  12. Like
    gj80 got a reaction from Christopher (Drashna) in Questions before switching   
    There's no need to have the disks mounted at all for DrivePool's purposes, so as far as it's concerned, you can just go to diskmgmt.msc and remove the drive letters.
     
    For snapraid, though, maybe you could mount the disks as folders? http://wiki.covecube.com/StableBit_DrivePool_Q4822624
  13. Like
    gj80 got a reaction from Christopher (Drashna) in Slow copy Speeds, 10G Lan, 2012R2E and Disk Write Cache   
    UPSs help, but power supplies still fail, and UPS units go bad as well. In a home lab/etc situation that's one thing, but domain controllers are rarely in that setting. I think it's an understandable decision to force write caching off for the disk holding AD schema. A corrupt domain can be a nightmare.
  14. Like
    gj80 got a reaction from Christopher (Drashna) in Backups   
    I use Crashplan, and I'm just finishing off my first full backup pass with Syncovery to Amazon Cloud Drive (ACD).
     
    I just built out a new Norco 4224 server, which I have sitting beside my current one. Once the Syncovery pass finishes, I'm going to swap the drives into that and then put my current Norco 4220 offsite and run Syncovery sync against that as well. As Christopher said, I'm skittish about only relying on cloud backup. Besides, it gave me an excuse to upgrade my server! lol
  15. Like
    gj80 reacted to Christopher (Drashna) in SSD Optimizer Balancing Plugin   
    No, this would not work. StableBit DrivePool does detect which physical drive the volumes are located on, and will actively avoid placing the duplicates on the same physical disk. 
     
    (this is also part of why we don't support Dynamic Disks, as this detection gets INCREDIBLY complex when allowing Dynamic Disks). 
     
    You'd need to use three different SSD drives to accomplish this. 
     
    You could also get a few M.2 cards, and a PCI-e  adapter card. 
     
    This would use up a lot less space, wouldn't require SATA or power hookups, and allow you to install multiple cards, side by side, getting PLENTY of drives in there. 
  16. Like
    gj80 reacted to Christopher (Drashna) in Feature Request: DrivePool UI integration suggestions + Scanner "never warn for this disk" option   
    "open in scanner" request:
    https://stablebit.com/Admin/IssueAnalysis/27281
     
    As for the second request, that's a bit more complicated.  In StableBit DrivePool, we're pulling the volume label, and displaying that. So you could change the volume label, and it will be reflected. 
    In StableBit Scanner, we display the disk name, as it shows up in the Device Manager, basically.   

    So these are two very different values.  However, I'm sure we could figure something out. 
    https://stablebit.com/Admin/IssueAnalysis/27282
     
     
    As for the last, when a SMART warning does come up, we do have the option to "permanently ignore" the settings that are flagged.  Though, this is more of a reactive measure than a setting. 
    https://stablebit.com/Admin/IssueAnalysis/27283
  17. Like
    gj80 got a reaction from Christopher (Drashna) in 8 port sata card for Windows 10   
    http://www.newegg.com/Product/Product.aspx?Item=N82E16816101792&cm_re=AOC-SAS2LP-MV8-_-16-101-792-_-Product
     
    I've bought and used many of those on several builds of mine with Windows 10 (and also Server 2016) with no issues. It is an HBA with no RAID mode at all, so there's no need to flash it, and it's dirt cheap. I've also been using these in very heavily stressed systems that process upwards of 20TB reads per week and 5-10TB writes over a long period, so I'm quite happy with their stability.
     
    If you were to try to use it with an expander, it might get hairy and I wouldn't suggest it (I just don't know what it would or would not work with), but if you're directly connecting it to disks/backplanes then it should be fine.
     
    The Windows 10/2016 drivers are available on Supermicro's website. It doesn't *say* it supports 10/2016 last I checked, but if you download the driver, the readme specifically states that the latest update is Microsoft WHQL certified for 10 + 2016, and it has worked for me on several systems running both.
     
    They come with a full and low profile PCI bracket.
     
    Oh, and they're 8-lane PCI-E 2.0 cards.
  18. Like
    gj80 reacted to Christopher (Drashna) in Scanner can't detect drives in Storage Spaces   
    Well, for the first, ReFS may be a good step towards this. It's not a perfect solution, but it may help out. 
     
    As for the integrity checking for the pool, this is a heavily requested feature, and is something we may add in the near future (it's something I've been pushing for, because there is a big demand for it, and it would be fantastic!)
     
     
    As for the Storage Spaces stuff, the biggest problem is how to display it in a meaningful way.  It's something that's come up frequently, and it is something that will be addressed one way or another.  We just don't have an ETA for it. 
  20. Like
    gj80 got a reaction from Quinn in [HOWTO] File Location Catalog   
    Hi,
     
    I really liked the idea of doing this, but as others had also mentioned, I had issues with the maximum path limitation. I also wanted to log other information, like the drive model, serial number, etc. I also wanted to schedule it as a daily task, and to have it automatically compress the resulting files to save disk space.
     
    I wrote a powershell script to do all of that. It relies on the dpcmd utility's output (for which you will need a recent version). DrivePool itself isn't limited by the maximum path limitation, and thus the dpcmd log output also isn't constrained by the path limitation. The script takes this log output, parses it with regex matches, retrieves associated disk information, and then writes out to a CSV. It then compresses it. The header of the file has a command you can paste into a CMD prompt to automatically schedule it to run every day at 1AM.
     
    Please edit the file. Two variables need to be customized before using it, and the file describes requirements, where to put it, how to schedule it, what it logs, etc.
     
    If you want to do a test run, you can just edit the two variables and then copy/paste the entire file into an elevated powershell window. The .CSV (generated once the .LOG is finished) can be viewed as it's being produced.
     
    Also, you might want to hold off playing with this if you're not familiar with powershell scripting/programming in general/etc until a few other people report that they're making use of it without any issues.
     
    @Christopher - If you want me to make this a separate post, just let me know. Thanks
     
    DrivePool-Generate-Log-V1.51.zip
     
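    The parse-and-export step the script performs (regex over log lines, then a CSV write) can be sketched in outline. The log line format, regex, and column names below are invented for illustration and do not reflect dpcmd's real output:

```python
import csv
import io
import re

# Hypothetical log line format -- NOT dpcmd's real output. Each line is
# assumed to look like:  [PoolPart.xxxx] D:\path\to\file
LINE_RE = re.compile(r"^\[(?P<poolpart>PoolPart\.[^\]]+)\]\s+(?P<path>.+)$")

def log_to_csv(log_lines, out_stream):
    """Parse matching lines with a regex and write them out as CSV."""
    writer = csv.writer(out_stream)
    writer.writerow(["PoolPart", "Path"])
    for line in log_lines:
        m = LINE_RE.match(line.strip())
        if m:  # skip headers/blank lines that don't match the pattern
            writer.writerow([m.group("poolpart"), m.group("path")])

# Usage: parse a sample line into an in-memory CSV
buf = io.StringIO()
log_to_csv(["[PoolPart.abc123] D:\\Media\\movie.mkv", "noise"], buf)
```

    The real script additionally joins in per-disk details (model, serial number) and compresses the result, which is omitted here for brevity.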
  21. Like
    gj80 got a reaction from Christopher (Drashna) in Windows 10 Anniversary Update and Long File Paths   
    I'm so glad I noticed this topic! The character limit has plagued me since I first started using Windows computers... I rely on folder structure a lot at home and at work, and I bump into it constantly. My clients bump into it quite often as well, and they end up losing data inadvertently - it's a nightmare. I understand it could cause application compatibility issues when directly referencing those locations, but at least Explorer can address these files now. Without Explorer itself supporting it, any under-the-hood changes were kind of moot for most applications.
     
    I was relieved to learn that DrivePool itself has supported extended paths for a while, though - I had opened a topic asking about that, since the pool already lengthens the path of existing data on each disk (it's stored in specially named folders).
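    For context on the limit itself: the extended-length form that lifts the classic 260-character MAX_PATH restriction is the \\?\ prefix. A tiny helper to apply it (pure string handling, sketched here in Python; the function name is my own) might look like:

```python
def extended_length(path):
    r"""Prefix a Windows path with \\?\ so APIs accept paths past MAX_PATH.

    Extended-length paths must be absolute; UNC paths use the
    \\?\UNC\server\share form instead of the plain \\?\ prefix.
    """
    if path.startswith("\\\\?\\"):
        return path                      # already extended-length
    if path.startswith("\\\\"):          # UNC path: \\server\share\...
        return "\\\\?\\UNC\\" + path[2:]
    return "\\\\?\\" + path
```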