Jaga

Everything posted by Jaga

  1. Running into some issues getting CloudDrive and SnapRAID to work together. I am keeping any cloud drive I create on one of the drives that also holds a poolpart folder. The cloudpart folder resides in the root of the drive, just like the poolpart. The cloud drive is added to my DrivePool and set for duplicated data only, and I don't have any data being duplicated to it (it's essentially empty). What I'm seeing is issues when I go to run my SnapRAID job. I tell the script to stop the CloudDrive service first, then sync, then compute parity on new changes, then re-start the CloudDrive service. That usually ends up in parity issues, as *something* somehow changes in the cloudpart folder (4 files inevitably change during the space of stopping/restarting the service). That makes the scrub job that SnapRAID runs throw errors, and it marks the blocks bad for those 4 files. I also just tried running the SnapRAID sync/scrubnew jobs without stopping the CloudDrive service first, and that ended up bluescreening the server and dismounting the cloud drive after a reboot. I'm going to try again with a smaller/more manageable cloud drive. Are there any recommended procedures to get CloudDrive working with a runtime parity compute, such that it won't alter its own files for a minimum period of time? Is it actually changing its files even when the service is stopped?
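     Here's a rough sketch of the job script I described above, in case it helps anyone reproduce the test. The service name "CloudDriveService" and the snapraid location are assumptions on my part - check your own service name with `sc query` and adjust paths before trying it.

```python
# Sketch of the nightly job: stop CloudDrive, sync, scrub only new blocks, restart.
# Assumptions: the StableBit CloudDrive service is named "CloudDriveService"
# (verify with `sc query`), snapraid.exe is on the PATH, and the script runs elevated.
import subprocess
import sys

def run(*cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

try:
    run("net", "stop", "CloudDriveService")    # keep CloudDrive from touching its files
    run("snapraid", "sync")                    # update parity for new/changed files
    run("snapraid", "-p", "new", "scrub")      # verify only the newly synced blocks
except subprocess.CalledProcessError as err:
    print("SnapRAID job failed:", err, file=sys.stderr)
finally:
    run("net", "start", "CloudDriveService")   # always bring the service back up
```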
  2. For good measure (if Christopher's suggestion doesn't help), change to the "Throttling" tab and click the "Medium" or "High" sensitivity radio buttons, and make sure "Do not interfere with disk access" is checked. It shouldn't matter when Scanner isn't actively scanning, but it also doesn't hurt. Additionally Background Priority should be checked by default.
  3. Jaga

    Pool Activity Monitoring

    Even Windows' Resource Monitor can't do it properly. Whenever I want to see activity, I just open up the DP software itself.
  4. Sounds like a good candidate for a custom installation script. You can either accept DP's defaults during install, or (if you know you want something else) choose 'custom' and pick which balancers/features/etc. you want enabled/disabled before the install actually kicks off.
  5. Oh well hell, I had no idea your SASLP controller was that young. Based on the driver set for it and some rather light reading elsewhere, I assumed it was the generation just after the SAT2 controller (3.0 Gbps per port vs 1.5). It probably is, but yours has had light duty on it for ~48 months. If it's only been 4 years since you purchased it, it couldn't have been on the shelf more than 4 years longer than that, and is probably still in great shape. That's awesome to hear. DrivePool is absolutely the right software to 'grow' your storage, as you figured out. Remove old drives as they either get too old, or die. That's why I switched from a NAS to internal drives in a pool - it's just a much better solution, with far greater ability to protect the data and architecture flexibility.

     ========================================

     Okay, so.. I did more digging on the expander card. The first thing I wanted to know was the largest drive it was ever tested with and certified for (i.e. addressable space). The "HP 4TB 3G SATA 7.2k rpm LFF (3.5-inch) Midline (694374-B21)" drive is 4TB and certified to work with the Expander (link), so it should work fine with your 4TB drives. The only consideration there is the Advanced Format your drives were manufactured with. The reason is because "Beginning in late 2009, accelerating in 2010 and hitting mainstream in 2011, hard drive companies are migrating away from the legacy sector size of 512 bytes to a larger, more efficient sector size of 4096 bytes, generally referred to as 4K sectors, and now referred to as the Advanced Format by IDEMA (The International Disk Drive Equipment and Materials Association)." (link) That's probably after the manufacture of your SASLP controller, so it's a total guess if it will talk to them properly. However, I suspect that the AF standard is backwards-compatible (much like the SATA/2/3 standards are), so my assumption is that it will work fine with AF drives. See this link for a bit more confirmation in that regard.

     The second thing is inter-operability between the SASLP controller you're using and the Expander. This post seems to confirm my suspicions that they will work together just fine. More confirmations found at these links:

     https://hardforum.com/threads/file-media-server-build-flexraid-24tb-build.1546329/#post-1036163871
     https://hardforum.com/threads/file-media-server-build-flexraid-24tb-build.1546329/#post-1036173588

     Honestly, I think you're good to go with the Expander card and your SASLP controller. Given the prices and the seller ratings for the two you dug up, the one with cables looks to be a great choice.
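     If you want to double-check what sector size those AF drives are actually presenting to Windows once they're behind the controller/expander, the built-in fsutil command will show both the logical and physical values. A quick sketch (Windows 8 / Server 2012 or newer, elevated prompt; swap in your own drive letter):

```python
# Show logical vs. physical sector size (512e vs. 4Kn) for a volume, using the
# built-in `fsutil fsinfo sectorinfo` command (Windows 8 / Server 2012+, run elevated).
import subprocess

volume = "E:"   # example drive letter -- change to the disk you want to inspect
out = subprocess.run(
    ["fsutil", "fsinfo", "sectorinfo", volume],
    capture_output=True, text=True, check=True
).stdout

for line in out.splitlines():
    if "BytesPerSector" in line:   # prints the logical and physical sector size lines
        print(line.strip())
```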
  6. Don't know if you've tried updating the firmware on those two cards, but here's a link to the last revision Supermicro distributed: ftp://ftp.supermicro.com/Firmware/AOC-SAT2-MV8/ I also nabbed a copy of the original installation CD in ISO format. I upped it to a server, here's a link to it for you. It should include original drivers for the card(s). Officially SM stopped supporting their own driver sets for the card as of Windows Server 2003 (link). AOC-SAT2-MV8_Rev_102.iso

     Part of the problem might also be the AF drives - Advanced Format wasn't officially supported until Windows Vista - which was after official support from SM on drivers for these cards stopped. But given you can't even poll individual drives for things like SMART data, serials or temps.. that may be a load of hogwash. Yes - I'm mentally regurgitating whatever pops into my head.

     If the drives you are trying to use are SAS instead of SATA, you'd need to use a reverse-breakout cable with each drive. The controller only has SATA ports, and for those to talk to a SAS drive you need to use 2 SATA ports -> 1 SAS port on the drive. Sidenote: I read that the Sonnet driver handles 3TB drives, for some reason. Most people seem to use them with 2TB en masse however.

     So at this point, if the firmware update + trying the drivers from the ISO doesn't help, and you've covered bases on the other stuff... I think you'd be spinning your wheels wasting time trying to continue using those old controllers. Especially when there are somewhat inexpensive alternatives. It seems like the expander card may be your cheapest bet, though buying on eBay means very little recourse if it turns out to be incompatible or DOA. And - that AOC-SASLP-MV8 controller is also extremely old - are you sure you want to put more money into it as a solution on a server rebuild? Theoretically it's well past end-of-life already, and could drop dead any time.

     My nugget of wisdom after mushing all this information around: I tend to try and ensure that my data is protected by hardware I know I can rely on, even if it costs a bit of money. My WD Red 4TBs are around 4.5 years old now, and despite them having a low duty cycle I'm looking to retire them to parity drives soon and replace with 7-10 8TB drives for a new Pool, including a new controller and mSAS breakout cables. Sometimes the old hardware just isn't something you can trust. If you don't have the budget to replace key pieces though... just start calling it an adventure!
  7. Are the drives connected to your controller card recognized in AHCI mode, or RAID mode? I find AHCI to be much better at handling individual drives, as opposed to running them through RAID firmware with additional ops. Beware that if they have volumes on them and you switch from RAID<->AHCI, the data can get corrupted (or just not be visible). If they are running in RAID and you want to use them as stand-alone drives for a Pool (I assume you do), try removing the volumes completely, switching them to AHCI, and then re-creating new volumes to see if they then serialize properly. You might just be running into driver issues for WHS2011. You've probably already done a crawl for drivers, but with an OS that's 7 years old you aren't likely to find anything new in that regard. Again though - removing drivers for the controller and switching the drives to AHCI might be all WHS needs to install default working drivers for them and fix the serialization problem. It all depends on what mode that controller has them on. Edit: Scratch my prior thoughts about AHCI vs RAID mode - I took a look and your controller only supports JBOD (and software RAID), so there's no mode switching to be done with it. It is dated at this point, but there should be some WHS or Server 2012 drivers available. Going to go do some digging on your problematic controller and see what drivers I can find...
  8. You'll want to get the "Disk Space Equalizer" plugin for DrivePool from this page, and enable it to force an immediate re-balance. When it's done, turn it off again and let DrivePool do automatic balancing from then on.
  9. I think if you move the two drives from the HTPC to the Desktop PC and run DrivePool, it will "see" the hidden poolpart directories and auto-make *another* pool on that machine. You could then use both of those pools as child pools when you make a new top-level pool in DrivePool's interface, effectively combining all the files into one final pool without doing any additional work. Reading a post that Christopher made seems to confirm this - the only thing you'll have to adjust after the move is the balancing settings, and pool drive letter(s).
  10. Without having access to the License database at Covecube, the only thing I can suggest would be uninstalling the current version, re-installing the last Beta version you had installed previously, activating the license in it (if necessary) and then de-activating the license right after. If that works, you'd then be able to uninstall the Beta and re-install the new version clean with a new activation.
  11. I recently found out these two products were compatible, so I wanted to check the performance characteristics of a pool with a cache assigned to its underlying drives. Pleasantly, I found there was a huge increase in pool drive throughput using Primocache and a good-sized Level-1 RAM cache. This pool uses a simple configuration: 3 WD 4TB Reds with 64KB block size (both volume and DrivePool).

     Here are the raw tests on the DrivePool volume, without any caching going on yet:

     After configuring and enabling a sizable Level-1 read/write cache in Primocache on the actual drives (Z: Y: and X:), I re-ran the test on the DrivePool volume and got these results:

     As you can see, not only do both pieces of software work well with each other, the speed on all DrivePool operations (the D: in the benchmarks was my DrivePool letter) was vastly greater. For anyone looking to speed up their pool, Primocache is a viable and effective means of doing so. It would even work well with the SSD Cache feature in DrivePool - simply cache the SSD with Primocache, and boost read (and write, if you use a UPS) speeds. Network speeds are, of course, still limited by bandwidth, but any local pool operations will run much, much faster. I can also verify this setup works well with SnapRAID, especially if you also cache the Parity drive(s).

     I honestly wasn't certain if this was going to work when I started thinking about it, but I'm very pleased with the results. If anyone else would like to give it a spin, Primocache has a 60-day trial on their software.
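     I used a standard disk benchmark for the screenshots, but if you just want a rough before/after comparison on your own pool, something as simple as this will show the difference. It's a crude sequential-only sketch (the D:\ path is just an example) - use a test file larger than your Level-1 cache if you want to see the uncached numbers.

```python
# Crude sequential throughput check: write a large file to the pool, read it back, time both.
# Not a real benchmark -- just enough to compare the pool before/after enabling Primocache.
import os
import time

TARGET = r"D:\throughput_test.bin"   # example path on the pool drive
SIZE_MB = 8192                       # make this bigger than your L1 cache
CHUNK = b"\0" * (1024 * 1024)

start = time.time()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    os.fsync(f.fileno())             # make sure the writes actually hit the drive stack
print(f"write: {SIZE_MB / (time.time() - start):.0f} MB/s")

start = time.time()
with open(TARGET, "rb", buffering=0) as f:
    while f.read(1024 * 1024):
        pass
print(f"read:  {SIZE_MB / (time.time() - start):.0f} MB/s")

os.remove(TARGET)
```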
  12. Jaga

    Switch from DriveBender

    A - Yes, it is - any OS that can read the file system you formatted the drive with (assuming NTFS in this case) can see all its files. The files are under a "poolpart..." hidden folder on each drive, fully readable by Windows. B - Yes, it will work with BitLocker. This is a quote directly from Christopher on these forums: "You cannot encrypt the DrivePool drive, but you CAN encrypt the disks in the pool." (Link)
  13. Small sidenote from me on the drive letters, since I changed to a defragmenter that requires letters assigned and I hate to see all the pool drives in Explorer. I followed the instructions from this article to selectively hide the drives I wanted. They can still be accessed by Win-R (run dialog) and typing them in there. Not sure if it still works in 8/10, but I'd wager it does. It was the perfect setup for my server.
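     For reference, the usual way to selectively hide drive letters (I believe it's what that article walks through, but either way it works) is the Explorer "NoDrives" policy value - a bitmask where bit 0 is A:, bit 1 is B:, and so on up to Z:. Here's a small sketch that just computes the mask for the letters you want hidden; the letters below are examples.

```python
# Compute the Explorer "NoDrives" bitmask for hiding specific drive letters.
# Bit 0 = A:, bit 1 = B:, ... bit 25 = Z:. Create the result as a DWORD value named
# NoDrives under HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer,
# then restart Explorer. The hidden drives stay reachable by typing the letter into Run.
hide = ["X", "Y", "Z"]   # example letters -- substitute your pool drive letters

mask = 0
for letter in hide:
    mask |= 1 << (ord(letter.upper()) - ord("A"))

print(f"NoDrives (decimal): {mask}")
print(f"NoDrives (hex):     0x{mask:08X}")
```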
  14. I use BitDefender, but I set the actual DrivePool itself to excluded (no scans directly on the DrivePool letter or directories). I do this for compatibility, since any files I add to the pool have already "landed" and been scanned on machines on the same network. In other words, if you don't download/save files directly to the pool from the web, you don't need to run real-time scans on the pool. Just have a separate machine or directory that IS scanned, which takes all new content from the web first, and you're pretty safe. I also never run programs directly from the DrivePool. I instead copy them to the C: drive of the machine I plan to run them from. That copy is protected by the AV software, and so anything coming off the Pool is re-scanned then. From that perspective if you can set exclusions in whatever AV you are using, you can pick whichever you like, Defender included. Using this technique and a tuned network layer, I've managed to get my storage server to take network writes to the Pool at around 112-115 MB/s (fully saturated Gigabit). No AV scanning, full speed.
  15. DrivePool will work (as you found out) with drives that have existing data on them. To get the data moved onto the pool so it isn't just sitting on a drive, give your DrivePool drive a letter and simply move (cut/paste) the files over to it. It'll start distributing those files to all drives in your pool, according to whatever balancing policy you set up - the default is evenly between all pool drives. Any files that started and ended up on the same physical drive just use the NTFS "move" operation, and the MFT is simply updated instantly (like moving a file between folders on the same drive). Once you have just the pool directory left (which is hidden on each pool drive) and no un-pooled files on them, you can un-assign the old drive letter if you like. I prefer to mount them as folders under my C:\Pool_Volumes folder, so I can quickly and easily get to all the drives outside of the Pool.
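     If you'd rather script the folder mounts than click through Disk Management, the built-in mountvol command handles it. A sketch (run elevated; the folder name and volume GUID below are placeholders - run mountvol with no arguments first to list the real \\?\Volume{...}\ names):

```python
# Mount a volume into an empty NTFS folder instead of a drive letter, using mountvol.
# Run `mountvol` with no arguments first to list volume GUIDs; the values below are placeholders.
import os
import subprocess

# Step 1: list volumes and their \\?\Volume{...}\ names
print(subprocess.run(["mountvol"], capture_output=True, text=True).stdout)

# Step 2: mount one of them under the pool-volumes folder (elevated prompt required)
mount_point = r"C:\Pool_Volumes\Disk1"   # example folder
volume_name = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"   # placeholder GUID
os.makedirs(mount_point, exist_ok=True)
subprocess.run(["mountvol", mount_point, volume_name], check=True)
```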
  16. Editing my post for a bit more clarity: You can use duplication and then simply take the drive out of the machine, and revert duplication to x1. However according to the DrivePool feature list, when you remove a disk from the pool via the software options: Provided you have enough available space on other disks in the pool, then Christopher is correct in that a removal will transfer all files to other disks prior to removing the drive you wanted. I'm not sure which option is faster. Either should work. Christopher's advice is probably the best you're going to get.
  17. Ask someone what the term "spinning rust" means in 20 years and they may look at you funny. Let us know how it goes WickedLlama!
  18. Yep, basically any secure-erase utility writes whatever pattern you want to the sectors. The more advanced ones will do multi-writes, toggling bits as they go. I haven't had to use one in a while, but it should do the trick here. I usually retire drives when sectors start to go bad, just as a good disaster-preventative practice.
  19. Good to know, learn something new every day! Now that is an awesome idea.
  20. If the sector has no data in it currently, a secure erase of all empty sectors on the drive would work. Some tools available to do that can be found here: https://www.raymond.cc/blog/make-your-recoverable-datas-unrecoverable/ If it has data in it, simply finding the file via a block map tool and deleting it (removing it from the MFT) would mark the sector empty, allowing you to then do a secure erase which would force a write to it.
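     If you'd rather not grab a third-party tool for the free-space pass, Windows' built-in cipher command does the same job - it overwrites all free space on a volume with zeroes, then ones, then random data. A sketch (the drive letter is an example; run elevated and expect it to take a while on a large drive):

```python
# Overwrite all free space on a volume with the built-in cipher tool (three passes:
# 0x00, 0xFF, random). The drive letter is an example -- point it at the volume that
# holds the freed sector, and run from an elevated prompt.
import subprocess

volume = "E:\\"   # example volume root
subprocess.run(["cipher", f"/w:{volume}"], check=True)
```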
  21. DrivePool, to my understanding, uses a proprietary file system, CoveFS. It should be fully compatible with NTFS standards, but Oculus is trying to create a temp folder structure, which is where the error in your output comes in: The technical why of this error isn't something I can explain; I'd think Alex or Christopher could. You may be right about Oculus updates, in which case I'd suggest creating that "tmp" directory yourself by hand, which might solve future issues. The symlink you made may cover it however, since the OS generally doesn't care about symlinks vs hardlinks for normal file/folder structures. Do you have a game on Oculus that would typically install, and then need an update after, like perhaps an optional DLC or addon that you could test after moving/symlinking its main install? That might be a good test, until you get a better answer from the team here.
  22. Jaga

    help with standby

    Are your parity drives the same make/model as your data drives? Some drive firmware aggressively tries to spin drives down (park heads/sleep) while other firmware is more relaxed. I had to patch my WD Reds since they were parking heads after just 8 seconds of inactivity, raising the load cycle count unnaturally in a NAS.
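     If you want to check whether a drive is parking aggressively, keep an eye on SMART attribute 193 (Load_Cycle_Count) and see how quickly it climbs. A quick sketch using smartmontools' smartctl, assuming it's installed and on the PATH (the device name is an example - `smartctl --scan` will list yours):

```python
# Print the Load_Cycle_Count SMART attribute (ID 193) so you can watch how fast it rises.
# Requires smartmontools; use `smartctl --scan` to find the right device name.
import subprocess

device = "/dev/sda"   # example device name from `smartctl --scan`
out = subprocess.run(
    ["smartctl", "-A", device],
    capture_output=True, text=True
).stdout

for line in out.splitlines():
    if "Load_Cycle_Count" in line:
        print(line.strip())
```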
  23. Install the game to your Oculus drive, move the folder to your DrivePool drive, create a Symbolic link on the Oculus drive where you moved the folder from that points to the folder on the DrivePool drive. It's worth a try, since I doubt Oculus requires hardlinks for installed software. If you don't want to do it by hand, you can try software like Junction Link Magic.
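     If you'd rather script it than use Junction Link Magic, it's only a couple of lines; the paths below are examples, and creating symlinks on Windows needs an elevated prompt (or Developer Mode turned on).

```python
# Move an installed game to the pool, then leave a directory symlink behind on the
# Oculus drive pointing at the new location. Paths are examples -- adjust to your setup.
import os
import shutil

oculus_path = r"C:\Oculus\Software\some-game"   # where Oculus installed the game
pool_path   = r"D:\OculusGames\some-game"       # where it will live on the DrivePool drive

shutil.move(oculus_path, pool_path)                           # move the game folder
os.symlink(pool_path, oculus_path, target_is_directory=True)  # leave the symlink behind
```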
  24. Jaga

    Error: Not enough data.

    Updated the ticket today. I was rather... verbose. Sorry! What I added may change what you guys are testing on the ticket, and where I go with my future troubleshooting. Thought you'd want to know. Trying not to waste your time when what I'm seeing changes.
  25. Jaga

    Error: Not enough data.

    I went and updated the ticket and re-submitted a full set of information with BitDefender off, which was successful this time. Appears it was the culprit in the first round of false positives. Small bit of additional info: I left CloudDrive alone after re-enabling uploads, for a good 30 minutes. Still showing "To upload: 252 KB", and it doesn't appear to be connecting to the FTP server (but the # of errors keeps rising with uploads turned on). If I had to guess, I'd say there's a minimum data threshold for triggering an actual upload (like 1 MB), and 252 KB isn't enough to meet it. But since a small .txt file could be as little as 1 KB, that should still be a valid upload trigger. I still haven't dropped any files on the cloud drive, in the interest of diagnosing it as-is (empty). It is not part of my DrivePool for now either (though I originally added it and removed it, for testing).