Everything posted by Jaga

  1. Yep - I edited my post (probably while you were reading) since I saw that step in the procedure. There might be something extra required because it's an on-board chip rather than an add-in card. You can always disconnect the drives, switch to RAID mode, then reboot again and see if it lets you in.
  2. Only if you have files that use non-standard English alphabet characters, like é. I had to either rename the file(s) or move them off the pool for a sync to finish correctly. With my ~10TB data set, I only had 2 that qualified for renaming. It's not a big issue; I just thought I'd point out that getting support on free products is very hit-and-miss sometimes.
  3. This post goes over how to flash that controller in pretty good detail, and it may be helpful. The person who did it used the UEFI shell to accomplish the flash (I just did the same with my 9201-16e - it works well). The P20 firmware is stable - the CRC issues they had with the early 20.00.00.00 versions have been fixed, so go ahead and use it if you like. 20.00.07.00 is the latest. Be sure to use the files in the IT/UEFI folder from the package when inside the UEFI shell. You might still need to boot into the controller's BIOS with CTRL-C (RAID or not) to get the address of the controller for the flash. After flashing you should be able to use CTRL-C to get into the BIOS and check/configure it if necessary. Definitely disconnect the pool drives for the whole process, just to be sure. As a reference, I started looking at this topic for different flash procedures on that chip.
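    For reference, the flash sequence in the UEFI shell looks roughly like this (a sketch, not an exact transcript - the 2116it.bin/mptsas2.rom filenames are what the 9201-16e's P20 IT/UEFI package uses, so check your own package for the right ones):

        # switch to the USB stick holding the package files
        Shell> fs0:
        # note the controller's index, SAS address and current firmware
        FS0:\> sas2flash.efi -listall
        # flash the IT firmware and matching BIOS together
        FS0:\> sas2flash.efi -o -f 2116it.bin -b mptsas2.rom
        # confirm the new version took
        FS0:\> sas2flash.efi -listall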
  4. You posted elsewhere, and since you addressed Chris there specifically I'll let him answer that one. But since you also asked here, I'll reply. Yes, the adapter should be flashed with IT mode firmware. That ensures it isn't running in RAID mode, but rather in HBA mode (a JBOD scenario). Depending on your controller, you may have to flash its BIOS at the same time - check the firmware package to see what's in it. DrivePool truly doesn't care what kind of controller the drives are connected to (card vs motherboard). All it cares about is seeing the hidden pool folder in the root of the drives, and that all drives in the original pool are available to it when you re-install and run it the first time, so it can auto-detect them and re-create the pool. All you have to do is make sure the drives are *not* formatted or re-initialized when they are re-connected. If possible, set them up with the same drive letters (or mount points). You mentioned in your other post that you were considering upgrading/reinstalling on a new OS. The same applies there, except you wouldn't have DrivePool installed yet, so you don't need to disable the service. When DrivePool is done detecting the hidden pool folders and re-building the pool, you'll have to re-do its configuration. Whether you do that by hand or copy over the old config is up to you. You can find info on the config file for DrivePool here. So to confirm your steps (without the OS upgrade):
    • Stop and disable the DrivePool service (commands sketched below)
    • Shut down the system & disconnect the pool drives
    • Boot the system and flash the controller cards
    • Test the controller with a spare drive (if available)
    • Shut down and reconnect the old pool drives
    • Boot and verify the drives are working, and re-assign letters/mount points
    • Re-enable the DrivePool service and start it
    • Run DrivePool and check that the pool is there again
    It should be nearly instantaneous, since DrivePool doesn't have to do anything with the data.
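    If you'd rather script the service step than click through services.msc, something like this from an elevated command prompt should do it (the service name DrivePoolService is my assumption - verify the exact name in services.msc first):

        rem confirm the service name (DrivePoolService is assumed here)
        sc query DrivePoolService
        rem stop it and keep it from starting on the next boot
        sc stop DrivePoolService
        sc config DrivePoolService start= disabled
        rem ...flash the controller, reconnect the pool drives, then:
        sc config DrivePoolService start= auto
        sc start DrivePoolService

    Note the space after start= is required by sc's syntax.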
  5. Quite simply because it's more robust: highly configurable, portable pools, a GUI for management, multiple pools & hierarchical pools, on-the-fly expandable/shrinkable pools, configurable duplication, read & write pools, performance enhancements, real-time pool monitoring, integration with Scanner for a bevy of other safety/monitoring features, etc. As a side note: I've been waiting over a month for a reply to a bug report in SnapRAID that critically stops parity generation, and Andrea (the author of SnapRAID) hasn't replied yet. I even have an un-moderated post over there that's sat for 3+ weeks without being approved. It's freeware - you get what you pay for. Here you pay for the software, and you get timely support. Overall DrivePool is just a much better pooling system than what SnapRAID offers. That isn't to say basic read-pooling doesn't work in SnapRAID - it does. But given the choice and comparing the overall feature lists, I'd go with DrivePool ten times out of ten. It's more than worth the cost of the lifetime license. On the flip side, I wouldn't use any other software than SnapRAID for parity generation and data security. Each program has its strengths, and DrivePool works amazingly well in conjunction with SnapRAID, each doing what it was originally designed for.
  6. Yes, that's called Hierarchical Pooling, which DrivePool supports. The problem there is that when you add the DB child pool to the master pool, all its files become visible to anyone with access to that master pool. I was under the impression from your first post that you wanted to keep the DB files completely out of any pool at all. Perhaps re-defining your goals and giving a little architecture detail would help us to help you: You have a DB with a lot of files ranging in size, which you want the 4 SSDs to support in a pool-style fashion (aka software RAID), but which you don't want people to see. You also have a regular Pool of disks that holds your main non-DB data. And you're using the 4 SSDs as a front-end cache - are they set up to utilize the DrivePool SSD Optimizer plugin? Are you against having completely separate pools - one for the DB, and one for your main data store? If not, you can accomplish what you need rather easily. Partition each of the four SSDs into two logical volumes: the first part holds 1/4 of the DB, the second is used to cache the main data pool (see the sketch below). You'd make a Pool for the DB by combining all the 1/4 volumes together. You could utilize the second volume slices on all four as your SSD Optimizer front-end.
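    A rough diskpart sketch for one of the four SSDs (the disk number, sizes, letters and labels are all placeholders - repeat with adjusted values for the other three):

        rem inside diskpart, run as admin - disk 1 is a placeholder
        select disk 1
        clean
        rem first slice (~100GB here) holds this SSD's quarter of the DB
        create partition primary size=102400
        format fs=ntfs quick label=DB1
        assign letter=E
        rem remainder becomes one of the SSD Optimizer cache volumes
        create partition primary
        format fs=ntfs quick label=CACHE1
        assign letter=F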
  7. I guess if you wanted to keep it simple, you could just dedicate one of the four SSDs to the database, leaving the other three for Pool use. You won't get multi-disk I/O speeds, but you'll still get the raw IOPS of a single SSD. Normally, where data stores are concerned, I wouldn't even think of suggesting RAID due to failure rates on rebuilds and so on, but you're not dealing with a RAID 5/6 or large-storage-drive scenario with the SSDs, so it would work well here if you wanted to use it. Plus, with RAID 0+1 or 1 you get the benefit of double read speeds and redundancy. But I can respect it if you want to go the simpler route. Just be prepared to keep multiple daily snapshots of the data as backup points. Wait... I thought you didn't want the DB files on the Pool at all.
  8. I'm simply glad I get to keep using Scanner as my pool monitor. The thought of no SMART data and no Scanner... yikes.
  9. You're sorta painting yourself into a corner by wanting to use pool drives, but not the pool, AND split the database across all four of the drives. What I'd recommend is making a RAID 0 stripe (RAID 0+1 if you can afford the 50% space drop) out of your 4 SSDs, then deciding if you want a separate volume on them just for the DB, or if it can share space with the files you move on/off and simply sit in its own directory. I'd think sharing a single logical volume would be okay. You can have files/folders outside the hidden Pool directory that sit on the drive and behave normally, and which aren't seen by Pool users. But you can't break up that DB onto separate drives without some type of underlying span mechanism, which in this case would be RAID. You could then mount that RAID drive to both a letter and a folder under the "C:\Users\Admin\AppData\Local" path (example below). DrivePool could use the letter, and you'd still be compliant using the system path for the DB. No matter what happens, you'll want good, timely backups running, since you'll be exposed to either a 4x failure rate (RAID 0) or a 2x failure rate (RAID 0+1).
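    Mounting one volume at both a letter and a folder is straightforward with mountvol (the GUID and folder below are placeholders - the target folder must be empty and on an NTFS volume):

        rem run with no arguments to list the actual volume GUIDs
        mountvol
        rem mount the RAID volume at a letter...
        mountvol R: \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        rem ...and the same volume at the system path for the DB
        mountvol "C:\Users\Admin\AppData\Local\DB" \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

    The same thing can be done from Disk Management via "Change Drive Letter and Paths" if you prefer a GUI.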
  10. In my quest to get passthrough SMART data, I found out some new information on the P20 CRC errors issue: it turns out the original release of P20 (v20.00.00.00) was the buggy one. The latest available off their support website (v20.00.07.00) doesn't have the CRC error problem anymore. I've verified this by re-checking SMART data after 8+ TB of writes/reads during migration. I can also confirm that there is no bit corruption going on, as I was using FastCopy with verification turned on, and not once did it report errors.
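    For anyone wanting to do the same verified copy, an illustrative FastCopy command line (the paths are placeholders; /verify is the option that re-reads and hash-checks the written data):

        rem illustrative only - source and destination paths are placeholders
        FastCopy.exe /cmd=diff /verify /log C:\SourceData /to=D:\PoolTarget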
  11. Google Drive

    I sorta read that more as "it might not work, but if you don't use the sharing features, it might work". But it seems Alex is adding in Team Drive support as of ~9 days ago? Not sure when that will hit a beta build.
  12. Found a way to generate SMART data through the LSI adapter. Funny thing is - it was lurking on the Covecube forums the entire time. Simply enable advanced settings in Scanner, stop/start the service, then turn on Unsafe DirectIO. Problem solved. And it only took 2-3 hours of research to bring me right back here. Where's that facepalm emote when you need it?
  13. Update: edited - see my next post. Was having issues getting SMART data through the LSI HBA, but no longer!
  14. Google Drive

    One thing you can do in the meantime is try to diagnose your ISP's line, to make sure it is clean and stable. I made a post in another topic that might be helpful, if you want to check it out. Without specific log detail (which Christopher would ask for anyway), it's hard to know why G Suite is disconnecting on you. It could be throttling you, your ISP could be throttling you, or you may have intermittent drops on your line, etc. I usually start "at the bottom" by looking at the hardware first, which here means the ISP and your machine, before moving on to Cloud Drive. But again - if you have logs from Cloud Drive that indicate the drops, see if you can dig them out and post them, or send them in to support so Christopher/Alex can see them: http://wiki.covecube.com/StableBit_CloudDrive_Logs https://stablebit.com/Support
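    As a quick first check of the line itself (a generic example - the target IP and ping count are arbitrary):

        rem a long ping run makes intermittent drops and latency spikes obvious
        ping -n 200 8.8.8.8
        rem a traceroute shows at which hop any problems start
        tracert 8.8.8.8

    Timeouts or wild latency swings in the ping output point at the line/ISP rather than at Cloud Drive.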
  15. Google Drive

    Yes you can - see "StableBit CloudDrive - Resizing your drive". I'll leave the tuning and disconnect questions for someone with more CloudDrive experience (a.k.a. Christopher). Though I will say this: if you are planning on a 5TB Cloud Drive, you'd do yourself a big favor by figuring out why it's disconnecting, since it'll be much more difficult to keep a 5TB volume current compared to a 100GB volume.
  16. Roger. Already grabbed it, just have to find the least troublesome way to downgrade the firmware. Thanks for the feedback and the heads-up - it would have driven me nuts! Edit: done. You wouldn't believe what I had to go through to revert from P20 back to P19: the motherboard's built-in UEFI shell, a hard-to-find sas2flash.efi (Broadcom doesn't offer it on their product support pages anymore), and flashing the firmware AND the BIOS back to a paired state. All good now, however - I feel better about it. I'm honestly a little surprised that, with CRC problems on certain cards, Broadcom is even allowing download of the P20 firmware.
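    Roughly what a downgrade looks like in the UEFI shell (a sketch based on the usual sas2flash downgrade procedure, not an exact transcript - sas2flash refuses to flash older firmware directly, so the flash is erased first, which also wipes the SAS address):

        # record the SAS address from -listall BEFORE erasing - the erase wipes it
        FS0:\> sas2flash.efi -listall
        FS0:\> sas2flash.efi -o -e 6
        # firmware/BIOS filenames here are from the 9201-16e's P19 package
        FS0:\> sas2flash.efi -o -f 2116it.bin -b mptsas2.rom
        # restore the recorded SAS address (placeholder digits shown)
        FS0:\> sas2flash.efi -o -sasadd 500605bxxxxxxxxx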
  17. Nope, mine's the 9201-16e. Doing a little internet research to see if it's actually necessary to down-flash from 20 to 19 before I do it. The MegaRAID software under Windows can be a little hard to work with. But I'd much rather know before I put it into service whether it's going to be spitting out CRC errors left and right, so I'm treating it with the attention it deserves. If you have any recommendations, I'm totally open. It won't go into service until tomorrow night (fingers crossed).
  18. Ah well, was worth asking. You'd think with a little math (scanned sectors / total disk sectors) it could convert. But I haven't seen the code, so I trust your evaluation of it.
  19. That's good to know, thanks for the comment Christopher. I haven't put my new adapter into service yet, and had it sitting at rev 20. I'll grab a copy of 19 and see about re-flashing.
  20. Can't rely on that though, since it's part of the SSD Optimizer which *will* clean out that SSD and move files to the regular pool drives. Something else is going on here.
  21. It would be nice to switch the % scanned status in the GUI from an incremental % to a total drive %. I can't currently see a way to do this, unless it's in the JSON file(s). I didn't see a setting in there that wasn't exposed, but admittedly I didn't go through all 73 files.
  22. Quick update: I went with the LSI 16e that you suggested @nauip, so thanks for that. Couldn't find any other options that were as robust or had as much capacity without taking a large leap in cost (more than double). The card already arrived (OEM version) and was in immaculate condition - clearly never opened or handled. It's shiny, new, properly flashed and humming along in the server now. Having seen it myself, I'm less worried about the manufacture date. Mini-SAS cables are on the way, and I've already picked up the 7x8TB drives (in USB enclosures) for shucking later. The drives are being tested by StableBit Scanner as we speak (of course!). Go figure - the very first unit ended up having controller/sector issues, so that's being returned today.

    Small side note on the USB enclosures: they have horrible heat dissipation characteristics. StableBit Scanner was able to push them to 50C within 20 minutes, at which point it suspended surface tests to throttle heat (as it should). The drives sat forever and didn't cool off more than 2C, until I put them on top of case fans blowing upward. After that, I was able to resume the surface scans at full speed, with a temp range of ~30-38C thereafter. I'm convinced this is why they carry shorter warranty periods: drives in off-the-shelf USB enclosures run hotter, and even thermally throttle on large jobs (which I had never suspected). The catch-22 is that they are 8TB drives, and typically would handle larger jobs... so they are poorly engineered, and should come with internal fans.

    I think I'm going to set up the new Pool drives on the old server and transfer the data across locally between the two pools, then build the WSE 2016 server on top of a bare-metal Hyper-V 2016 install before adding the new Pool back in. The thought of transferring >10TB of data over a 1Gbit network makes my skin crawl.
  23. Makes sense - it would make the SSD cache drive more like a traditional cache, instead of simply a hot landing zone pool member which is offloaded over time. +1
  24. Haven't heard of "hot pool spares" before, but it's an interesting suggestion!
  25. It creates a larger virtual volume, breaks that volume up into equally sized chunks, and stores those chunks on your cloud provider. When files/folders change on the virtual volume, it updates whichever chunks need to be changed. When one computer makes changes to the data on the cloud drive, you'd have to have a way to tell the other computer the changes happened, or the drive in its filesystem wouldn't reflect those changes. It's a little like the difference between the POP3 and IMAP email methods. When you're viewing an HTML interface for cloud storage space, you're just being shown what's out there. IMAP email connections do largely the same thing - give you a view of what's there. CloudDrive has to be able to tell the file system what's there, keep a local cache, keep the metadata (file/folder info) locally, etc. That's more similar to POP3, which actually holds the email data on the local machine. One downloads data off the server (POP3); one just shows you what's out there (IMAP). Due to the way CloudDrive has to work with the OS, it needs to directly interface with the data and keep portions of it locally. I think it would be troublesome at best to try to deploy CloudDrive on two machines against the same virtual cloud disk. And we're not even considering features like volume encryption.
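    To put rough numbers on the chunking (the 10MB chunk size here is just an illustrative figure):

        chunk size  = 10 MB
        5 TB drive  -> 5 TB / 10 MB = 524,288 chunks on the provider
        a write at byte offset O touches chunk floor(O / chunk size)
        e.g. O = 2,000,123,456 -> chunk 190, so only that ~10MB chunk is re-uploaded

    Which is also why a second machine can't simply mount the same chunks: without the local metadata and cache, it has no consistent view of what those chunks mean.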