Everything posted by Christopher (Drashna)

  1. Yeah, backup database corruption is/was fairly common. There have been updates to fix the issue. But if the database got corrupted by the failing disk (which is very likely), or the database wasn't duplicated... then this will definitely happen. And I highly recommend checking the duplication status on the backup database folder. Make sure it's set to AT LEAST 2x. 2x (or just "duplication enabled") means that there will be a copy of any given file on two different disks. Setting it higher means it will be on more disks (and take up more space in the pool). Setting it to 3x may be a good idea if you suspect issues. Alternatively, you could create a script to stop the backup service, copy the files, and start it back up (see the sketch below). You could then set up Task Scheduler to run the script once a month/week/etc. That way, you would always have a good copy somewhere else.
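     A rough sketch of such a script, as a batch file. The service name and paths here are assumptions; verify the actual service name with "sc query" and adjust the paths for your setup:

         @echo off
         :: Stop the backup service so the database files aren't in use.
         :: Service name is an example -- check yours with "sc query".
         net stop "Windows Server Client Computer Backup Service"
         :: Mirror the backup database folder to a second location (example paths).
         robocopy "D:\ServerFolders\Client Computer Backups" "E:\BackupCopy" /MIR /R:2 /W:5
         :: Start the service back up.
         net start "Windows Server Client Computer Backup Service"

     Save that as a .cmd file and point a Task Scheduler task at it, set to run with highest privileges.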
  2. Sounds about right, given their use case. I'm wondering how well they'd fare in the pool, honestly. I'll be finding out soon.
  3. You mean other than a SAS expander case, right? Those can hold a lot of disks. As for an enclosure, I'd recommend IcyDock or something else. Like, really anything else. I've had a friend use this brand: http://www.newegg.com/Product/Product.aspx?Item=0VN-0003-00065 It seems to be very stable, from what I could tell.
  4. To be clear, you're using Data Deduplication on the disks that are IN the pool, correct? If so, then there is an advanced setting that you HAVE to change to (hopefully) get this to work; otherwise, you will definitely see the issues that you are reporting. http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings Find "CoveFs_BypassNtfsFilters" and set this to "False". However, even with this setting configured, it may not work. The reason for this is how the Deduplication process works. Specifically, it uses a file system filter (you can view these with "fltmc"; see the sketch below). Normally, when you access the file, the filter intercepts the request and can modify it (this is also how antivirus programs work). Deduplication requires this, as only part of the file is stored in the normal location. The deduplicated part is stored elsewhere on the disk (in the "System Volume Information" folder, actually, I think). So when you access the file normally, the DeDup filter re-joins the deduped data with the rest of the file, to present the whole file. However, the driver for the pool bypasses the NTFS filters (for performance and compatibility reasons). This means that the Pool is only picking up the non-deduplicated data, and presents "corrupted" files to you. Disabling the "BypassNtfsFilters" option will cause the pool to respect the filters, and should "fix" the data.
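     To check whether the deduplication filter is actually loaded and attached, you can list the active minifilters from an elevated command prompt. "Dedup" is, I believe, the name Microsoft registers its deduplication filter under; your output may differ:

         :: List all loaded file system minifilters.
         fltmc filters
         :: Show which volumes a given filter is attached to.
         fltmc instances -f Dedup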
  5. I'm not entirely sure that the "\*" rule will work, but I am absolutely certain that "\Foldername\*" will work. So if you are using WHS2011 or Server Essentials (as it appears that you are), then you could use "\ServerFolders\*" for the rule and that should work exactly how you want. Additionally, we do have a section in the manual that goes over the File Placement rules: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=File%20Placement
  6. Unfortunately, it sounds like the Client Computer Backup database lost one or more files. You may want to delete the entire contents of the folder and start from scratch. This may be the only way to proceed. (You could just move the contents to a different directory; it produces the same result.) Also, grab the logs from "C:\ProgramData\Microsoft\Windows Server\Logs", and check the Event Viewer ("eventvwr.msc") for errors related to the services. These may give you a better idea of what is going on. However, they can be a PITA to read. If you want, email them to me at christopher@covecube.com and I'll take a look at them. As for the chkdsk, you cannot run that on the DrivePool drive, as it's an emulated disk. However, you can run it on all of the disks in the pool (see the sketch below). Just note that it will unmount each disk, and produce a "missing disk" error until the scan completes.
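     A rough sketch of checking each pooled disk from an elevated prompt; the drive letters are examples, so substitute the letters of the disks that are actually in your pool:

         :: Read-only pass first (no repairs, no dismount). In a batch file
         :: use %%D; directly on the command line use %D instead.
         for %%D in (D E F) do chkdsk %%D:
         :: Add /f to actually repair errors -- this is what dismounts the
         :: volume and triggers the "missing disk" warning until it's done.
         for %%D in (D E F) do chkdsk %%D: /f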
  7. I'm sorry to hear that you did lose a few files. But I'm glad to hear that everything went smoothly otherwise. As for StableBit Scanner: If you own a copy of StableBit DrivePool, you can get StableBit Scanner at a discounted price. To do so, grab your StableBit DrivePool Activation ID, head to "https://stablebit.com/Buy/Scanner/", and scroll down until you see the "Already own one of our products?" section. Input your Activation ID into the box and hit "Apply Activation ID". Once you've done that, it will refresh the page with the discounted pricing ($15, instead of $24.95). As for mobile access, it depends on how you mean. If you mean to view the pooled disk (eg, to be able to access the files), then you'd need to share the folders and use an app to access the files (or use a media server such as Plex). If you have shared folders on the pool, then this may be useful: https://itunes.apple.com/us/app/fileexplorer-free/id510282524 If you mean to manage the pool, then no, there isn't a way to do this on a mobile device directly. However, you could use Remote Desktop, VNC, or TeamViewer to access the system and control it that way.
  8. ..... I have a few of those lying around... And a few WD drives too. Well, varying from 2GB to 6GB. But none are in use. I don't have a PC capable of IDE, and haven't for a while.
  9. Personally, I would recommend against the MediaSonic boxes... Both Alex and I have had issues with these boxes, as have a few customers. Also, Newegg indicates that it only supports up to 4TB drives. Though you are right, its website indicates that it supports up to 6TB drives.
  10. Well, since the default strategy is to fill up all the disks at the same time (specifically, to place files on the disks with the most available free space), I don't think this would be too difficult. So specifically, you want a balancer that would allow you to create groups of disks to be filled, one group at a time. Are you using StableBit DrivePool 2.X? If so, then you may be able to do this already, by using the "File Placement rules", in theory. Create a rule for "\*" and select the disks you want. Then make sure that you use the option (at the bottom) to "Allow files to be placed on other disks if all the selected disks are this full", and set the slider to something reasonable, like 90% (this prevents the disks from becoming too full, just in case). This should get you the behavior you're after.
  11. What mode is the enclosure in? And once the Unsafe option is enabled in DirectIO, it should work with the HighPoint card just fine. However, could you download the "DirectIoTest" tool from here: http://community.covecube.com/index.php?/topic/36-how-to-contribute-test-data/ Run it on the system and select the disk that is having issues. It should (hopefully) light up the "SMART Attributes" option for either the WMI or DirectIO (or both) section. If it does, click on the ellipsis ("...") button and make sure it's listing information. If it is, then let us know.
  12. In that case, you don't need the WHS-style OS... and ESXi may be a better idea. As for the HCL, I'm genuinely surprised that the SuperMicro boards are not on it. So I dug... their hardware does appear on it, but not as motherboards. They only list the pre-built systems. In fact, this was one of the supported systems: http://www.newegg.com/Product/Product.aspx?Item=N82E16816101791 But it's better to be certain here, I think. As for the BIOS issue... yay, that's fun. And speaking of which, for a headless system, I highly recommend a board that supports Intel vPro/AMT or IPMI, as these allow for "out of band management", meaning you can view the "console" over the network. The Intel board you linked has IPMI, so you're set; just remember to enable it.
  13. Depends on the issues, really. In a lot of cases, the full "Zero pass" may fix a lot of issues, because it causes the disk to write to every sector, triggering the drive's error correction routines. After this, a format isn't really needed (a full format does a zero pass of the partition). And as for a chkdsk, it may not be a bad idea, but again, it may not be necessary. But letting Scanner do a full pass of a Surface scan would be telling. As would looking at the SMART data afterwards. And just FYI, I've done this on disks before (specifically, ones that had uncorrectable sector count SMART warnings and damaged sectors; see the sketch below for doing the zero pass by hand). It converted all of the uncorrectable sectors into reallocated sectors (which is fine). However, after a while, the uncorrectable sector count continued to increase. The moral is... if issues continue, then RMA or toss the disk. Or use it for non-critical data. I've had the opposite experience. Most of the disks I've had go bad on me generally did give SMART warnings prior to failing. The batch of bad ST3000DM001's that I had all showed uncorrectable sector counts in SMART. I pulled them immediately. Two of them completely failed shortly after pulling them (I was using them for testing). However, given how the disks work, there are plenty of ways for them to fail, including suddenly. Though, I've found the most important metric on drives to be their warranty period. Not only is that a good indication of longevity, but it usually lines up with the component lifespan info. As for that old 20GB, that's pretty amazing. Though, I'm sure it's slow as dirt compared to the newer drives. That's the trade-off: the newer drives pack a lot more tech in them, which means there are more points of failure.
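     If you'd rather do the zero pass by hand instead of through StableBit Scanner, diskpart's "clean all" writes zeros to every sector of the selected disk. It is destructive (it wipes the whole disk), so triple-check the disk number against the "list disk" output first:

         diskpart
         list disk
         select disk 3    (example disk number -- verify against "list disk"!)
         clean all        (zero-fills every sector on the selected disk)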
  14. Absolutely. There should be no problem with that, at all. Other than getting it mounted when you reboot (but you could use a script to do that, if needed). To clarify, you're having issues with StableBit DrivePool on the newest Windows 10 build? Or with OneDrive? Or? I'm not entirely sure here. And yeah, the "Tech Preview" is an alpha product.
  15. Yeah, taking ownership of the files may be a good idea when rebuilding the pool. It's pretty much the "end all" for permissions. However, just make sure you a) replace the child entries when you do this, and b) change the permissions as well (Admins, SYSTEM, and the owner with full control, and Users with read and execute; see the sketch below). This will ensure that everything is accessible and usable, and it should definitely clear up the issues you were seeing.
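     For reference, the built-in takeown and icacls tools can do all of this from an elevated prompt. The path below is just an example; point it at the pool drive or the folders you're rebuilding:

         :: Take ownership of the whole tree (/r = recursive, /d y = auto-answer prompts).
         takeown /f "D:\ServerFolders" /r /d y
         :: Reset the ACLs on all children so they inherit from the parent again.
         icacls "D:\ServerFolders" /reset /t
         :: Grant the permissions described above: full control for admins and
         :: SYSTEM, read & execute for users, applied recursively.
         icacls "D:\ServerFolders" /grant "Administrators:(OI)(CI)F" "SYSTEM:(OI)(CI)F" "Users:(OI)(CI)RX" /t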
  16. Investigating the issue and fixing it (if possible) is definitely on our "To-Do" list. However, I can't really give you an ETA. Alex has been focusing on getting "StableBit CloudDrive" out as soon as he can. Once that's done, there is a bunch of code to backport to DrivePool (such as a much better issue feedback UI), and then it's on to the to-do list. While I know that's not really a great answer, we are a small company, and we are doing what we can, as fast as we can.
  17. Well, I'm glad that you were able to identify the problem, but I'm very sorry to hear about the server crash! If that was the case, then it sounds like an issue with the system disk... or maybe the drivers for the USB3 controller or the externals got corrupted. But either way, if you need anything else, let us know.
  18. Yeah, it is. Microsoft does a fantastic job of hiding a lot of these types of files from the user (which is a good thing in 99% of cases).
  19. Thanks for submitting. I've flagged Alex, and we'll see about adding a proper interpretation for these drives. And I meant to use "VHD(X)". But either way, I'm glad to hear that it is working well. And yeah, the write performance is most likely due to the SMR technology (shingled magnetic recording). Ironically, the best analogy is... shingling a roof. It's easy when you do it layer by layer. But if you have to replace a single tile/shingle, it's a PITA to do so.
  20. Yup, I had another user with this bug. Specifically, this is an issue with how Windows is identifying the drive and sector size. It's being misreported to us... so we are misreporting the drive size. I doubt it is a coincidence, but the other drive was a WD drive (a Red, I believe). Please download the latest beta build, as that should fix the issue: http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.2.3079_BETA.exe
  21. The CPU is a very good one. IIRC, it is still the best "bang for your buck" (i.e., cost vs. features). As for the motherboard, it needs to be a 10th gen board; otherwise, yeah, you run into the wrong socket. However, here is a list of the X10 (10th gen) series boards, all Micro ATX form factor: http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100006519%2050001655%2040000302%20600452057%20600009017&Manufactory=1655 And here's the one you linked from Micro Center: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182819&cm_re=X10SLL-F-_-13-182-819-_-Product As for the server prices, yeah, they're definitely intimidating. But a point that has come up before... if you're using the backup features... that literally pays for the entire OS. I've looked for similar solutions, and either they require hours to get working at all... or they cost $1000 or more for the backup server functionality... and are licensed per client as well... But if you're not using the backup, then it may not be worth the cost. (Though Microsoft does have a 180-day trial version of all the server OS's.) That, and if you're going to run VMware ESXi (over Hyper-V), then before you buy the hardware, PLEASE check the hardware compatibility list for ESXi; otherwise you may end up very unhappy. Though the SuperMicro board should be supported, there is a large gap between "should" and "is".
  22. Storage Spaces is a block-based solution, much like (if not almost identical to) RAID. Like RAID, the data is stored in a "raw" format, and read and written in that style. What does this mean? That the data is in a proprietary format, and may not be readable outside of the system (you should be able to migrate Storage Spaces to another system, but I've not tested this...). Another issue is that, since it is a software implementation, it relies much more on your hardware. You will want a good CPU (it does a lot of number crunching with a PARITY array), and you will want to have good quality RAM (ideally ECC). DrivePool is a file-based solution, meaning that the data is stored on the disks as normal files, accessible any time. Also, you should see native disk speeds when accessing the files (or potentially faster with duplication enabled, and Read Striping). The biggest difference here is when it comes to disaster recovery and/or migrating away from the pool. With Storage Spaces... there are a couple of companies (and only a couple) that boast the ability to restore data from Storage Spaces. And their software is $$$$. With DrivePool, ANY recovery software that is capable of accessing NTFS can be used to restore the data. As for migrating away from the pools... with DrivePool, you can just uninstall it and then move around your files as needed. With Storage Spaces... it's a CF. That's the nicest way I can describe it. But basically, you need to have enough space to empty the Storage Spaces volume... You may be able to remove some of the disks, so you can play "musical chairs" with the disks... but chances are that you may not be able to... Another important point here: if you're on a Server OS, using Storage Spaces, and using parity, you should ABSOLUTELY be using ReFS. It supports automatic correction of damaged data, by using the parity information.
  23. The OneDrive client uses a new/unique form of file system links that are not supported on DrivePool currently. This means that you will definitely see issues if you try to store it on the pool, unfortunately. For now, it's best to not put the OneDrive folder on the pool.
  24. Well, I hope it's a larger SSD... I have ~16TB of various media... and my Plex database is ~100GB (though I have it configured to download all the metadata it can). So, just a warning. And I completely understand the time copying the files... I just did this with a Linux VM for Plex... which I may end up removing because it's not working well.