Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 07/22/18 in all areas

  1. 2 points
    The problem is that you were still on an affected version, 3216. When you upgrade, the StableBit Scanner service is forcefully shut down, so the DiskId files can get corrupted during the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen again on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives, just in case. A scheduled automatic backup would be great. The files are extremely small and don't take much space, so I don't see a reason not to implement it, feature-wise.
  2. 2 points
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" DrivePool optionally creates while balancing to trigger it. Check my post history. (A minimal sketch of the scheduled-task side follows below.)
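    Not anything snapraidhelper ships with itself, but a minimal sketch of registering such a task with the built-in ScheduledTasks cmdlets, assuming a hypothetical script path:

    ```powershell
    # Run the SnapRAID helper script nightly at 3 AM.
    # The script path is a placeholder - point it at your own copy.
    $action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
        -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\snapraid-helper.ps1"'
    $trigger = New-ScheduledTaskTrigger -Daily -At 3am
    Register-ScheduledTask -TaskName 'SnapRAID Maintenance' -Action $action -Trigger $trigger
    ```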
  3. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  4. 2 points
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it's why the speeds looked great in benchmarks but didn't manifest in real-world file transfers.
  5. 2 points
    I used this adapter cable for years and never had a problem. Before I bought my server case I had a regular old case, with three 4-in-3 hot-swap cages next to the server. I ran the SATA cables out the back of the old case, and a power supply sitting on the shelf by the cages powered them. The cool thing was how that second power supply was switched: I had an adapter that plugged in between the motherboard and the main power supply the computer ran off. The adapter had a couple of wires coming off it to a female connector, and you plugged your second power supply into that. When you turned on your main computer, the second power supply would come on too, so your computer would see all of your hard drives at once. Of course, when you turned off your server, both power supplies would turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
  6. 2 points
    CC88

    Is DrivePool abandoned software?

    The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together would just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of products that one (I think?) developer has produced and is offering for a low - as Chris says, almost impulse-buy - price. Keep up the good work and bug squashing, guys!
  7. 2 points
    It's just exaggerated. The URE average rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true: e.g. some hardware controllers and older software have very little tolerance for it. Modern, updated RAID algorithms can continue a rebuild with that particular area reported to the upper filesystem as reallocated, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive that acts slower and hangs on that area in much the same manner as a rebuild would. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs. cost is important, albeit in small arrays striped together.
  8. 2 points
    To clarify a couple of things here (sorry, I did skim here): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means the 1TB drives, first, and then once they're full enough, on the 500GB drive. So, yes, this is normal. The Drive Space Equalizer doesn't change this, but just causes it to rebalance "after the fact" so that it's equal. So, once the 1TB drives get to be about 470GB free/used, it should then start using the 500GB drive as well. There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks, when this happens (red arrows, specifically). If you don't see that, then it defaults to the "normal" behavior.
  9. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually - it's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder and not use the ones from the new folder. It's quite annoying, and it's why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out these: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  10. 2 points
    Jaga

    Recommended SSD setting?

    Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70°C however - Samsung, as an example, rates many of their SSDs between 0°C and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers' drives may not be as robust.
  11. 2 points
    Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
  12. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit: somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available.

    "Okay," I thought, and did an install repair on StableBit DrivePool through the Control Panel. That didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint. After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened.

    Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
  13. 2 points
    Jose M Filion

    I/O Error

    Just wanted to give an update for those who have problems with Xfinity's new 1Gb line. I basically had them come out and showed them how the line was going in and out with PingPlotter; they rewired everything and changed out the modem, and once they did that, everything stabilized and has been working great. Thank you for all your help, guys! Long live StableBit DrivePool! lol
  14. 2 points
    1x128GB SSD for the OS, 1x8TB, 2x4TB, 2x2TB, 1x900GB. The 8TB plus one 4TB and one 2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB + 2TB are spares in case some HDD fails. The 900GB is for the trash of a further-unnamed downloading client. So actually, a pretty small server given what many users here have.
  15. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  16. 2 points
    Mostly just ask.
  17. 1 point
    Yes, that's definitely a false positive. It's just some of the troubleshooting stuff for the UI. It's nothing harmful. And if you check, the file should be digitally signed, which is a good indicator that it's legit.
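    If you want to check that signature yourself, a quick sketch using the built-in cmdlet (the file path is a placeholder - point it at the flagged file):

    ```powershell
    # Inspect the digital signature of a file; path is hypothetical.
    Get-AuthenticodeSignature -FilePath 'C:\Program Files\StableBit\DrivePool\DrivePool.UI.exe' |
        Format-List Status, SignerCertificate
    ```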
  18. 1 point
    GetDataBack Simple is working for me - I could get a directory listing at least and see the files. It's gonna take days until I'm done with the deep scan, but I hope I can recover most things.
  19. 1 point
    Understood (and kind of assumed, but thought it was worth asking). Getting pretty deep into CloudDrive testing and loving it. Next is seeing how far I can get combining CloudDrive with the power of DrivePool and making pools of pools! Thanks for following up. -eric
  20. 1 point
    I'm not sure - but the number of threads is set by our program. Mostly, it's just the number of open/active connections. Also, given how uploading is handled, the upload threshold may help prevent this from being an issue. But you can reduce the upload threads if you want. As for parallel connections: for stuff like prefetching, it makes a difference, or if you have a lot of random access on the drives. But otherwise, they do have the daily upload limit, and they will throttle for other reasons (e.g., DoS/DDoS protection).
  21. 1 point
    For homelab use, I can't really see reading and writing affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging, and it's been in use every day for the past few years - no SMART errors, and expected life is still at 99%. I can't really see more usage in a homelab than that. In an enterprise environment, sure: lots of big databases and constant access/changes/etc. I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again, everyone! -MandalorePatriot
  22. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  23. 1 point
    Thread count is fine - we really haven't seen issues with 10. However, the settings you have set WILL cause bottlenecking and issues:

    Download threads: 10
    Upload threads: 10
    Minimum download size: 20MB
    Prefetch trigger: 5MB
    Prefetch forward: 150MB
    Prefetch time window: 30 seconds

    The prefetch forward should be roughly 75% of (download threads x minimum download size). If you can set a higher minimum download size, then you can increase the forward.
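    To spell out that rule of thumb with the numbers above: 10 download threads x 20MB minimum download size x 0.75 = 150MB of prefetch forward. If you raised the minimum download size to 40MB, the same rule would allow a prefetch forward of around 300MB.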
  24. 1 point
    That depends ENTIRELY on your use case. It's not a question that others can really answer. But if performance is important, then the SSD is going to be the better choice for you. But if you're accessing a lot of data (reading and writing), then a hard drive may be a better option.
  25. 1 point
    PocketDemon

    Different size hdd's

    There's no issue with different sizes - &, within the context of your drives & what you're likely to be doing, there's no issue with using the capacity. Yeah, the only time there would be an issue is if the number of times you're duplicating wouldn't work...

    ...so, imagining someone were looking at duplicating the entire pool, for example:

    - with 2x duplication &, say, a 1TB & 8TB drive, they could only actually duplicate 1TB
    - with 3x duplication &, say, a 1TB, 4TB & 8TB drive, they could still only actually duplicate 1TB

    ...however, unless you're after >=6x duplication (which is highly unlikely), there's no problem whatsoever. If you are using duplication & your current drives are pretty full already, then after adding the new drive I would suggest pushing the "Duplication Space Optimiser" to the top & forcing a rebalance run just before going to bed, as this should prevent there being any issues moving forward.
  26. 1 point
    jellis413

    10gb speeds using ssd cache?

    I just ran into this with a pair of Aquantia 10G NICs that I purchased. The amount I could copy seemed to vary depending on the SSD that I used. Their support confirmed that after the SSD write cache was filled, it would drop to below-gigabit speeds. I set up a RAM drive and passed it through as an SSD to the SSD Optimizer, and speeds consistently stay where they should be and don't drop off like I was experiencing. https://www.softperfect.com/products/ramdisk/ is the product I used; I had to make sure I selected the option for Hard Disk Emulation.
  27. 1 point
    Umfriend

    Drivepool on new Windows install

    In this case, it is a physical disconnect. It is not really necessary, but it ensures that you do not accidentally select a Pool HDD to install the OS on...
  28. 1 point
    srcrist

    Longevity Concerns

    I think those are fine concerns. One thing that Alex and Christopher have said before is that 1) Covecube isn't in any danger of shutting down any time soon, and 2) if it were, they would release a tool to convert the chunks on your cloud storage back to native files. So as long as you had access to retrieve the individual chunks from your storage, you'd be able to convert them. But, ultimately, there aren't any guarantees in life. It's just a risk we take by relying on cloud storage solutions.
  29. 1 point
    Yeah, I just figured it out myself - it didn't work even with the registry imported. This might take some time if I fill up all 60 drives someday...
  30. 1 point
  31. 1 point
    It could be related, yes. But it's hard to know for sure. If you could, upgrade to the latest beta and see if that fixes the issue. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.3.948_x64_BETA.exe If not, then try running a CHKDSK pass on all of the pooled disks (not the pool itself). If that doesn't help, then I'd recommend opening a ticket at https://stablebit.com/Contact
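    If you'd rather script that CHKDSK pass than run it drive by drive, a minimal sketch using the built-in Repair-Volume cmdlet (roughly the PowerShell equivalent of a read-only chkdsk scan; the drive letters are placeholders for your own pooled disks - don't include the pool's own letter):

    ```powershell
    # Read-only scan of each pooled disk; replace the letters with your own.
    foreach ($letter in 'D','E','F') {
        Repair-Volume -DriveLetter $letter -Scan
    }
    ```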
  32. 1 point
    It's probably related. Though, this should hopefully be resolved in the 3220 beta: http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3220_BETA.exe
  33. 1 point
    My upgrade worked perfectly - with the disclaimer that "upgrade" in this case meant a complete clean install of Server 2016 to a new system drive. I did, however, have a critical need to maintain my existing data drives. Both DrivePool and Scanner installed flawlessly, and - especially for DrivePool - it picked up the pool (once I reattached my existing drives) like nothing had happened, saving me days of time backing up and restoring up to 10TB of data. Regarding Essentials 2016, I would assume you should be good as well. Sonic.
  34. 1 point
    So, removing StableBit Scanner from the system fixed the issue with the drives? If so... well, uninstalling the drivers for the controller (or installing them, if they weren't installed) may help out here. Otherwise, it depends on your budget. I personally highly recommend the LSI SAS 9211 or 9207 cards, but they're on the higher end of consumer/prosumer.
  35. 1 point
    srcrist

    Google Drive Existing Files

    Unfortunately, because of the way CloudDrive operates, you'll have to download the data and reupload it again to use CloudDrive. CloudDrive is a block-based solution that creates an actual drive image, chops it up into chunks, and stores those chunks on your cloud provider. CloudDrive's data is not accessible directly from the provider--by design. The reverse of this is that CloudDrive also cannot access data that you already have on your provider, because it isn't stored in the format that CloudDrive requires. There are other solutions, including Google's own Google File Stream application, that can mount your cloud storage and make it directly accessible as a drive on your PC. Other similar tools are rClone, ocaml-fuse, NetDrive, etc. There are pros and cons to both approaches. I'll list some below to help you make an informed decision:

    Block-based Pros:
    - A block-based solution creates a *real* drive (as far as Windows is concerned). It can be partitioned like a physical drive, you can use file-system tools like chkdsk to preserve the data integrity, and literally any program that can access any other drive in your PC works natively without any hiccups. You can even use tools like DrivePool or Storage Spaces to combine multiple CloudDrive drives or volumes into one larger pool.
    - A block-based solution enables end-to-end encryption. An encrypted drive is completely obfuscated both from your provider and anyone who might access your data by hacking your provider's services. Not even the number of files, let alone the file names, is visible unless the drive is mounted. CloudDrive has built-in encryption that encrypts the data before it is even written to your local disk.
    - A block-based solution also enables more sophisticated sorts of data manipulation, such as the ability to access parts of files without first downloading the entire file. The ability to cache sections of data locally also falls under this category, which can greatly reduce API calls to your provider.

    Block-based Cons:
    - Data is obfuscated even if unencrypted, and unable to be accessed directly from the provider. We already discussed this above, but it's definitely one of the negatives--depending on your use case. The only thing that you'll see on your provider is thousands of chunks of a few dozen megabytes in size.
    - The drive is inaccessible in any way unless mounted by the drivers that decrypt the data and provide it to the operating system. You'll be tethered to CloudDrive for as long as you keep the data on the cloud. Moving the data outside of that ecosystem would require it to again be downloaded and reuploaded in its native format.

    Hope that helps.
  36. 1 point
    Thanks - I'll set this all up once I can get my hands on a good-sized SSD to use as my cache for the CloudDrive.
  37. 1 point
    mentalinc

    Server 2019 compatibility?

    I braved it, and it has worked fine. Did an in-place upgrade from 2016 to 2019 Desktop Experience, and I can read and write to DrivePool fine...
  38. 1 point
    Christopher (Drashna)

    Migration to a new system

    Well, for moving the pool over to new drives, you could just copy from one pool to another. No need to share the PoolPart folders. In fact, by doing that, you may end up making more work for yourself (e.g., duplication).
  39. 1 point
    Nope, different protocol. But trust me when I say that NVMe health is FAR superior to SMART. Awesome. As for the protocol, this post has a picture of it: http://blog.covecube.com/2018/05/stablebit-scanner-2-5-2-3175-beta/
  40. 1 point
    Jaga

    Hierarchical file duplication?

    What you might want to do instead is make a Local Pool 1 which holds local drives A-E, rename your cloud pool to Cloud Pool 1, and then make a Master Pool that holds Local Pool 1 and Cloud Pool 1. It's easier if different levels have different nomenclature (numbers vs. letters at each level).

    Master Pool (2x duplication)
      Local Pool 1 (any duplication you want)
        Local Drive A
        Local Drive B
        Local Drive C
        Local Drive D
        Local Drive E
      Cloud Pool 1 (no duplication)
        FTP Drive A
        B2 Drive A
        Google Drive Drive A

    Note that with this architecture, your cloud drive space needs to be equal to the size of your Local Pool 1, so that 2x duplication on the top pool can happen correctly. If FTP Drive A goes kaput, Cloud Pool 1 can pull any files it needs from Local Pool 1, since they are all duplicated there. Local Pool 1 doesn't need duplication, since its files are all over on Cloud Pool 1 also. You can (if you want) give it duplication for redundancy in case one of the cloud sources isn't available - your choice.

    As an alternate architecture, you can leverage your separate cloud spaces to each mirror a small group of local files:

    Master Pool (no duplication)
      Pool 1 (2x duplication)
        Local Pool A (no duplication)
          Local Drive a
          Local Drive b
        Cloud Pool A (no duplication)
          FTP Drive
      Pool 2 (2x duplication)
        Local Pool B (no duplication)
          Local Drive c
          Local Drive d
        Cloud Pool B (no duplication)
          B2 Drive
      Pool 3 (2x duplication)
        Local Pool C (no duplication)
          Local Drive e
        Cloud Pool C (no duplication)
          Google Drive

    What this does is allow each separate cloud drive space to back up a pair of drives, or a single drive. It might be more advantageous if your cloud space varies a lot and you want to give limited cloud space to a single drive (like in Pool 3).
  41. 1 point
    Christopher (Drashna)

    Almost always balancing

    Yeah, having the Disk Space Equalizer balancer enabled will cause issues here. It should be fine to use initially, but once it's done, disable it. Specifically, you shouldn't use these two balancers together, since they will interfere with each other. One or the other, always.
  42. 1 point
    I find Scanner almost essential now - I rely on it as a watchdog to keep my data safe and to alert me to potential problems (heat, SMART errors, etc.). And I use it with the auto-evacuate balancer in DrivePool, so that IF it senses a drive might be dying, it can move files off to other drives without needing me to act first. Well worth the cost in my opinion, AND you can get additional copies of each product at a steep discount. I run Scanner on both my server and primary workstation.
  43. 1 point
    Nice - more tidbits of useful wisdom today!
  44. 1 point
    I only have experience with your second question. If you figure out Q1, you could probably just use it to trigger my solution to Q2.

    1) Enable DrivePool's runningfile option.
    2) Use bigteddy's FileSystemWatcher script (available on TechNet) to monitor for the removal of the runningfile you've configured, and write an event.
    3) Use the event-log entry you set up to trigger SnapRAID via a scheduled task.

    (In a nutshell, DP will create a dummy file while it's balancing and remove it when it's done. You can use the removal of this file to trigger SR. A rough sketch of step 2 is below.)

    Some notes:

    1) The filesystem watcher eats up some I/O, so I still recommend you schedule it and define a max runtime - if you let it run all the time, it will also trigger SnapRAID every time DP does any routine checks, not just balancing.
    2) I recommend configuring snapraid-helper (from CodePlex) rather than calling snapraid from the command line - it will check for a user-defined number of missing files prior to sync and e-mail you so you can decide what to do. You can also have it email you a list of added/removed/updated/restored files after every sync if you so desire.

    I'd never touched PowerShell prior to configuring the scenario above, and now I use it for all kinds of cool stuff. It's worth giving it a go. I made quite a few posts here while trying to get it working; they might be useful.
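    Not bigteddy's actual script, but a minimal sketch of what step 2 amounts to, assuming a hypothetical running-file path, file name, and event source (adjust all three to your own setup):

    ```powershell
    # Watch for DrivePool's running file being deleted, then write an
    # event that a scheduled task can trigger on. Path, file name, and
    # event source below are placeholders - match them to your config.
    $watcher = New-Object System.IO.FileSystemWatcher
    $watcher.Path   = 'C:\DrivePool'          # folder holding the running file
    $watcher.Filter = 'balancing.running'     # hypothetical running-file name
    $watcher.EnableRaisingEvents = $true

    Register-ObjectEvent -InputObject $watcher -EventName Deleted -Action {
        # Requires the source to exist; create it once from an elevated prompt:
        #   New-EventLog -LogName Application -Source 'DrivePoolBalance'
        Write-EventLog -LogName Application -Source 'DrivePoolBalance' `
            -EventId 1000 -EntryType Information `
            -Message 'DrivePool balancing finished.'
    }

    # Keep the session alive so the event subscription stays registered.
    Wait-Event
    ```

    You'd then point a Task Scheduler "On an event" trigger at the Application log (source DrivePoolBalance, event ID 1000) to kick off the SnapRAID sync.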
  45. 1 point
    After some testing, Pool B follows Pool A's SSD Optimiser. You can close this thread - thanks for the help!
  46. 1 point
    What @Jaga said, actually. And "coveFS" is the driver for the pool, so it's a critical component. It may be worth doing this: http://wiki.covecube.com/StableBit_DrivePool_Q3017479 and then reinstalling the software. If you continue to have issues, then open a ticket at https://stablebit.com/contact and run the StableBit Troubleshooter: http://wiki.covecube.com/StableBit_Troubleshooter
  47. 1 point
    From what I've seen, no, it hasn't. Had a ticket about this recently, actually. P19 or P15 seem to be the go-to firmware versions for stability.
  48. 1 point
    Jaga

    Drivepool SSD + Archive and SnapRAID?

    Makes sense - it would make the SSD cache drive more like a traditional cache, instead of simply a hot landing zone pool member which is offloaded over time. +1
  49. 1 point
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193, StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe: currently, NVMe support in StableBit Scanner requires Windows 10 or Windows Server 2016.
  50. 1 point
    Christopher (Drashna)

    TrueCrypt and DrivePool

    We recommend BitLocker, actually. It's baked into Windows and works seamlessly with the system. Additionally, the "automatic unlock" option works very well with the pool. However, a lot of people do not trust BitLocker. And TrueCrypt, its forks, and a few other encryption solutions bypass the VDS system altogether. Since that is a big part of DrivePool... they don't work, and it would require a nearly complete rewrite of the code JUST to support these products. I'm not trying to start a debate, just stating this and explaining why we don't support them.
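    If you go the BitLocker route, the automatic-unlock part is one built-in cmdlet per pooled drive - a minimal sketch, assuming placeholder drive letters for your pooled disks:

    ```powershell
    # Enable automatic unlock for each BitLocker-protected pooled drive.
    # Note: auto-unlock requires the OS volume itself to already be
    # BitLocker-protected. Drive letters below are placeholders.
    foreach ($mount in 'D:','E:') {
        Enable-BitLockerAutoUnlock -MountPoint $mount
    }
    ```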
