Popular Content

Showing content with the highest reputation since 04/20/18 in all areas

  1. 2 points
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" DrivePool optionally creates while balancing to trigger it. Check my post history. A rough sketch of that kind of wrapper is below.
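    A minimal sketch of such a wrapper, assuming you call snapraid directly (snapraidhelper automates these same steps and adds e-mail reporting). Both paths are placeholders: the balancing "running file" location is whatever you configured in DrivePool's balancing settings, and snapraid.exe lives wherever you unpacked it.

        # Skip maintenance while DrivePool is balancing, then sync and scrub.
        # Both paths below are assumptions -- adjust them to your own setup.
        $runningFile = 'C:\DrivePool\balancing.running'
        if (Test-Path $runningFile) { exit }           # a balance pass is in progress; try again later
        & 'C:\Tools\snapraid\snapraid.exe' sync        # update parity for new/changed files
        & 'C:\Tools\snapraid\snapraid.exe' scrub -p 5  # verify ~5% of the array per run

    Run it from Task Scheduler on whatever cadence suits you; checking the marker file first keeps parity runs from fighting a balance pass for disk I/O.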
  2. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case of needing a recovery for unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway).
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned').

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. (A screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory, was attached as an example.) Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | Where-Object { !$_.PsIsContainer }

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name.
        # Note: the Substring offsets assume the C:\DrivePool\{disk name} mount scheme above; adjust them if your paths differ.
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{
                Drive         = $(($file.DirectoryName).Substring(13,5))
                FileName      = $($file.Name)
                DirectoryName = $(($file.DirectoryName).Substring(64))
            }
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
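    A follow-on sketch for anyone whose mount root isn't exactly the length Quinn's Substring offsets assume: derive the disk name by splitting the path on the mount root instead of hardcoding character positions. $PoolRoot is an assumption - point it at your own mount folder.

        $PoolRoot = 'C:\DrivePool'   # assumption: the mount-point root from the prerequisites above
        $files = Get-ChildItem -Path $PoolRoot -Recurse -Force | Where-Object { !$_.PsIsContainer }
        $filelist = foreach ($file in $files) {
            # The path relative to the root looks like "{disk name}\folder\...": the first segment is the disk.
            $relative = $file.DirectoryName.Substring($PoolRoot.Length).TrimStart('\')
            $segments = $relative -split '\\', 2
            [pscustomobject]@{
                Drive         = $segments[0]
                FileName      = $file.Name
                DirectoryName = $(if ($segments.Count -gt 1) { $segments[1] } else { '' })
            }
        }
        $filelist | Export-Csv F:\DPFileList.csv -NoTypeInformation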
  3. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  4. 2 points
    I think I found where my issue was occurring: I was being bottlenecked by the Windows OS cache, because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it's why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
  5. 2 points
    I used this adapter cable for years and never had a problem. Before I bought my server case I had a regular old case, with three 4-in-3 hot-swap cages next to the server. I ran the SATA cables out the back of my old case, and a power supply sitting on the shelf by the cages powered them. The cool thing was that I ran the power cables that usually go to the motherboard from the second power supply inside the case. I had an adapter that plugged into the motherboard and the main power supply that the computer plugged into. The adapter had a couple of wires coming from it to a female connection, and you plugged your second power supply into that. What happens is that when you turn on your main computer, the second power supply comes on as well, so your computer sees all of your hard drives at once. Of course, when you turned off your server, both of the power supplies would turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
  6. 2 points
    CC88

    Is DrivePool abandoned software?

    The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together would just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of products that one (I think?) developer has produced and offers for a low price - as Chris says, almost an impulse buy. Keep up the good work and bug squashing, guys!
  7. 2 points
    It's just exaggerated. The average URE rates of 10^14/10^15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true: e.g. some hardware controllers and older software have a very small tolerance for it. Modern and updated RAID algorithms can continue a rebuild with that particular area reported to the upper file system as a reallocated area, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs. cost is important, albeit in small arrays striped together.
  8. 2 points
    To clarify a couple of things here (sorry, I did skim): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means the 1TB drives first, and then, once they're full enough, the 500GB drive. So yes, this is normal. The Drive Space Equalizer doesn't change this; it just causes a rebalance "after the fact" so that usage ends up equal. So, once the 1TB drives get to be about 470GB free/used, the pool should start using the 500GB drive as well. There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks when this happens (red arrows, specifically). If you don't see those, then it defaults to the "normal" behavior.
  9. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and it's why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out this: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  10. 2 points
    Jaga

    Recommended SSD setting?

    Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70, however - Samsung, as an example, rates many of their SSDs between 0 and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust. (If you want to spot-check temperatures from a console, see the sketch below.)
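    A quick way to watch drive temperatures from PowerShell, outside of Scanner. Get-StorageReliabilityCounter is a standard Storage-module cmdlet, though not every drive/driver reports a temperature through it - treat a blank value as "not supported", not as zero.

        # List each physical disk with its reported temperature (Celsius), where available.
        Get-PhysicalDisk | ForEach-Object {
            $counters = $_ | Get-StorageReliabilityCounter
            [pscustomobject]@{
                FriendlyName = $_.FriendlyName
                MediaType    = $_.MediaType
                TempC        = $counters.Temperature
                TempMaxC     = $counters.TemperatureMax
            }
        } | Format-Table -AutoSize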
  11. 2 points
    Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
  12. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay," I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility and 4) re-install. I did just that, and DrivePool installed to the same location without complaint. After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened. Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
  13. 2 points
    Jose M Filion

    I/O Error

    Just wanted to give an update for those who have problems with Xfinity's new 1Gb line - I basically had them come out, showed them how the line was going in and out with PingPlotter, and they rewired everything and changed out the modem. Once they did that, everything stabilized and has been working great - thank you for all your help, guys! Long live StableBit DrivePool! lol
  14. 2 points
    1x128GB SSD for OS, 1x8TB, 2x4TB, 2x2TB, 1x900GB. The 8TB and 1x4TB+1x2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB+2TB are spares in case some HDD fails. The 900GB is for the trash of a further-unnamed downloading client. So actually, a pretty small server given what many users here have.
  15. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  16. 2 points
    Mostly just ask.
  17. 2 points
    ...just to chime in here... remember that expanders have firmware too. I've been running 1x M1015 + 1x RES2SV240 in my 24-bay rig for 5+ years now... I remember that there was a firmware update for my expander that improved stability with SATA drives (which is the standard use case for the majority of the semi-pro users here, I think). Upgrading the firmware could be done with the same utility as for the HBA, as far as I remember... instructions were in the firmware readme. Edit: here's a link to a howto: https://lime-technology.com/forums/topic/24075-solved-flashing-firmware-to-an-intel-res2xxxxx-expander-with-your-9211-8i-hba/?tab=comments#comment-218471 regards, Fred
  18. 2 points
    Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
  19. 2 points
    Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
  20. 2 points
    The import/export feature would be nice. I guess right-clicking on the folder and 7-Zip'ing it is the definitive solution for now, until an automated process evolves. Given Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge? I have it running on both W7 and W10 and it purges on both. Both OSes are clean installs. Both run the same EVO 500... alongside a WD spinner. Both are Dells. It seems to me that the purge is triggered by some integral part of the software once it's updated - like an auto-purge feature. I'll be honest: I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
  21. 2 points
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  22. 2 points
    I used Z once, only to find that a printer with some media-card slot wanted it for itself or would not print at all. Same for some BlackBerry devices claiming Z. So yeah, high up, but not Y and Z. I use P, Q and R.
  23. 2 points
    You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it.
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme (a sketch of the share step is below). You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
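    A minimal sketch of the per-machine share step, assuming each box exposes its pool over SMB for CloudDrive's File Share provider. The share name, drive letter, host name and account are placeholders; New-SmbShare itself is the standard Windows cmdlet.

        # Run on each pool machine (elevated). P:\ is assumed to be the local DrivePool drive.
        New-SmbShare -Name 'PoolShare' -Path 'P:\' -FullAccess 'DOMAIN\StorageUser'

        # Quick sanity check from another machine on the VPN:
        Test-Path '\\pool-host-1\PoolShare'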
  24. 2 points
    It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
  25. 2 points
    "dpcmd remeasure-pool x:"
  26. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  27. 1 point
    Well, a number of others were having this as well, and I've posted this info in a number of those threads, so hopefully, confirmation will come soon.
  28. 1 point
    For homelab use, I can't really see reads and writes affecting the SSDs that much. I have an SSD that is being used for firewall/IPS logging, and it's been in use every day for the past few years. No SMART errors, and expected life is still at 99%. I can't really see more usage in a homelab than that. In an enterprise environment, sure - lots of big databases and constant access/changes/etc. I have a spare 500GB SSD I will be using for the CloudDrive and downloader cache. Thanks for the responses again, everyone! -MandalorePatriot
  29. 1 point
    srcrist

    Warning from GDrive (Plex)

    Out of curiosity, does Google set different limits for the upload and download threads in the API? I've always assumed that since I see throttling around 12-15 threads in one direction, that the total number of threads in both directions needed to be less than that. Are you saying it should be fine with 10 in each direction even though 20 in one direction would get throttled?
  30. 1 point
    It won't really limit your ability to upload larger amounts of data, it just throttles writes to the drive when the cache drive fills up. So if you have 150GB of local disk space on the cache drive, but you copy 200GB of data to it, the first roughly 145GB of data will copy at essentially full speed, as if you're just copying from one local drive to another, and then it will throttle the drive writes so that the last 55GB of data will slowly copy to the CloudDrive drive as chunks are uploaded from your local cache to the cloud provider. Long story short: it isn't a problem unless high speeds are a concern. As long as you're fine copying data at roughly the speed of your upload, it will work fine no matter how much data you're writing to the CloudDrive drive.
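    To put rough numbers on that, a back-of-the-envelope estimate (pure arithmetic, nothing CloudDrive-specific - all figures below are example assumptions):

        # Rough copy-time estimate for a write larger than the cache. All figures are examples.
        $copyGB      = 200   # total data to write
        $cacheGB     = 145   # usable cache before write throttling kicks in
        $diskMBps    = 200   # local disk write speed
        $uploadMBps  = 12.5  # ~100 Mbit/s upload link
        $fastSeconds = ($cacheGB * 1024) / $diskMBps
        $slowSeconds = (($copyGB - $cacheGB) * 1024) / $uploadMBps
        "{0:N0} min at disk speed + {1:N0} min at upload speed" -f ($fastSeconds/60), ($slowSeconds/60)

    With these example figures, the first 145GB lands in about 12 minutes and the remaining 55GB trickles up over roughly 75 minutes - which is the throttling behavior described above.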
  31. 1 point
    Umfriend

    Drivepool on new Windows install

    No, don't remove the drives in the GUI. That would cause DP to try to move files off of the HDDs. You do not want to change your Pool at all; you just want to migrate it to a new environment. So, as to "just do my new Windows HS install, install DrivePool and then physically re-connect the drives?" -> Exactly right.
  32. 1 point
    This information is pulled from Windows' Performance counters. So it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
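    For reference, Windows also ships a generic rebuild for its performance-counter registry (I'd expect the linked article to cover something along these lines); from an elevated prompt:

        lodctr /R             # rebuild the counter registry from its backup store
        winmgmt /resyncperf   # re-sync WMI's copy of the counters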
  33. 1 point
    @TeleFragger From the image, it looks like it's writing to the G:\ drive, which is not an SSD. So my guess is your settings are not configured correctly. If you could, open a ticket at https://stablebit.com/Contact
  34. 1 point
    srcrist

    CloudDrive and GSuite and some errors

    The file errors make me wonder if there is something wrong with the installation. Go ahead and run the troubleshooter (http://wiki.covecube.com/StableBit_Troubleshooter) and include your ticket number. I think you'll have to wait for Christopher to handle this one. I'm not sure what's going on.
  35. 1 point
    srcrist

    CloudDrive and GSuite and some errors

    That makes me wonder if there is a corruption in the installation. Do a search and see if you can find the service folder. Search for a directory called "ChunkIds", which is where your DB is stored.
  36. 1 point
    Thanks, Christopher. I agree. Currently, I have a SMART warning just for LCC, so I did permanently ignore it.
  37. 1 point
    srcrist

    Google Drive Existing Files

    Unfortunately, because of the way CloudDrive operates, you'll have to download the data and reupload it again to use CloudDrive. CloudDrive is a block-based solution that creates an actual drive image, chops it up into chunks, and stores those chunks on your cloud provider. CloudDrive's data is not accessible directly from the provider--by design. The reverse of this is that CloudDrive also cannot access data that you already have on your provider, because it isn't stored in the format that CloudDrive requires. There are other solutions, including Google's own Google File Stream application, that can mount your cloud storage and make it directly accessible as a drive on your PC. Other similar tools are rClone, ocaml-fuse, NetDrive, etc. There are pros and cons to both approaches. I'll list some below to help you make an informed decision:

    Block-based pros:
    - A block-based solution creates a *real* drive (as far as Windows is concerned). It can be partitioned like a physical drive, you can use file-system tools like chkdsk to preserve the data integrity, and literally any program that can access any other drive in your PC works natively without any hiccups. You can even use tools like DrivePool or Storage Spaces to combine multiple CloudDrive drives or volumes into one larger pool.
    - A block-based solution enables end-to-end encryption. An encrypted drive is completely obfuscated both from your provider and from anyone who might access your data by hacking your provider's services. Not even the number of files, let alone the file names, is visible unless the drive is mounted. CloudDrive has built-in encryption that encrypts the data before it is even written to your local disk.
    - A block-based solution also enables more sophisticated sorts of data manipulation--consider the ability to access parts of files without first downloading the entire file. The ability to cache sections of data locally also falls under this category, which can greatly reduce API calls to your provider.

    Block-based cons:
    - Data is obfuscated even if unencrypted, and unable to be accessed directly from the provider. We already discussed this above, but it's definitely one of the negatives--depending on your use case. The only thing that you'll see on your provider is thousands of chunks a few dozen megabytes in size. The drive is inaccessible in any way unless mounted by the drivers that decrypt the data and provide it to the operating system.
    - You'll be tethered to CloudDrive for as long as you keep the data on the cloud. Moving the data outside of that ecosystem would require it to again be downloaded and reuploaded in its native format.

    Hope that helps.
  38. 1 point
    Thanks - I'll set this all up once I can get my hands on a good sized SSD drive to use as my cache for the clouddrive.
  39. 1 point
    Jaga

    Hierarchical file duplication?

    Got it, looks like you're covered for now then.
  40. 1 point
    I would hope, and am pretty sure, this won't work. DP checks, in case of duplication, whether files are placed on different *physical* devices. @OP: I recently bought (I think) an LSI 9220-8i, flashed to IT mode (the P20...07 version). The label was Dell PERC H310; it should be the same as the IBM M1015. I am not sure, but as far as I understand it, the 9xxx number also relates to the kind of BIOS it is flashed with. In any case, it works like a charm. One thing though: these controllers run HOT, and it is advisable to mount a fan on top (just use a 40mm fan and mount it by screws running into the upstanding bars or somesuch of the heatsink).
  41. 1 point
    Umfriend

    Moving from WHS V1

    For me, the client backups are what makes Windows Server worthwhile. File sharing, running a certain downloading client (that we'll not discuss), media server, etc. are nice extras (although that can be done on a W10 machine as well, of course). I am currently on WHS 2011 and intend to go to WSE 2019 (which, as it turns out, will be a SKU, but WS Standard won't offer the Essentials role anymore). It'll be a steep learning curve for me as well, which is a shame. But that is the one reason I am still waiting a bit to go WSE 2019: it'll be longer before I have to go through that experience again. My main issue is that with WHS v1 (and 2011, somewhat) there were online resources aimed at, well, SOHO. With WSE 2016 and higher, that is far less the case. If you find one, please let me know.
  42. 1 point
    Jaga

    Moving from WHS V1

    The only tidbit of wisdom that I can offer is what I've been told before about how Drivepool "talks" to the pool drives. It merely passes commands to them like Windows would to any NTFS drive (although there are some "wonky" things NTFS does that Alex had to work around). I wouldn't think this would interfere with regular copy/move/delete commands, even on system folders. @Christopher (Drashna) is the real WHS/WSE/Drivepool guru however, so it'd be best to wait and hear what he has to say. As for the rest of your criteria - even Windows 7 Pro + Drivepool can handle them, with the exception of WHS V1 style client backups. The W7 Ultimate server I'm running on now (which I'm going to be upgrading to WSE 2016 soon) does all of them except the backup (currently using Macrium Reflect). After I migrate to WSE, I *think* I'll be using Veeam for backups based on the research I've done so far. If you haven't looked at it, it may be worth the time. And while WSE might seem to be overkill in a lot of circumstances, I value it highly for the learning experience it provides. Some of what it does is "next level" stuff, which you don't get to see in a standard desktop operating system. That comes in handy for me since I'm in the IT field professionally, though it may not for a lot of people. Because of that, I feel it's worth the extra effort. I'm going to be installing it on top of a Hyper-V on bare metal... just for the experience. If you're into server based installations for any reason, it's good to keep up on the current popular platforms.
  43. 1 point
    You'll need to disable the SSD Optimizer balancer. Do that, and see if it works (it should)
  44. 1 point
    Jaga

    New Update Problem

    Many people have been losing settings on updates recently.
  45. 1 point
    Yup, sounds about right.
  46. 1 point
    fly

    Drivepool SSD + Archive and SnapRAID?

    Well then, seems I have a weekend project. Will report back with script in case anyone else would like to use it.
  47. 1 point
    Dave Hobson

    My first true rant with Drivepool.

    Thanks for all the awesome input, everyone. I think I'm gonna stay with NTFS, especially as the SnapRAID site seemingly throws up some suggestions linking to this article: https://en.wikipedia.org/wiki/Comparison_of_file_verification_software With regards to shucking... although, as mentioned, I have done this in the past (with 4TB drives, when 4TB was the latest thing), the cost difference is negligible, especially bearing in mind the reasons Christopher mentions, and it's not an approach I want to return to. Though the cost isn't really the issue: my current aim is to get rid of/repurpose some of those 4TB drives and replace them with another couple of 8TB drives. Maybe when that's done I will look again at SnapRAID and its parity. If Google ever backtracks on unlimited storage at a stupidly low price in the same way Amazon did, then it may scale higher on my priorities, but for now... EDIT: Now I'm even more curious, as I have just read a post on /r/snapraid suggesting that it's possible to RAID 0 a couple of pairs of 4TB drives and use them as 8TB parity drives. Though the parity would possibly be less stable, it would give me parity (even though it's not a priority), would allow for data scrubbing (my main aim), and would mean that those 4TB drives wouldn't sit in a drawer gathering dust. So if any of you SnapRAID users have any thoughts on this, I would be glad for any feedback/input.
  48. 1 point
    msq

    Yay - B2 provider added :)

    In the very latest beta, 1.1.0.991, the B2 provider has been added. Downloaded, installed, set up and using it already. Thank you, guys!
  49. 1 point
  50. 1 point
    Pancakes

    I want to encrypt my drives

    I never updated this thread, but I encrypted my drives one by one, with no availability impact. I now have my entire pool encrypted and it works just as normal. Fantastic!
