Covecube Inc.

Leaderboard


Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. 4 points
    VERY IMPRESSED!
    - Didn't need to create an account and password
    - Same activation code covers EVERY product on EVERY computer!
    - Payment information remembered so additional licenses are purchased easily
    - Nice bundle and multi-license discount
    I'm in love with the Drive Pool and Scanner. Thanks for a great product and a great buying experience. -Scott
  2. 2 points
    Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...
    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')
    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit.
    Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example):
    Here is the code I used (it's also attached in the .zip file):

    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}
    # This creates an empty table to store details of the files
    $filelist = @()
    # This goes through each file, and populates the table with the drive name, file name and directory name
    # (the Substring offsets 13,5 and 64 assume my C:\DrivePool\{disk name} mount paths; adjust them for your own layout)
    foreach ($file in $files) {
        $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
    }
    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn
    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
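For anyone who would rather avoid the hardcoded Substring offsets, here is a rough Python sketch of the same catalog idea. The function names are my own, and it assumes the same one-folder-per-disk mount layout (e.g. C:\DrivePool\{disk name}) described above — it is not part of Quinn's script:

```python
import csv
import os

def catalog_pool(pool_root):
    # Each subfolder of pool_root is assumed to be one mounted pool disk,
    # mirroring the C:\DrivePool\{disk name} layout from the PowerShell script.
    rows = []
    for entry in sorted(os.scandir(pool_root), key=lambda e: e.name):
        if not entry.is_dir():
            continue
        for dirpath, _dirs, files in os.walk(entry.path):
            for name in sorted(files):
                rows.append({
                    "Drive": entry.name,
                    "FileName": name,
                    # Path relative to the disk's mount point, so no brittle
                    # Substring offsets are needed.
                    "DirectoryName": os.path.relpath(dirpath, entry.path),
                })
    return rows

def write_catalog(rows, out_path):
    # Same .csv columns as the PowerShell version: Drive, FileName, DirectoryName.
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["Drive", "FileName", "DirectoryName"])
        writer.writeheader()
        writer.writerows(rows)
```

Scheduling it daily is the same idea as the 3am task above (e.g. Task Scheduler running the script and writing the .csv somewhere convenient).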
  3. 2 points
    The problem is that you were still on an affected version, 3216. Upgrading to the newest version forcefully shuts down the StableBit Scanner Service, so the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives just in case. A scheduled automatic backup would be great. The files are extremely small and don't take up a lot of space, so I don't see a reason not to implement it, feature-wise.
  4. 2 points
    You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" drivepool optionally creates while balancing to trigger it. Check my post history.
  5. 2 points
    Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
  6. 2 points
    I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it explains why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
  7. 2 points
    I used this adapter cable for years and never had a problem. Before I bought my server case I had a regular old case, with (3) 4-in-3 hot swap cages next to the server. I ran the SATA cables out the back of my old case, and a power supply sitting on the shelf by the cages powered them. The cool thing was that I ran the power cables that usually go to the motherboard inside of the case from the second power supply. I had an adapter that would plug into the motherboard and the main power supply that the computer would plug into. The adapter had a couple of wires coming from it to a female connection, and you would plug your second power supply into it. What would happen is that when you turned on your main computer, the second power supply would come on too. That way your computer will see all of your hard drives at once. Of course, when you turned off your server, both of the power supplies would turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
  8. 2 points
    CC88

    Is DrivePool abandoned software?

    The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together will just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of products that one (I think?) developer has produced and is offering for a low - as Chris says, almost impulse-buy - price. Keep up the good work and bug squashing, guys!
  9. 2 points
    It's just exaggerated. The URE average rates of 10^14/15 are taken literally in those articles, while in reality most drives can survive a LOT longer. It's also implied that a URE will kill a resilver/rebuild without exception. That's only partly true, as e.g. some HW controllers and older SW have a very small tolerance for it. Modern and updated RAID algorithms can continue a rebuild with that particular area reported as a reallocated area to the upper FS, IIRC, and you'll likely just get a pre-fail SMART attribute status, as if you had experienced the same thing on a single drive, which will act slower and hang on that area in much the same manner as a rebuild will. I'd still take striped mirrors for max performance and reliability, and parity only where max storage vs. cost is important, albeit in small arrays striped together.
  10. 2 points
    To clarify a couple of things here (sorry, I did skim here): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means the 1TB drives, first, and then once they're full enough, on the 500GB drive. So, yes, this is normal. The Drive Space Equalizer doesn't change this, but just causes it to rebalance "after the fact" so that it's equal. So, once the 1TB drives get to be about 470GB free/used, it should then start using the 500GB drive as well. There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks, when this happens (red arrows, specifically). If you don't see that, then it defaults to the "normal" behavior.
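As a toy illustration of that default strategy (hypothetical numbers and function name — this is a sketch of the described behavior, not DrivePool's actual code), new files simply chase the largest free space:

```python
def pick_target_disk(free_space):
    # Default placement sketch: a new file lands on the disk with the most
    # free space. free_space maps disk name -> free bytes.
    return max(free_space, key=free_space.get)

# The two 1TB drives win until their free space drops to roughly the
# 500GB drive's level, after which all three start sharing new files.
disks = {"1TB-a": 900e9, "1TB-b": 850e9, "500GB": 450e9}
target = pick_target_disk(disks)  # "1TB-a"
```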
  11. 2 points
    Christopher (Drashna)

    Moving from WHS V1

    Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out these: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
  12. 2 points
    Jaga

    Recommended SSD setting?

    Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70°C, however - Samsung, as an example, rates many of their SSDs between 0 and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
  13. 2 points
    Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
  14. 2 points
    With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from.
    There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay" I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint.
    After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened.
    Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
  15. 2 points
    Jose M Filion

    I/O Error

    Just wanted to give an update for those who have problems with Xfinity's new 1Gb line - I basically had them come out, showed them how the line was going in and out with PingPlotter, and they rewired everything and changed out the modem. Once they did that, everything stabilized and has been working great - thank you for all your help, guys! Long live StableBit Drive! lol
  16. 2 points
    1x128GB SSD for OS, 1x8TB, 2x4TB, 2x2TB, 1x900GB. The 8TB and 1x4TB+1x2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB+2TB are in case some HDD fails. The 900GB is for trash of a further unnamed downloading client. So actually, a pretty small server given what many users here have.
  17. 2 points
    The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  18. 2 points
    Mostly just ask.
  19. 2 points
    ...just to chime in here... remember that expanders have firmware too. I have been running 1x M1015 + 1x RES2SV240 in my 24-bay rig for 5+ years now... I remember that there was a firmware update for my expander that improved stability with SATA drives (which is the standard use case for the majority of the semi-pro users here, I think). Upgrading the firmware could be done with the same utility as for the HBA, as far as I remember... instructions were in the firmware readme. Edit: here's a link to a howto: https://lime-technology.com/forums/topic/24075-solved-flashing-firmware-to-an-intel-res2xxxxx-expander-with-your-9211-8i-hba/?tab=comments#comment-218471 Regards, Fred
  20. 2 points
    Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
  21. 2 points
    Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
  22. 2 points
    The import/export feature would be nice. I guess right-clicking on the folder and 7zip'ing it is the definitive solution, for now, until an automated process evolves. Regarding Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge? I have it running on both W7 and W10 and it purges on both. Both OSes are clean installs. Both run the same EVO 500... alongside a WD spinner. Both are Dells. It seems to me that the purge is triggered by some integral part of the software once it's updated, like an auto-purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
  23. 2 points
    As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
  24. 2 points
    I used Z once, only to find that a printer with some media card slot wanted it for itself or would not print at all. Same for some BlackBerry devices claiming Z. So yeah, go high up, but not Y and Z. I use P, Q and R.
  25. 2 points
    You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work:
    - The VPN connects all computers on the same local net.
    - Each computer has a Pool to hold data, and the Pool drive is shared so the local net can access it.
    - CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool.
    - Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times.
    - The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume.
    Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
  26. 2 points
    It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
  27. 2 points
    "dpcmd remeasure-pool x:"
  28. 2 points
    "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
  29. 2 points
    I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer. Using a 3rd party browser solved this. But I'm always using ReFS instead of NTFS...
  30. 2 points
    wolfhammer

    not authorized, must reauthorize

    I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
  31. 2 points
    HellDiverUK

    Build Advice Needed

    Ah yes, I meant to mention BlueIris. I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use. With QS I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption. I have 3 HikVision cameras, two 4MP and one 5MP, all running at their maximum resolution. I record 24/7 on to an 8TB WD Purple drive, with events turned on. QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (I can highly recommend the app; it's basically the only way we access the system apart from some admin on the server's console). Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
  32. 2 points
    Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
  33. 2 points
    Umfriend

    Recommended server backup method?

    Sure. So DP supports pool hierarchies, i.e., a Pool can act like it is a HDD that is part of a(nother) Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too).
    In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way; both duplicates could end up on one of the HDDs. Lose the system and you lose everything, as there is no duplicate on CD.
    To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (HDDs in) Pool A and the other duplicate will end up at Pool B. This is because DP will, for the purpose of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A, and you would be backing up only one duplicate and still be certain you have all files.
    Edit: In my case, this allows me to back up a single 4TB HDD (that is partitioned into 2 x 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have this duplicated with another 4TB HDD. So, I have:
    Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    Pool C: Pool A + Pool B, duplicated.
    So, every file in Pool C is written to Pool A and Pool B. It is therefore on both 4TB HDDs that are in the respective Pools A and B. Next, I back up both partitions of either HDD, and I have only one backup with the guarantee of having one copy of each file included in the backup.
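The guarantee described above — each duplicate lands on a different direct member, even when those members are themselves pools — can be sketched as a toy model (the function and names are hypothetical illustrations, not DrivePool internals):

```python
def place_duplicates(members, copies=2):
    # A duplicated pool never puts two copies of a file on the same direct
    # member. Members can themselves be pools, so with members
    # ["Pool A", "Pool B"] one copy is guaranteed to land in each sub-pool.
    if copies > len(members):
        raise ValueError("not enough members to hold all copies")
    return members[:copies]

# Pool C = Pool A + Pool B with x2 duplication: one copy on each sub-pool,
# so backing up only Pool A's HDDs still captures one copy of every file.
targets = place_duplicates(["Pool A", "Pool B"])
```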
  34. 1 point
    Viktor

    StableBit DrivePool Service

    I changed the pool service to delayed start some time ago (wanted to separate the starts of CloudDrive and DrivePool). Works fine, no problems so far.
  35. 1 point
    This information is pulled from Windows' Performance counters. So it may not have been working properly temporarily. Worst case, you can reset them: http://wiki.covecube.com/StableBit_DrivePool_Q2150495
  36. 1 point
    So, removing StableBit Scanner from the system fixed the issue with the drives? If so.... well, uninstalling the drivers for the controller (or installing them, if they weren't) may help out here. Otherwise, it depends on your budget. I personally highly recommend the LSI SAS 9211 or 9207 cards, but they're on the higher end of consumer/prosumer.
  37. 1 point
    Keep awake - get a copy of StableBit Scanner and tell it not to throttle SMART queries, so the drive stays awake. Or use the USB drive's utilities to set its sleep timer. DrivePool just manages the pool and allows disks to spin down if that's how their firmware or Windows is set. As far as balancing goes - I turn off automatic balancing so files always stay where they were originally put. Less wear and tear, and less of a performance hassle if the server is being used. New files are added to the disks with the most available space, bringing the pool back in line with continuing to be balanced. If I ever see the balance on the disks being less optimal than I'd like, I kick off a manual re-balance with the Disk Space Equalizer plugin and go to bed. Really the only thing that kicks the pool out of balance for me is the monthly Full & 3x/week Differential backups I put on the drives. And I don't care if it's slightly unbalanced because of them - it wipes the fulls every month, and it only keeps two copies of the differentials at any one time. Plus - you can force those to specific disk(s) with file placement rules. If you -want- to schedule periodic (nightly) rebalances, turn on "Balance every day at..." in the Balancing UI. I think that DrivePool does a great job of balancing on its own, and it's something that is easy for people to overthink. Just disable automatics and manually do it when you want. Edit: I attached a screenshot of my pool drives - I haven't done a manual re-balance on it for over a month now, and probably won't for a long time.
  38. 1 point
    Jaga

    Stablebit Thumbnails

    Windows 7 shouldn't be natively creating the thumbs.db files, though. The last Windows OS to do that and rely on them was Windows XP. A bunch of helpful information on thumbnails in Windows is here: https://www.ghacks.net/2014/03/12/thumbnail-cache-files-windows/ Starting with Vista, they were moved to a central location for Windows management: %userprofile%\AppData\Local\Microsoft\Windows\Explorer\thumbcache_xxx.db If they are being re-created on your system, PetaBytes, I'm not sure why. I triple-checked my Windows 7 Ultimate x64 media server, and none of the movie/picture folders have the files in them (visible OR hidden). This procedure (and a reboot after) might help your issue: https://www.sitepoint.com/switch-off-thumbs-db-in-windows/ I still maintain that a 3rd party utility like Icaros will help the most. What it does (in a nutshell) is maintain its own cache of thumbnails, so that *if* Windows loses them for a folder, Icaros will supply them back to Windows instead of it having to re-generate them slowly.
  39. 1 point
    Sorry to be negative, but I doubt if 3198 fixed it. I uninstalled 3191, copied my previous folders for 3062 back in place and reinstalled 3062: all disk settings and data were then back to normal. Then I went directly to 3204, since that has the new settings backup and restore feature, bypassing 3198. The result was that no settings were retained (except for 1 of 11 disks, oddly) and all disks needed to be scanned for the first time. (This time, however, I did a printscreen of my names & locations, and re-entered everything into 3204. I'll let Scanner rescan everything and just move on to the future.)
  40. 1 point
    So it's the StableBit DrivePool metadata that is duplicated, then. This ... is standard, and it's hard-coded to x3 duplication, actually. We don't store a lot of information in this folder, but reparse point info IS stored here (junctions, symlinks, etc). So if you're using a lot of them on the pool, then that would be why you're seeing this behavior. And in this case, you don't want to mess with this folder.
  41. 1 point
    MrBlond

    3TB Drive showing as 115 PB

    Thanks for the info. I had purchased two of the Orico enclosures so have just swapped the unit and disk over and the new one is reporting correctly. I have just emailed the supplier (via Amazon) to ask for a replacement. Thanks a lot for your help on this. Much appreciated. BTW, Windows was reporting correctly and I had tried the "DoNotCorrectSize" option.
  42. 1 point
  43. 1 point
    What's fantastic is that DrivePool supports them!
  44. 1 point
    I needed to go into Disk Management and activate the drive on server 2008. Perhaps you will have to do the same.
  45. 1 point
    I run one setup with 10 Google accounts to get away from the daily limits. So yes, it works!
  46. 1 point
    banderon

    Awesome product and awesome support

    I just wanted to say thanks for an awesome product. Drivepool is exactly what I've been looking for. Also, just as important, the support I've seen provided here and on reddit is probably the best of any company I've ever encountered. It's all the more commendable considering how small your team is. That level of customer service, more than anything else, prompted me to grab a copy and to recommend it to others. Thanks!
  47. 1 point
    DP has offered hierarchical Pools since recently, version 2.2.0.744 or so. If you're on an older version you'd need to update. Not sure if there has been a stable release with this feature already. I am running the .746 BETA (an early adopter, exactly for this feature).
  48. 1 point
    Christopher (Drashna)

    Gallbladders suck

    Seriously, their only purpose is to create stones (and then pain), from what I can tell.
  49. 1 point
    There is ABSOLUTELY NO ISSUE using either the built in defragmentation software, or using 3rd party software. Specifically, StableBit DrivePool stores the files on normal NTFS volumes. We don't do anything "weird" that would break defragmentation. At all. In fact, I personally use PerfectDisk on my server (which has 12 HDDs), and have never had an issue with it.
  50. 1 point
    Christopher (Drashna)

    Surface scan and SSD

    Saiyan, No. The surface scan is read only. The only time we write is if we are able to recover files, after you've told it to. The same thing goes with the file system check. We don't alter any of the data on the drives without your explicit permission. And to clarify, we don't really identify if it's a SSD or HDD. We just identify the drive (using Windows APIs). How we handle the drive doesn't change between SSD or HDD. And in fact, because of what Scanner does, it doesn't matter what kind of drive it is because we are "hands off" with your drives. Grabbing the information about the drives and running the scans are all "read only" and doesn't modify anything on the drives. The only time we write to the drives is when you explicitly allow it (repair unreadable data, or fix the file system). And because we use built in tools/API when we do this, Windows should handle any "SSD" specific functionality/features. I just wanted to make this clarification, because you seem to be very hesitant about Scanner and SSDs. But basically Scanner itself doesn't care if the drive is a SSD or not, because nothing we do should ever adversely affect your SSD. Data integrity is our top priority, and we try to go out of our way to preserve your data.
