Leaderboard

Popular Content

Showing content with the highest reputation since 05/27/13 in all areas

  1. malse

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3 drives: C:\ (SSD), E:\ (NVMe), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted on Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D: when I run ls -al on D, it returns 0 results, yet strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems to be the only logical difference (aside from it being mechanical) between this drive and the others. They are all NTFS.
    5 points
  2. hammerit

    WSL 2 support

    I tried to access my drivepool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA. ➜ fludi cd /mnt/g ➜ g ls ls: reading directory '.': Input/output error Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
    4 points
  3. VERY IMPRESSED! Didn't need to create an account and password. The same activation code covers EVERY product on EVERY computer! Payment information is remembered, so additional licenses are purchased easily. Nice bundle and multi-license discount. I'm in love with the DrivePool and Scanner. Thanks for a great product and a great buying experience. -Scott
    4 points
  4. My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
    3 points
  5. srcrist

    Optimal settings for Plex

    If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex: 20 MB chunk size; 50+ GB expandable cache; 10 download threads; 5 upload threads; background I/O turned off; upload threshold of 1 MB or 5 minutes; 20 MB minimum download size; 20 MB prefetch trigger; 175 MB prefetch forward; 10 second prefetch time window.
    3 points
  6. Shane

    NTFS Permissions and DrivePool

    Spend long enough working with Windows and you may become familiar with NTFS permissions. As an operating system intended to handle multiple users, Windows maintains records that describe which user owns each file and folder and how much access each user has to those files and folders. These records are kept on the same volumes as those files and folders. Unfortunately, in the course of moving or copying folders and files around, Windows may fail to properly update these settings for a variety of reasons (e.g. bugs, bit errors, power interruptions, failing hardware). This can leave you with files and folders that you can no longer delete, access or even take ownership of, sometimes for no obvious reason when you check via the Security tab of the file/folder's Properties (they can look fine but actually be broken inside). So, first up, here's what the default permissions for a pool drive should look like: And now here's what the default permissions for the hidden poolpart folder on any drive added to the pool should look like: The above are taken from a freshly created pool using a previously unformatted drive, on a computer named CASTLE that is using Windows 10 Professional. I believe it should be the same for all supported versions of Windows so far. Any entries that are marked Deny override entries that are marked Allow, with limited exceptions for SYSTEM. It is optional for a hidden poolpart folder to inherit its permissions from its parent drive. It is recommended that the Administrators account have Full control of all poolpart folders, subfolders and files. It is necessary that the SYSTEM account have Full control of all poolpart folders, subfolders and files. The permissions of files and folders in a pool drive are the permissions of those files and folders in the constituent poolpart folders. Caveat: duplicates are expected to have identical permissions (because in normal practice, only DrivePool should be creating them). My next post in this thread will describe how I go about fixing these permissions when they go bad.
    2 points
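As a rough illustration of the kind of repair the post above hints at (this is not the author's method; the drive letter and PoolPart GUID are placeholders), a reset with the standard Windows takeown and icacls tools might look like the sketch below.

```powershell
# Minimal sketch, assuming X: is a pooled drive; the PoolPart GUID below is a placeholder.
# Run from an elevated PowerShell prompt, and review carefully before using on real data.
$poolPart = 'X:\PoolPart.00000000-0000-0000-0000-000000000000'

# Retake ownership of the whole tree for the Administrators group.
takeown /F $poolPart /A /R /D Y

# Grant SYSTEM and Administrators full control recursively
# (per the post above, SYSTEM is required and Administrators is recommended).
icacls $poolPart /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" /T

# Optional, since the post notes inheritance from the parent drive is optional:
icacls $poolPart /inheritance:e /T
```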
  7. gtaus

    Removing drive from pool

    Have you determined what speed your TV streaming device pulls movies from your Storage Spaces or DrivePool? For example, when I watch my DrivePool GUI, I can see that my Fire TV Stick is pulling about ~4 MB/s tops for streaming 1080p movies. I don't suffer any stuttering or caching on my system. If I try to stream movies >16GB, then I start to see problems and caching issues. But at that point, I know I have reached the limits of my Fire TV Stick with its limited memory storage and its low-power processor. It is not a limit of how fast DrivePool can send data over my wifi. Well, there is how many bars are available to indicate how strong the connection is, but bars do not equal speed. On my old 56K router, I would also have 4 or 5 bars indicating a strong connection, but I was constantly fighting buffering issues while streaming. I upgraded to a 1 gigabit router, which is much faster, and that took care of my buffering problems. Well, good questions, but beyond my level of tech expertise with that equipment. I get my internet service from a local telephone company, and they have a computer support team on staff to answer questions and help customers with their equipment. If you are leasing your equipment from ATT, then they might have a support team you could contact for assistance. At least you have something that is currently working for you, so it's not like you are in a panic. After years of running Storage Spaces on my system, and now with DrivePool for just less than 1 year, I don't yet understand why you are experiencing streaming issues with DrivePool. On my system, it made no difference at all in regard to streaming, which as I have stated runs at about 4 MB/s tops and usually much less on my system.
    2 points
  8. This is a topic that comes up from time to time. Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces. However, displaying those drives in a meaningful way in the UI, and maintaining the surface and file system scans at the same time, is NOT simple. At best, it will require a drastic change, if not an outright rewrite, of the UI. And that's not a small undertaking. So, can we? Yes. But do we have the resources to do so? Not as much (we are a very small company).
    2 points
  9. I've also been bad about checking the forums. It can get overwhelming, and more difficult to do. But that's my resolution this year: to make a big effort to keep up with the forum.
    2 points
  10. (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 )
    2 points
  11. methejuggler

    Plugin Source

    I actually wrote a balancing plugin yesterday which is working pretty well now. It took a bit to figure out how to make it do what I want. There's almost no documentation for it, and it doesn't seem very intuitive in many places. So far, I've been "combining" several of the official plugins together to make them actually work together properly. I found the official plugins like to fight each other sometimes. This means I can have SSD drop drives working with equalization and disk usage limitations with no thrashing. Currently this is working, although I ended up re-writing most of the original plugins from scratch anyway simply because they wouldn't combine very well as originally coded. Plus, the disk space equalizer plugin had some bugs in a way which made it easier to rewrite than fix. I wasn't able to combine the scanner plugin - it seems to be obfuscated and baked into the main source, which made it difficult to see what it was doing. Unfortunately, the main thing I wanted to do doesn't seem possible as far as I can tell. I had wanted to have it able to move files based on their creation/modified dates, so that I could keep new/frequently edited files on faster drives and move files that aren't edited often to slower drives. I'm hoping maybe they can make this possible in the future. Another idea I had hoped to do was to create drive "groups" and have it avoid putting duplicate content on the same group. The idea behind that was that drives purchased at the same time are more likely to fail around the same time, so if I avoid putting both duplicated files on those drives, there's less likelihood of losing files in the case of multiple drive failure from the same group of drives. This also doesn't seem possible right now.
    2 points
  12. Managed to fix this today as my client was giving errors also. 1) Install the beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344). 2) Reboot; don't start CloudDrive and/or the service. 3) Add the below to C:\ProgramData\StableBit CloudDrive\Service\Settings.json: "GoogleDrive_ForceUpgradeChunkOrganization": { "Default": true, "Override": true } 4) Start the service & CloudDrive. It should kick in straight away. I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this helps.
    2 points
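For anyone who prefers to script the manual edit described above, here is a hedged PowerShell sketch of the same change. Assumptions: the Settings.json path is exactly as quoted in the post, the file already contains a flat JSON object, and the service can be found by a display name containing "CloudDrive"; verify all of that on your own system before running.

```powershell
# Sketch only: automate the Settings.json override described in the post above.
$settingsPath = 'C:\ProgramData\StableBit CloudDrive\Service\Settings.json'

# Stop the CloudDrive service first (matching by display name is an assumption; check Get-Service).
Get-Service -DisplayName '*CloudDrive*' | Stop-Service -Force

# Load the existing settings, add/overwrite the override, and write the file back.
$settings = Get-Content $settingsPath -Raw | ConvertFrom-Json
$settings | Add-Member -NotePropertyName 'GoogleDrive_ForceUpgradeChunkOrganization' `
    -NotePropertyValue ([pscustomobject]@{ Default = $true; Override = $true }) -Force
$settings | ConvertTo-Json -Depth 5 | Set-Content $settingsPath

# Start the service again; per the post, the chunk re-organization should kick in on mount.
Get-Service -DisplayName '*CloudDrive*' | Start-Service
```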
  13. Unintentional Guinea Pig Diaries. Day 8 - Entry 1 I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script on those thoughts, but it would be too dark for the people in this world. I must learn from my transgressions. In an act of self-punishment, and in an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyways, but he would never have done the things I did. For that I have decided he holds more wisdom than his human. This must have pleased the Data Gods, because this morning I awoke with back pain and two of my drives mounted and functioning. The original drive, which had completed the upgrade, had been working, and then went into "drive initializing"...is now working again. The drive that I had tried to mount and said it was upgrading with no % given has mounted (FYI, a 15TB drive with 500GB on it). However, my largest drive is still sitting at "Drive queued to perform recovery". Maybe one more night in the dog's bed will complete the offering required to the Data Gods. End diary entry. (P.S. Just in case you wondered...that spoiled dog has "The Big One" Lovesac as a dog bed. In a pretty ironic fashion, their website is down. #Offering)
    2 points
  14. Unintentional Guinea Pig Diaries. Day 5 - Entry 2 *The Sound of Crickets* So I'm in the same spot as I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauth right now, I would, so I could get it on my API, or at the very least get it out of "queued". Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out. End diary entry.
    2 points
  15. gtaus

    2nd request for help

    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the removed drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue. Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one drive where you want to consolidate all your information. Once you verify all your files have been successfully reassembled on that one drive, you could go back and format the other drives. The main advantage I see with using DrivePool is that the files are written to the HDD as standard NTFS files, and if you decide to leave the DrivePool environment, all those files are still accessible by simply viewing the hidden directory. I am coming from the Windows Storage Spaces system, where bits and pieces of files are written to the HDDs in the pool. When things go bad with Storage Spaces, there is no way to reassemble the broken files spread across a number of HDDs. At least with DrivePool, the entire file is written to a HDD as a standard file, so in theory you should be able to copy those files from the pool HDDs over to one HDD and have a complete directory. I used the Duplication feature of DrivePool for important directories. Again, I am still learning the benefits of DrivePool over Storage Spaces, but so far, I think DrivePool has the advantage of recovering data from a catastrophic failure, whereas I lost all my data in Storage Spaces. If there is a better way to transfer your DrivePool files to 1 HDD, I would like to know for my benefit as well.
    2 points
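As an illustration of the "copy everything out of the hidden PoolPart folder" approach described above, here is a hedged robocopy sketch. The drive letters and the PoolPart folder name are placeholders, and robocopy's /L switch lets you preview the copy before committing to it.

```powershell
# Sketch only: consolidate files from a retiring pooled drive's hidden PoolPart folder onto one disk.
# D: = drive being removed, E:\Consolidated = destination (both are assumptions).
$source = 'D:\PoolPart.00000000-0000-0000-0000-000000000000'
$dest   = 'E:\Consolidated'

# /E copies subfolders (including empty ones), /XJ skips junction points,
# /R:1 /W:1 limits retries, /LOG: records what was copied.
# Add /L first to do a dry run that lists files without copying anything.
robocopy $source $dest /E /XJ /R:1 /W:1 /LOG:E:\consolidate.log
```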
  16. They are not comparable products. Both applications are more similar to the popular rClone solution for linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space. None of this really has anything to do with this thread, but since it needs to be said (again): CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider. Security and reliability are two different things. And, in some cases, is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another. We don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive verifications, but they were old revisions of the chunks that corrupted the file system. This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever. Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like testdisk and recuva. But it simply wasn't worth the effort to rebuild the volumes rather than just discard the data and rebuild, because it was expendable data. But genuinely irreplaceable data could be recovered, so it isn't even really accurate to call it data loss. This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point? 
It is, frankly, a problem that we will hopefully never have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality, and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to back up your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track checksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will. The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider, and neither you nor your application of choice have any control over that. If Google decided to pull all developer APIs tomorrow or shut down Drive completely, like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage.
    2 points
  17. I'm always impressed with the extent you go to help people with their questions, no matter how easy or complex. Thanks Chris.
    2 points
  18. Quinn

    [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives in case of needing a recovery for unduplicated files. I know the dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...
    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')
    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .CSV file it generates, showing the location of all of the files in a particular directory (as an example): Here is the code I used (it's also attached in the .zip file):
    # This saves the full listing of files in DrivePool
    $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}
    # This creates an empty table to store details of the files
    $filelist = @()
    # This goes through each file, and populates the table with the drive name, file name and directory name
    foreach ($file in $files) { $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))} }
    # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
    $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation
    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn
    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
    2 points
  19. The problem is that you were still on an affected version (3216). By upgrading to the newest version, the StableBit Scanner service is forcefully shut down, and thus the DiskId files can get corrupted in the upgrade process. Now that you are on version 3246, which fixed the problem, it shouldn't happen anymore on your next upgrade/reboot/crash. I agree wholeheartedly, though, that we should get a way to back up the scan status of drives just in case. A scheduled automatic backup would be great. The files are extremely small and don't take a lot of space, so I don't see a reason not to implement it feature-wise.
    2 points
  20. You can run snapraidhelper (on CodePlex) as a scheduled task to test, sync, scrub and e-mail the results on a simple schedule. If you like, you can even use the "running file" drivepool optionally creates while balancing to trigger it. Check my post history.
    2 points
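A hedged sketch of the scheduled-task half of the suggestion above: the snapraidhelper invocation itself is a placeholder (check that tool's own documentation for its real script path and arguments); only the task-registration cmdlets below are standard Windows PowerShell.

```powershell
# Sketch only: register a nightly task. The script path/arguments are assumptions, not the
# actual snapraidhelper command line.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -File C:\Scripts\snapraid-helper.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'SnapRAID sync and scrub' -Action $action -Trigger $trigger
```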
  21. Issue resolved by updating DrivePool. My version was fairly out of date, and using the latest public stable build fixed everything.
    2 points
  22. I think I found where my issue was occurring: I am being bottlenecked by the Windows OS cache because I am running the OS off a SATA SSD. I need to move that over to part of the 970 EVO. I am going to attempt that OS reinstall/move later and test again. Now the problem makes a lot more sense, and it explains why the speeds looked great in benchmarks but did not manifest in real-world file transfers.
    2 points
  23. OK, the solution is that you need to manually create the virtual drive in PowerShell after making the pool: 1) Create a storage pool in the GUI but hit Cancel when it asks to create a storage space. 2) Rename the pool to something that identifies this RAID set. 3) Run the following command in PowerShell (run as Administrator), editing as needed: New-VirtualDisk -FriendlyName VirtualDriveName -StoragePoolFriendlyName NameOfPoolToUse -NumberOfColumns 2 -ResiliencySettingName simple -UseMaximumSize
    2 points
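After the New-VirtualDisk step above, the new disk still has to be initialized, partitioned and formatted before it appears in Explorer. A common follow-up, not part of the original post, using the same placeholder friendly name, looks like this:

```powershell
# Initialize, partition, and format the virtual disk created in the steps above.
# 'VirtualDriveName' matches the -FriendlyName used in the post; adjust as needed.
Get-VirtualDisk -FriendlyName 'VirtualDriveName' |
    Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'VirtualDriveName'
```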
  24. I used this adapter cable for years and never had a problem. Before I bought my server case, I had a regular old case. I had (3) 4-in-3 hot swap cages next to the server. I ran the SATA cables out the back of my old case. I had a power supply sitting on the shelf by the cages which powered them. The cool thing was that I ran the power cables that usually go to the motherboard inside the case from the second power supply. I had an adapter that would plug into the motherboard and the main power supply that the computer would plug into. The adapter had a couple of wires coming from it to a female connection. You would plug your second power supply into it. What would happen is that when you turn on your main computer, the second power supply would come on. That way your computer will see all of your hard drives at once. Of course, when you turned off your server, both of the power supplies would turn off. Here is a link to that adapter. Let me know what you think. https://www.newegg.com/Product/Product.aspx?Item=9SIA85V3DG9612
    2 points
  25. The current method of separate modules, where we can pick and choose which options to use together, gets my (very strong) vote! Jamming them all together will just create unneeded bloat for some. I would still pay a "forced" bundle price if it gave me the option to use just the modules I need... and maybe add one or more of the others later. I'm amazed at the quality of products that one (I think?) developer has produced and is offering for a low - as Chris says, almost impulse-buy - price. Keep up the good work and bug squashing, guys!
    2 points
  26. To clarify a couple of things here (sorry, I did skim): StableBit DrivePool's default file placement strategy is to place new files on the disks with the most available free space. This means new files go to the 1TB drives first, and then, once they're full enough, to the 500GB drive. So, yes, this is normal. The Drive Space Equalizer doesn't change this, but just causes it to rebalance "after the fact" so that it's equal. So, once the 1TB drives get to be about 470GB free/used, it should then start using the 500GB drive as well. There are a couple of balancers that do change this behavior, but you'll see "real time placement limiters" on the disks when this happens (red arrows, specifically). If you don't see that, then it defaults to the "normal" behavior.
    2 points
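Purely as an illustration of the placement rule described above (new files go to whichever pooled disk currently has the most free space), here is a toy PowerShell check. The drive letters are assumptions, and this is not how DrivePool itself is implemented.

```powershell
# Toy illustration: which pool member would receive the next new file under the
# "most available free space" rule?
$poolMembers = 'D', 'E', 'F'   # hypothetical drive letters of pooled disks
$target = Get-Volume -DriveLetter $poolMembers |
    Sort-Object SizeRemaining -Descending |
    Select-Object -First 1
"Next file lands on {0}: ({1:N0} GB free)" -f $target.DriveLetter, ($target.SizeRemaining / 1GB)
```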
  27. Windows Server 2016 Essentials is a very good choice, actually! It's the direct successor to Windows Home Server. The caveat here is that it does want to be a domain controller (but that's 100% optional). Yeah, the Essentials Experience won't really let you delete the Users folder. There is some hard-coded functionality here, which ... is annoying. Depending on how you move the folders, "yes". E.g., it will keep the permissions from the old folder, and not use the ones from the new folder. It's quite annoying, and why some of my automation stuff uses a temp drive and then moves stuff to the pool. If you're using the Essentials stuff, you should be good. But you should check out this: https://tinkertry.com/ws2012e-connector https://tinkertry.com/how-to-make-windows-server-2012-r2-essentials-client-connector-install-behave-just-like-windows-home-server
    2 points
  28. Even 60°C for an SSD isn't an issue - they don't have the same heat weaknesses that spinner drives do. I wouldn't let it go over 70°C, however - Samsung, as an example, rates many of their SSDs between 0 and 70°C as far as environmental conditions go. As they are currently one of the leaders in the SSD field, they probably have some of the stronger lines - other manufacturers may not be as robust.
    2 points
  29. Jaga

    Almost always balancing

    With the "Disk Space Equalizer" plugin turned -off-, Drivepool will still auto-balance all new files added to the Pool, even if it has to go through the SSD Cache disks first. They merely act as a temporary front-end pool that is emptied out over time. The fact that the SSD cache filled up may be why you're seeing balancing/performance oddness, coupled with the fact you had real-time re-balancing going on. Try not to let those SSDs fill up. I would recommend disabling the Disk Space Equalizer, and just leaving the SSD cache plugin on for daily use. If you need to manually re-balance the pool do a re-measure first, then temporarily turn the Disk Space Equalizer back on (it should kick off a re-balance immediately when toggled on). When the re-balance is complete, toggle the Disk Space Equalizer back off.
    2 points
  30. With most of the topics here targeting tech support questions when something isn't working right, I wanted to post a positive experience I had with DrivePool for others to benefit from. There was an issue on my server today where a USB drive went unresponsive and couldn't be dismounted. I decided to bounce the server, and when it came back up DrivePool threw up error messages and its GUI wouldn't open. I found the culprit - somehow the DrivePool service was unable to start, even though all its dependencies were running. The nice part is that even though the service wouldn't run, the Pool was still available. "Okay," I thought, and did an install repair on StableBit DrivePool through the Control Panel. Well, that didn't seem to work either - the service just flat-out refused to start. So at that point I assumed something in the software was corrupted, and decided to 1) uninstall DrivePool, 2) bounce the server again, 3) run a cleaning utility, and 4) re-install. I did just that, and DrivePool installed to the same location without complaint. After starting the DrivePool GUI I was greeted with the same Pool I had before, running under the same drive letter, with all of the same performance settings, folder duplication settings, etc. that it always had. To check things I ran a re-measure on the pool, which came up showing everything normal. It's almost as if it didn't care that its service was terminal and it was uninstalled/reinstalled. Plex Media Server was watching after the reboot, and as soon as it saw the Pool available the scanner and transcoders kicked off like nothing had happened. Total time to fix was about 30 minutes start to finish, and I didn't have to change/reset any settings for the Pool. It's back up and running normally now after a very easy fix for what might seem to be an "uh oh!" moment. That's my positive story for the day, and why I continue to recommend StableBit products.
    2 points
  31. Just wanted to give an update for those who have problems with the new Xfinity 1Gb line. I basically had them come out and showed them how the line was going in and out with PingPlotter; they rewired everything and changed out the modem, and once they did that everything stabilized and has been working great. Thank you for all your help, guys! Long live StableBit Drive! lol
    2 points
  32. 1x128GB SSD for OS, 1x8TB, 2x4TB, 2x2TB, 1x900GB. The 8TB and 1x4TB+1x2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB+2TB are in case some HDD fails. The 900GB is for trash of a further unnamed downloading client. So actually, a pretty small server given what many users here have.
    2 points
  33. The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
    2 points
  34. 2 points
  35. Also, you may want to check out the newest beta. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3204_BETA.exe
    2 points
  36. Okay, good news everyone. Alex was able to reproduce this issue, and we may have a fix. http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.4.3198_BETA.exe
    2 points
  37. The import/export feature would be nice. I guess right-clicking on the folder and 7zip'ing it is the definitive solution, for now, until an automated process evolves. Given Christopher's answer that it seems to be an isolated incident, I'm wondering what it is about our particular systems that is causing this purge. I have it running on both W7 and W10 and it purges on both. Both OSs are clean installs. Both run the same EVO500...alongside a WD spinner. Both are Dell. It seems to me that the purge is triggered by some integral part of the software once it's updated, like an auto-purge feature. I'll be honest, I think most people are too lazy to sign up and post the issue, which makes it appear to be an isolated incident, but I believe this is happening more often than we think. I'm on a lot of forums, and it's always the same people that help developers address bugs by reporting them. Unless it's a functional problem, it goes unreported. All of you...know how lazy people are. With that said, I like the idea of an integral backup and restore of the settings.
    2 points
  38. As per your issue, I've obtained a similar WD M.2 drive and did some testing with it. Starting with build 3193 StableBit Scanner should be able to get SMART data from your M.2 WD SATA drive. I've also added SMART interpretation rules to BitFlock for these drives as well. You can get the latest development BETAs here: http://dl.covecube.com/ScannerWindows/beta/download/ As for Windows Server 2012 R2 and NVMe, currently, NVMe support in the StableBit Scanner requires Windows 10 or Windows Server 2016.
    2 points
  39. I used Z once, only to find that a printer with a media card slot wanted it for itself or would not print at all. Same for some BlackBerry devices claiming Z. So yeah, go high up, but not Y and Z. I use P, Q and R.
    2 points
  40. It means the pool drive. And yeah... how Windows handles disk/partition/volume stuff is confusing... at best. For this... take ownership of the folder, change its permissions, and delete it (on the pool). Then resolve the issue. That should fix it, and it shouldn't come back.
    2 points
  41. You could do it with a combination of a VPN, DrivePool pool(s), and CloudDrive using file share(s). Here's how I think it could work: The VPN connects all computers on the same local net. Each computer has a Pool to hold data, with the Pool drive shared so the local net can access it. CloudDrive has multiple file shares set up, one to each computer connected via VPN and sharing a Pool. Each local Pool can have duplication enabled, ensuring each local CloudDrive folder is duplicated locally X times. The file shares in CloudDrive are added to a new DrivePool Pool, essentially combining all of the remote computer storage you provisioned into one large volume. Note: this is just me brainstorming, though if I were attempting it I'd start with this type of scheme. You only need two machines with DrivePool installed and a single copy of CloudDrive to pull it off. Essentially wide-area storage.
    2 points
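One concrete piece of the brainstorm above, the "Pool drive shared so the local net can access it" step, could be done with the standard SMB cmdlets. This is a hedged sketch: the drive letter, share name, server name, and account are all assumptions.

```powershell
# Sketch only: expose a local DrivePool pool over the (VPN-connected) LAN so another
# machine can point a CloudDrive file-share target at it.
New-SmbShare -Name 'Pool' -Path 'P:\' -FullAccess 'DOMAIN\StorageUser'

# Quick reachability check from a remote machine over the VPN (server name is an assumption):
Test-Path '\\server1\Pool'
```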
  42. It's $10 off the normal price for a product you don't already own, but $15/each for products that you do.
    2 points
  43. "dpcmd remeasure-pool x:"
    2 points
  44. "Release Final" means that it's a stable release, and will be pushed out to everyone. Not that it's the final build. Besides, we have at least 7 more major features to add, before even considering a 3.0.
    2 points
  45. Alex

    check-pool-fileparts

    If you're not familiar with dpcmd.exe, it's the command line interface to StableBit DrivePool's low level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line. If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift from the Start menu), and type in dpcmd to get some usage information.
    Previously, I didn't recommend that people mess with this command because it wasn't really meant for public consumption. But the latest internal build of StableBit DrivePool, 2.2.0.659, includes a completely rewritten dpcmd.exe which now has some more useful functions for more advanced users of StableBit DrivePool, and I'd like to talk about some of these here.
    Let's start with the new check-pool-fileparts command. This command can be used to:
    - Check the duplication consistency of every file on the pool and show you any inconsistencies.
    - Report any inconsistencies found to StableBit DrivePool for corrective actions.
    - Generate detailed audit logs, including the exact locations where each file part of each file on the pool is stored.
    Now let's see how this all works. The new dpcmd.exe includes detailed usage notes and examples for some of the more complicated commands like this one. To get help on this command type: dpcmd check-pool-fileparts
    Here's what you will get:
      dpcmd - StableBit DrivePool command line interface
      Version 2.2.0.659
      The command 'check-pool-fileparts' requires at least 1 parameters.
      Usage: dpcmd check-pool-fileparts [parameter1 [parameter2 ...]]
      Command: check-pool-fileparts - Checks the file parts stored on the pool for consistency.
      Parameters:
        poolPath - A path to a directory or a file on the pool.
        detailLevel - Detail level to output (0 to 4). (optional)
        isRecursive - Is this a recursive listing? (TRUE / false) (optional)
      Detail levels:
        0 - Summary
        1 - Also show directory duplication status
        2 - Also show inconsistent file duplication details, if any (default)
        3 - Also show all file duplication details
        4 - Also show all file part details
      Examples:
        - Perform a duplication check over the entire pool, show any inconsistencies, and inform StableBit DrivePool
          >dpcmd check-pool-fileparts P:\
        - Perform a full duplication check and output all file details to a log file
          >dpcmd check-pool-fileparts P:\ 3 > Check-Pool-FileParts.log
        - Perform a full duplication check and just show a summary
          >dpcmd check-pool-fileparts P:\ 0
        - Perform a check on a specific directory and its sub-directories
          >dpcmd check-pool-fileparts P:\MyFolder
        - Perform a check on a specific directory and NOT its sub-directories
          >dpcmd check-pool-fileparts "P:\MyFolder\Specific Folder To Check" 2 false
        - Perform a check on one specific file
          >dpcmd check-pool-fileparts "P:\MyFolder\File To Check.exe"
    The above help text includes some concrete examples on how to use this command for various scenarios. To perform a basic check of an entire pool and get a summary back, you would simply type: dpcmd check-pool-fileparts P:\
    This will scan your entire pool and make sure that the correct number of file parts exist for each file. At the end of the scan you will get a summary:
      Scanning...
      ! Error: Can't get duplication information for '\\?\p:\System Volume Information\storageconfiguration.xml'. Access is denied
      Summary:
        Directories: 3,758
        Files: 47,507  3.71 TB (4,077,933,565,417 B)
        File parts: 48,240  3.83 TB (4,214,331,221,046 B)
        * Inconsistent directories: 0
        * Inconsistent files: 0
        * Missing file parts: 0  0 B (0 B)
        ! Error reading directories: 0
        ! Error reading files: 1
    Any inconsistent files will be reported here, and any scan errors will be as well. For example, in this case I can't scan the System Volume Information folder because, as an Administrator, I don't have the proper access to do that (LOCAL SYSTEM does).
    Another great use for this command is actually something that has been requested often, and that is the ability to generate audit logs. People want to be absolutely sure that each file on their pool is properly duplicated, and they want to know exactly where it's stored. This is where the maximum detail level of this command comes in handy: dpcmd check-pool-fileparts P:\ 4
    This will show you how many copies are stored of each file on your pool, and where they're stored. The output looks something like this:
      Detail level: File Parts
      Listing types:
        + Directory
        - File
        -> File part
        * Inconsistent duplication
        ! Error
      Listing format: [{0}/{1} IM] {2}
        {0} - The number of file parts that were found for this file / directory.
        {1} - The expected duplication count for this file / directory.
        I - This directory is inheriting its duplication count from its parent.
        M - At least one sub-directory may have a different duplication count.
        {2} - The name and size of this file / directory.
      ...
      + [3x/2x] p:\Media
        -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media [Device 0]
        -> \Device\HarddiskVolume3\PoolPart.6a76681a-3600-4af1-b877-a31815b868c8\Media [Device 0]
        -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media [Device 2]
      - [2x/2x] p:\Media\commandN Episode 123.mov (80.3 MB - 84,178,119 B)
        -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 123.mov [Device 0]
        -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 123.mov [Device 2]
      - [2x/2x] p:\Media\commandN Episode 124.mov (80.3 MB - 84,178,119 B)
        -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 124.mov [Device 0]
        -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 124.mov [Device 2]
      ...
    The listing format and listing types are explained at the top, and then for each folder and file on the pool, a record like the above one is generated. Of course, like any command output, it could always be piped into a log file like so: dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log
    I'm sure with a bit of scripting, people will be able to generate daily audit logs of their pool. Now this is essentially the first version of this command, so if you have an idea on how to improve it, please let us know. Also, check out set-duplication-recursive. It lets you set the duplication count on multiple folders at once using a file pattern rule (or a regular expression). It's pretty cool. That's all for now.
    2 points
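Picking up the "daily audit logs" idea mentioned at the end of the post above, here is a hedged PowerShell sketch that schedules the level-4 check every night. Assumptions: dpcmd.exe is on the PATH, the pool is P:\, and C:\PoolAudit already exists; adjust all three for your system.

```powershell
# Sketch only: run the full file-part audit daily at 3am and keep a dated log file.
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -Command "dpcmd check-pool-fileparts P:\ 4 > C:\PoolAudit\fileparts-$(Get-Date -Format yyyy-MM-dd).log"'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'DrivePool file-part audit' -Action $action -Trigger $trigger -RunLevel Highest
```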
  46. I'm using Windows Server 2016 Datacenter (GUI version, newest updates) on a dual-socket system in combination with CloudDrive (newest version). The only problem I had was connecting to the cloud service with Internet Explorer; using a 3rd-party browser solved this. But I'm always using ReFS instead of NTFS...
    2 points
  47. I need to do this daily. Is there a way to auto-authorize? Otherwise I can't really use this app.
    2 points
  48. HellDiverUK

    Build Advice Needed

    Ah yes, I meant to mention BlueIris. I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use. With QS I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption. I have 3 HikVision cameras, two 4MP and one 5MP, all running at their maximum resolution. I record 24/7 on to an 8TB WD Purple drive, with events turned on. QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (can highly recommend the app; it's basically the only way we access the system apart from some admin on the server's console). Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
    2 points
  49. Surface scans are disabled for CloudDrive disks by default, but file system scans are not (as they can be helpful). You can disable this per disk, in the "Disk Settings" option. As for the length, that depends on the disk. And no, there isn't really a way to speed this up.
    2 points
  50. Sure. So DP supports pool hierarchies, i.e., a Pool can act as if it is a HDD that is part of a(nother) Pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too). In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication. You also add a CD to that Pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way: both duplicates could end up on one of the HDDs. Lose the system and you lose it all, as there is no duplicate on CD. To solve this, add both HDDs to Pool A. This Pool is not duplicated. You also have CD (or another Pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up at (HDDs in) Pool A and the other duplicate ends up at Pool B. This is because DP will, for the purpose of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A, and you would be backing up only one duplicate and still be certain you have all files. Edit: In my case, this allows me to back up a single 4TB HDD (that is partitioned into 2 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have this duplicated with another 4TB HDD. So, I have: Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated. Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated. Pool C: Pool A + Pool B, duplicated. So, every file in Pool C is written to Pool A and Pool B. It is therefore on both 4TB HDDs that are in the respective Pools A and B. Next, I back up both partitions of either HDD and I have only one backup, with the guarantee of having one copy of each file included in the backup.
    2 points