Covecube Inc.
  • Announcements

    • Christopher (Drashna)

Getting Help (11/07/17)

If you're experiencing problems with the software, the best way to get hold of us is to head to https://stablebit.com/Contact, especially if this is a licensing issue. Issues submitted there are checked first and handled with higher priority, so if the problem is urgent, please head over there first.

Search the Community

Showing results for tags 'drivepool'.



Found 45 results

  1. Hi, referring to this post and how DrivePool acts with different-size drives: I found myself copying and pasting a file, and DrivePool used the same drive for the entire operation (as both the source and the destination). This resulted in really poor performance (see attachments; just ignore drive "E:", there are some rules that exclude it from general operations). The same thing happens when extracting some .rar files. Is there a way to make DrivePool prioritize performance over its "available space" rules? I mean, if there is available space on the other drives, it could use those instead of the "source" one. Thanks
  2. Notifications during Remote Control

    I have two networks, one connected to the internet (1) and one that is not (2). Machine (A) runs DrivePool and Scanner on network (2); machine (B) is connected to both (1) and (2) and is set up to remotely monitor (A). Will any notifications generated on (A) be sent from machine (B) to my email? Sending a test message from Scanner on machine (A), which is not connected to the internet, produces the error "Test email could not be sent to blank@blank.com. Object reference not set to an instance of an object." Sending a test message from DrivePool on machine (A) claims to have successfully sent the test email. In both cases I do not receive an email. When I attempt to send a test message from machine (B), I get the same responses: DrivePool reports success, Scanner throws the "object reference" error, and I receive no email for either attempt. Aside from the "object reference" error, should I be able to funnel notifications through another machine the way I am attempting to? DrivePool: 2.1.1.561 x64. Scanner: 2.5.1.3062. Thank you.
  3. Error Removing Drive

    Hi, I've got a drive in my pool where Scanner is reporting that the file system is not healthy and repairs have been unsuccessful. So I added another drive of the same size to the pool and attempted to remove the damaged drive (thinking this would migrate the data, after which I could reformat the original drive and re-add it to the pool). I'm getting "Error Removing Drive" though, and the detail info says "The file or directory is corrupted and unreadable". Yet I can reach into the PoolPart folder and copy at least some of the files (every test I've done copies fine). How do I migrate whatever I can recover manually? Or is there a step I'm missing to get DrivePool to do this for me automatically?
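    [If DrivePool can't finish the removal, a hedged sketch of the manual route, assuming the damaged disk is F:, the pool is P:, and the PoolPart GUID is a placeholder. robocopy retries unreadable files once, then skips them and logs what it couldn't copy. Note that if the damaged disk is still part of the pool, you may want to restrict file placement to it first so copies don't land back on it.]

        # Copy everything readable from the damaged disk's hidden PoolPart folder
        # back into the pool. F:, P: and the GUID are hypothetical placeholders.
        robocopy "F:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "P:\" /E /COPY:DAT /XJ /R:1 /W:1 /LOG:C:\migrate-log.txt
        # /E       copy subdirectories, including empty ones
        # /XJ      skip junction points
        # /R:1 /W:1  retry unreadable files once, then move on
        # /LOG     record what failed, so you know which files still need recovery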
  4. Down sizing my server

    I'm designing a replacement for my current server. Goals are modest: less heat and power draw than my existing server. Roles are file server with DrivePool, and Emby for media with very light encoding needs. Local backups are with Macrium Reflect, offsite backups with SpiderOak. OS will be Win 10 Pro. My current rig is here:

     New build. PCPartPicker part list: https://pcpartpicker.com/list/yhdVqk / price breakdown by merchant: https://pcpartpicker.com/list/yhdVqk/by_merchant/
     - Memory: Crucial 8GB (2 x 4GB) DDR3-1600 ($63.17 @ Amazon)
     - Case: Fractal Design Node 304 Mini ITX Tower ($106.72 @ Newegg)
     - Power Supply: EVGA B3 450W 80+ Bronze Certified Fully-Modular ATX ($53.73 @ Amazon)
     - Operating System: Microsoft Windows 10 Pro OEM 64-bit ($139.99 @ Other World Computing)
     - Other: ASRock J3455B-ITX motherboard & CPU combo ($73.93 @ Newegg)
     - Total: $437.54 (prices include shipping, taxes, and discounts when available; generated by PCPartPicker 2018-01-16 10:30 EST-0500)

     Storage: LSI SAS 2-port HBA flashed to IT mode. Boot: PNY 120GB SSD. Data pool: 4 x 2TB WD Red. Backup pool: 1 x 5TB Seagate (shelled USB drive). When this is built, my old rig goes up for sale. My only question is the CPU. Could I go with a dual-core Celeron for about 80 bucks? Will it handle a single encoding stream and not draw more power than the i3T? The Celeron will do fine as a file server.
  5. OneDrive DrivePool using CloudDrive

    Hi, I have an Office 365 account where I get 5 x 1TB OneDrives. I am trying to link 3 of them together using CloudDrive and DrivePool. On my PC I have these storage drives: 4x Toshiba HDWE160 6TB, 1x Seagate ST3000DM001-9YN166 3TB, all pooled together using DrivePool. When creating the OneDrive DrivePool, how should I create the CloudDrive cache? Should I put all 3 cache partitions on a single, dedicated cache disk? Can I put a single cache partition on 3 different drives in the storage DrivePool? Or do I need a dedicated cache drive for each of my OneDrive cloud connections? What are your recommendations? I've tried putting the cache partitions on the same dedicated cache disk and get a BSOD every time I write a lot of files to it. Thank you.
  6. In the advanced settings of Plex Media Server, setting the transcoder temporary directory to a folder in a DrivePool causes Plex to show an error on playback. I wanted to check if anyone else sees that behaviour; if so, maybe include it in the limitations. Also, perhaps a general guideline would be to put "cache" folders outside of DrivePool? ____________________________ EDIT: It looks like it works; the problem was due to some other error that caused the DrivePool to go into read-only mode (which a reboot fixed).
  7. I have a bunch of older 3, 4 and 6TB drives that aren't currently in my main system that I want to make use of. What are some suggested options for adding these to a drive pool externally? I saw the IB-3680SU3 mentioned in one post, and have seen something like the Mediasonic H82-SU3S2 before as well. Is there anything I should be aware of to make this work optimally? I'll also have a few drives in my main system that would be part of the pool.
  8. Blockchain Data OK on DrivePool?

    Is blockchain data OK to be stored on DrivePool, as opposed to directly on the hard disk (bare metal)? Examples: Bitcoin Core, Bitcoin Armory, Ethereum Mist.
  9. Hey, I've set up a small test with 3 physical drives in DrivePool: 1 SSD and 2 regular 4TB drives. I'd like a setup where these three drives can be filled to the brim and their contents are duplicated only onto a fourth drive, a CloudDrive. No regular writes or reads should hit the CloudDrive; it should only function as parity for the 3 drives. Am I better off making a separate CloudDrive and scheduling an rsync-style job to mirror the DrivePool contents to the CloudDrive (a sketch follows below), or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. What I tried so far didn't work too well: some files I was moving were immediately written to the parity drive even though I set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but this is an annoying thing to maintain whenever new folders are added).
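     [For the rsync-style approach, a minimal sketch under stated assumptions: P: is the local pool and X: is the CloudDrive volume, both hypothetical drive letters. It registers a nightly robocopy mirror as a scheduled task; there is nothing DrivePool-specific about it.]

        # Nightly mirror of the pool onto the CloudDrive volume.
        # /MIR mirrors the tree, including deleting files on X: that no longer exist on P:.
        $action  = New-ScheduledTaskAction -Execute 'robocopy.exe' -Argument 'P:\ X:\ /MIR /R:1 /W:1 /LOG:C:\mirror-log.txt'
        $trigger = New-ScheduledTaskTrigger -Daily -At 3am
        Register-ScheduledTask -TaskName 'PoolToCloudMirror' -Action $action -Trigger $trigger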
  10. Google Drive & Cache Errors

    Hi, I'm kind of new to CloudDrive but have been using DrivePool and Scanner for a long while and never had any issues with them. I recently set up CloudDrive to act as a backup to my DrivePool (I don't even care to access it locally, really). I have a fast internet connection (Google Fiber Gigabit), so my uploads to Google Drive hover around 200 Mbps, and I was able to successfully upload about 900GB so far. However, my cache drive is getting maxed out. I read that the cache limit is dynamic, but how can I resolve this? I don't want CloudDrive taking up all but 5 GB of this drive. If I understand correctly, all this cached data is basically data waiting to be uploaded? Any help would be greatly appreciated! My DrivePool settings: My CloudDrive errors: The cache size was set as Expandable by default, but when I try to change it, it is grayed out. The bar at the bottom just says "Working..." and is yellow.
  11. When using DrivePool in conjunction with CloudDrive, I have a problem when uploading a large amount of data. My setup: 3x 3TB drives as cache for CloudDrive, with 500GB dedicated and the rest expandable; an EU datacentre with 1Gb/s internet connectivity; a DrivePool with 3 different accounts (2x Google Drive and 1x Dropbox); and another DrivePool for my downloads, consisting of the space left over on the drives listed above. When I attempt to copy/move files from the downloads DrivePool into the CloudDrive DrivePool, one drive always drops off, randomly, never one in particular. DrivePool then marks the drive as read-only and I can't move media to the new drive. I would have thought the cache would handle this temporary outage; I would also expect the local drive cache to handle the sudden influx of files and not knock off the CloudDrives. I also would think that DrivePool would still be usable and not mark the drive as read-only. What am I doing wrong, and how do I fix this behaviour?
  12. NTFS compression and failed drive

    Hello everyone! Another happy user of DrivePool with a question regarding NTFS compression and file evacuation. A few days ago one drive started showing a reallocated-sectors count. StableBit Scanner ordered DrivePool to evacuate all the files, but there was not enough space, so some of them remained. I bought another drive, added it to the pool, and tried to remove the existing one, getting an "Access is Denied" error. Afterwards, I tried to force evacuation of files from the damaged drive using the appropriate option in StableBit Scanner. This triggered a rebalance operation which was going very well, but then I noticed several hundred GB marked as "Other" not being moved. Then it struck me that the new drive has some files without NTFS compression, whereas the old drives in the pool had it enabled. I think that since the checksums are not the same for compressed and uncompressed files, this is somehow confusing the scanner. What I did so far (for consistency at least; I hope this doesn't make things worse!!!) is to disable compression on all the folders where I had it enabled (on the old drives, including the faulty one; a bulk approach is sketched below) and wait for the rebalance to complete. Is this the right approach? Is this also expected to happen when using NTFS compression? Is it actually worth the hassle to have it enabled in DrivePool? (The savings I was getting were not fantastic, but hey, every little helps, and I wasn't noticing performance degradation.) I hope the post makes sense, and I also hope my data is not compromised by having taken the wrong steps! Thanks! DrivePool version: 2.2.0.738 Beta. Scanner: 2.5.1.3062. OS: Windows 2016. DrivePool screenshot attached as well.
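    [For reference, a sketch of how compression can be toggled in bulk with the built-in compact utility; the path below is a hypothetical example, not one from the post.]

        # Uncompress a folder tree in place with the built-in compact.exe.
        compact /U /S:"D:\PoolPart.xxxxxxxx\Media" /I /Q
        # /U  uncompress         /S  recurse into subdirectories
        # /I  continue past errors    /Q  less verbose output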
  13. Hi all. I'm testing out a setup under Server 2016 utilizing DrivePool and SnapRAID. I am using mount points to folders. I did not change anything in the snapraid conf file for hidden files:

        # Excludes hidden files and directories (uncomment to enable).
        #nohidden

        # Defines files and directories to exclude
        # Remember that all the paths are relative at the mount points
        # Format: "exclude FILE"
        # Format: "exclude DIR\"
        # Format: "exclude \PATH\FILE"
        # Format: "exclude \PATH\DIR\"
        exclude *.unrecoverable
        exclude Thumbs.db
        exclude \$RECYCLE.BIN
        exclude \System Volume Information
        exclude \Program Files\
        exclude \Program Files (x86)\
        exclude \Windows\

      When running snapraid sync, it outputs that it is ignoring the covefs folder:

        WARNING! Ignoring special 'system-directory' file 'C:/drives/array1disk2/PoolPart.23601e15-9e9c-49fa-91be-31b89e726079/.covefs'

      Is it important to include this folder? I'm not sure why it is excluding it in the first place, since nohidden is commented out. But my main question is whether covefs should be included. Thanks.
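     [As far as I know, .covefs holds DrivePool's internal metadata (reparse-point data and the like) rather than pool files, and the warning above shows SnapRAID already skips it as a special system directory regardless of the nohidden setting, so leaving it out of parity is generally considered safe; worth confirming with the developers. If you'd rather make the exclusion explicit in the conf file, a hedged addition (the anywhere-level pattern is an assumption about SnapRAID's exclude matching):]

        # Explicitly skip DrivePool's internal metadata folder
        # (SnapRAID already ignores it as a special system directory).
        exclude .covefs\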
  14. Files not distributing to new drive

    I've added a new drive to my pool and I'd like to spread the files in the pool over to the new drive. I realize this isn't the default behaviour, so I've added the "Disk Space Equalizer" balancer to my system. At the moment it's the only balancer I have enabled, and the system STILL isn't moving the files (and therefore balancing the space). Any idea what I'm doing wrong? Is there a log or something I can check to see why it's ignoring the only balancer present?
  15. I've been having issues for the past few days with DrivePool showing "Statistics are Incomplete" in the GUI. The issue seems to be the CloudDrive I have in the pool, because it shows "Other" where there are actually duplicated files. In this state, duplication and balancing are not working. I checked the PoolPart folder on that drive and see all the duplicated files and folders that DrivePool has placed there; it had been working for a couple of weeks until this week. I uninstalled CloudDrive and DrivePool and reinstalled both to see if there was just a glitch, but still no luck. I also enabled file system logging in DrivePool and had it re-measure, but there are no errors in the logs that I can see. I just don't understand what the issue could be, especially when I can see all the files in the PoolPart folder; it looks like it has also placed some new files there today, and those are showing up. I'm currently using the following application versions: DrivePool 2.2.0.651 BETA, CloudDrive 1.0.0.870, OS Windows 10 Pro. Here's a screenshot showing my current setup, with the CloudDrive showing mostly "Other" when all that's on it is the PoolPart folder and its contents. This screenshot shows the files and folders correctly within the PoolPart folder.
  16. Greetings, I have two NAS devices, plus a few local hard drives, and I'd like to aggregate them all together in one pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum it seems like this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving it. I can create a pool of the local drives in DrivePool, and I can create a drive from a shared folder using CloudDrive, but I'm not seeing that CloudDrive location show up in DrivePool.

      From my initial looking I think I'd prefer to just use DrivePool if possible, as it seems to have the sorts of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool. I could be fine with mapping the NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 2.1.1.561 x64 and CloudDrive 1.0.0.854 x64.

      The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to, for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NAS devices plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those that see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice on all this!
  17. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified. It turns out all 48 sectors were part of the same ~1.5GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD Red spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitting one disk over. SO... Is Scanner not aware of this file? What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool notice the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one??? IN A PERFECT WORLD, I WOULD LOVE TO SEE... Scanner identifies the bad sectors, checks to see if any files were damaged, and presents that information to the user. (Currently, I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and then was presented with the list of damaged files.) At this point, the user could take action from a list of options which, in one way or another, allow the user to:
      - Flag the sectors in question as bad so no future data is written to them (remapped).
      - Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
      - Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).
      Thanks for your ears and some really great software. I would love to hear what the developers and community think, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
  18. Team, I am currently running (or was...) PoolHD with one drive pool containing two physical disks. That pool contained two top-level directories, "Duplicated" and "Non Duplicated"; i.e., PoolHD balanced the non-duplicated files across both disks, and the duplicated files were duplicated across both disks. I have now upgraded to W10 and PoolHD no longer works. I expected this, as it is not supported in W10, and I had always intended to migrate to DrivePool, because Windows Storage Spaces requires the drives (that are to be added to a Storage Space) to be cleanly formatted, and of course I can't do that, because the drives contain data. Now, just like DrivePool, PoolHD stores the files in standard NTFS directories, and even gives advice on how to migrate from DrivePool to PoolHD by changing directory names to match the DrivePool naming conventions. Before purchasing DrivePool, I have downloaded a trial and created a new pool, but DrivePool will only add physical disks that have not previously been in a PoolHD pool; i.e., DrivePool doesn't see the two physical disks that were part of a PoolHD pool, even though both drives effectively only contain a standard NTFS file structure and PoolHD is uninstalled. Remembering that I effectively have two physical drives that contain two top-level directories, one whose contents are balanced across both drives and the other (the duplicated directory) with identical content on both drives, how can I add them to a DrivePool pool? [Note: I guess the secret is in the naming of some directories in the root of each drive, which indicates to DrivePool that it should steer well clear, but these are only directory names, so I'm quite happy to edit them as necessary; a seeding sketch follows below.] Thanks in advance, Woody.
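     [Assuming the PoolHD folders really are plain NTFS directories, a hedged sketch modeled on DrivePool's documented "seeding" approach: add the disk to a new pool, then move the existing top-level folders into the hidden PoolPart.* folder DrivePool creates on that disk, and re-measure. The service display name and drive letter are assumptions; take a backup and verify before relying on this.]

        # Stop the DrivePool service so nothing re-measures mid-move
        # (display name is an assumption; check Get-Service for the exact name).
        Stop-Service -DisplayName 'StableBit DrivePool Service'

        # Find the hidden PoolPart folder DrivePool created on the drive.
        $poolPart = Get-ChildItem -Path 'D:\' -Directory -Force | Where-Object { $_.Name -like 'PoolPart.*' }

        # Move the old PoolHD top-level directories into it (names from the post).
        Move-Item -Path 'D:\Duplicated', 'D:\Non Duplicated' -Destination $poolPart.FullName

        Start-Service -DisplayName 'StableBit DrivePool Service'
        # Then re-measure the pool and re-enable duplication for the Duplicated folder in the UI.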
  19. Duplicate Later not working

    Hello, I recently started upgrading my 3TB & 4TB disks to 8TB disks and started removing the smaller disks in the interface. A popup shows "Duplicate later" & "Force removal": I check yes on both and continue. Two days in, it shows 46%, as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage': no luck. I attempted to disable pool duplication: infinite loading bar. I changed file placement rules to populate other disks first: no luck. Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps... and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for duplication... It looks like it is migrating duplicated files to the cloud instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA. DrivePool: 2.2.0.651 BETA.
  20. DrivePool + CloudDrive Setup Questions

    Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012 R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with CloudDrive. Current setup:
    - SSD Optimizer writes to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offloads to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB).
    - Balancing is set to percentage (as the disks are different sizes).
    - 1x 64GB SSD dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB).
    Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep 1 file on SSD, 1 on HDD and 1 on the cloud drive... but it is loading from the cloud drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk and just have the CloudDrive as backup?
    Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it re-balances that data off?
    Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy (which doesn't seem to fill the 'modified' RAM cache, but is only 20% slower), resulting in fewer crashes... but the system is still unresponsive.
    Problem 4: I'm getting "I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times)", causing the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they could be analysed later on (if necessary).
    Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I tried it and had a lot of lockups/crashes and an average of 10 Mbps upload, so I removed it. Can I re-enable it once it exits experimental status, and allow the data from Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)?
    Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  21. WHS 2011 Addin freezes when opened

    Hi all, I've been running StableBit DrivePool for years with no problems, but last week my PC had a hard shutdown, and since then I have this problem. I can see and access my DrivePool OK (drive I:), and the SB DrivePool service is running, but when I try to access the SB DrivePool tab in the WHS 2011 Dashboard, the Dashboard freezes. I've tried the following fixes:
    - Rebooted the PC
    - Ran a repair of the DrivePool installation (from the Windows Uninstall programs control panel page)
    - Restarted the SB DrivePool service
    Any ideas? I'd like to try removing and/or reinstalling DrivePool but I'm not sure if that's a good idea? Thanks for the help :-)
  22. [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case of needing a recovery for unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}); details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .csv file it generates, showing the location of all of the files in a particular directory (as an example):

    Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{Drive=$(($file.DirectoryName).Substring(13,5));FileName=$($file.Name);DirectoryName=$(($file.DirectoryName).Substring(64))}
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: http://community.covecube.com/index.php?/topic/1865-howto-file-location-catalog/&do=findComment&comment=16553

    DPFileList.zip
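    [For what it's worth, the Substring() offsets above are tied to the exact C:\DrivePool\{disk name} layout (13 characters of prefix, 5-character disk names). A hedged variant that derives the names from the path instead, so it survives a different mount root or folder-name lengths:]

        # Same idea, but compute the drive name from the path relative to the mount root.
        $root = 'C:\DrivePool'   # assumption: all pooled drives are mounted under this folder

        $filelist = Get-ChildItem -Path $root -Recurse -Force |
            Where-Object { -not $_.PsIsContainer } |
            ForEach-Object {
                # Path relative to the mount root, e.g. "DiskA\Movies"
                $relative = $_.DirectoryName.Substring($root.Length).TrimStart('\')
                [pscustomobject]@{
                    Drive         = ($relative -split '\\')[0]   # first segment = mounted disk name
                    FileName      = $_.Name
                    DirectoryName = $relative
                }
            }

        $filelist | Export-Csv F:\DPFileList.csv -NoTypeInformation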
  23. Virtualization Rebuild; Thoughts?

    I have been running a "server" for a number of years with both Scanner and DrivePool being an integral part of it all (I LOVE these products!!!). I think it's time to redesign my current virtualization environment, and I wanted to know what you guys think.

    My current setup, a "host" running Win10 Pro w/ Client Hyper-V:
    - Scanner and DrivePool for media, backups, VMs, etc.
    - CrashPlan for offsite backups (~10 incoming clients)
    - Plex Media Server (doing occasional transcodes)
    - Multiple VMs (Win7 for WMC recording, Win10 testing, VPN appliance, UTM appliance, etc.)

    I feel like the current setup is getting a little too "top-heavy". My biggest pain points are the fact that I have to bring down the entire environment every time M$ deploys patches, and a lack of clean backups for the VMs that are getting more and more important. Because budget is a big concern, I'm hoping to re-work my existing environment.

    My proposed setup:
    - "Host" running Hyper-V Server, CrashPlan, Scanner and DrivePool
    - VM for Plex Media Server, targeting shares on the host
    - VM for WMC TV recording, moving recordings to a host share
    - Etc., etc.

    I believe this design will allow the host to be more stable and help with uptime... what do you guys think? I know I can install and run CrashPlan, Scanner and DrivePool on Hyper-V Server, but I've never done any long-term testing. Also, can anyone recommend a good, free way to back up those VMs with Hyper-V Server (a sketch follows below)? If I can get a backup of those VMs onto the DrivePool array and sent offsite via CrashPlan, that would be perfect. -Quinn
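    [On the backup question, a minimal sketch using the Hyper-V module's built-in Export-VM cmdlet, which ships free with Hyper-V Server; the destination path on the pool is a placeholder. Live export of running VMs works on Server 2012 R2 and later, and this can be wired into Task Scheduler for regular runs.]

        # Export every VM to a dated folder on the DrivePool array (path is hypothetical),
        # where CrashPlan can pick it up for offsite backup.
        $destination = 'D:\VMBackups\{0:yyyy-MM-dd}' -f (Get-Date)
        Get-VM | Export-VM -Path $destination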
  24. GPT vs MBR

    Is there any advantage to using GPT vs MBR on standard storage HDDs for use in a pool?
  25. ECC RAM for a DrivePool server

    Is there any advantage to using ECC RAM on a Windows DrivePool server? I've heard horror stories about not using ECC with ZFS: people losing their entire pools due to memory corruption and the way ZFS uses RAM for scrubbing. There's a lot of hype about ZFS, which made me consider it; I played with it on a FreeBSD virtual machine, and it's powerful stuff, but somehow I feel safer with plain old NTFS and DrivePool. Only the ECC RAM question remains.