
Search the Community

Showing results for tags 'drivepool'.




Found 41 results

  1. OneDrive DrivePool using CloudDrive

    Hi, I have an Office 365 account that includes five 1TB OneDrives. I am trying to link three of them together using CloudDrive and DrivePool. My PC has the following storage drives:
    - 4x Toshiba HDWE160 6TB
    - 1x Seagate ST3000DM001-9YN166 3TB
    I have all the drives pooled together using DrivePool. When creating the OneDrive pool, how should I create the CloudDrive cache? Should I put all three cache partitions on a single, dedicated cache disk? Can I put a single cache partition on three different drives in the storage pool? Or do I need a dedicated cache drive for each of my OneDrive cloud connections? What are your recommendations? I've tried putting the cache partitions on the same dedicated cache disk and get a BSOD every time I write a lot of files to it. Thank you.
  2. In the advanced settings of Plex Media Server, setting the transcoder temporary directory to a folder in a DrivePool causes Plex to show an error on playback. I wanted to check if anyone else sees that behaviour; if so, maybe include it in the limitations. Also, perhaps a general guideline would be to put "cache" folders outside of DrivePool? ____________________________ EDIT: It looks like it works; the problem was due to some other error that caused the DrivePool to go into read-only mode (which a reboot fixed).
  3. I have a bunch of older 3, 4 and 6TB drives that aren't currently in my main system that I want to make use of. What are some suggested options for adding these to a drive pool externally? I saw the IB-3680SU3 mentioned in one post, and I have also seen something like the Mediasonic H82-SU3S2 before. Is there anything I should be aware of to make this work optimally? I'll also have a few drives in my main system that would be part of the pool as well.
  4. Blockchain Data OK on DrivePool?

    Is blockchain data OK to store on DrivePool, as opposed to directly on the hard disk (bare metal)? Examples: Bitcoin Core, Bitcoin Armory, Ethereum Mist.
  5. Hey, I've set up a small test with three physical drives in DrivePool: one SSD and two regular 4TB drives. I'd like to make a setup where these three drives can be filled to the brim and any contents are duplicated only on a fourth drive: a CloudDrive. No regular writes or reads should be done from the CloudDrive; it should only function as parity for the three drives. Am I better off making a separate CloudDrive and scheduling a mirror job to copy the DrivePool contents to the CloudDrive, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. What I tried so far didn't work too well: some files I was moving were immediately written to the parity drive, even though I set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but this is an annoying thing to have to maintain whenever new folders are added).
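    If you do go the scheduled-mirror route mentioned above, a minimal sketch using robocopy (built into Windows; D:\Pool and X:\ are hypothetical paths for the pool and the CloudDrive volume):

        # Mirror the pool onto the CloudDrive volume. /MIR copies everything
        # and deletes destination files that no longer exist in the source.
        robocopy D:\Pool X:\ /MIR /R:1 /W:5 /LOG:C:\Logs\pool-mirror.log

    Note that /MIR makes the destination an exact replica, so deletions in the pool propagate on the next run.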
  6. Google Drive & Cache Errors

    Hi, I'm kind of new to CloudDrive but have been using DrivePool and Scanner for a long while and never had any issues at all with them. I recently set up CloudDrive to act as a backup to my DrivePool (I don't even care to access it locally, really). I have a fast internet connection (Google Fiber Gigabit), so my uploads to Google Drive hover around 200 Mbps, and I was able to successfully upload about 900GB so far. However, my cache drive is getting maxed out. I read that the cache limit is dynamic, but how can I resolve this? I don't want CloudDrive taking up all but 5GB of this drive. If I understand correctly, all this cached data is basically data that is waiting to be uploaded? Any help would be greatly appreciated! My DrivePool settings: My CloudDrive errors: The cache size was set to Expandable by default, but when I try to change it, it is grayed out. The bar at the bottom just says "Working..." and is yellow.
  7. When using DrivePool in conjunction with CloudDrive, I have a problem when uploading a large amount of data. I have the following setup:
    - 3x 3TB drives as cache for CloudDrive, with 500GB dedicated and the rest expandable
    - I am in an EU datacentre with 1Gb/s internet connectivity
    - A DrivePool with three different accounts: 2x Google Drive and 1x Dropbox
    - Another DrivePool for my downloads, consisting of the space left over on the drives listed above
    When I attempt to copy/move files from the downloads DrivePool into the CloudDrive DrivePool, one drive always drops off, randomly, never one in particular. DrivePool then marks the drive as read-only and I can't move media to the new drive. I would have thought the cache would handle this temporary outage; I would also expect the local drive cache to handle the sudden influx of files and not knock off the CloudDrives. I also would think that DrivePool would still be usable and not mark the drive as read-only. What am I doing wrong, and how do I fix this behaviour?
  8. NTFS compression and failed drive

    Hello everyone! Another happy user of DrivePool with a question regarding NTFS compression and file evacuation. A few days ago, one of my drives started showing a reallocated sector count. StableBit Scanner ordered DrivePool to evacuate all the files, but there was not enough space, so some of them remained. I bought another drive, which I added to the pool, and tried to remove the existing one, getting an "Access is Denied" error. Afterwards, I tried to force evacuation of files from the damaged drive using the appropriate option in StableBit Scanner. This triggered a rebalance operation which was going very well, but then I noticed several hundred GB marked as "Other" not being moved. Then it struck me that the new drive has some files without NTFS compression, whereas the old drives in the pool had it enabled. I think that since the checksums are not the same for compressed and uncompressed files, this is somehow confusing the scanner. What I have done so far (for consistency at least, hope this doesn't make things worse!!!) is to disable compression on all the folders where I had it enabled (on the old drives, including the faulty one) and wait for the rebalance to complete. Is this the right approach? Is this also expected to happen when using NTFS compression? With DrivePool, is it actually worth the hassle to have it enabled? (I wasn't getting fantastic savings, but hey, every little helps, and I wasn't noticing performance degradation.) Hope the post somehow makes sense, and I also hope my data is not compromised by having taken the wrong steps! Thanks! DrivePool version: 2.2.0.738 Beta Scanner: 2.5.1.3062 OS: Windows 2016 DrivePool screenshot attached as well.
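    For anyone bulk-disabling compression like this, a minimal command-line sketch using the built-in compact.exe (D:\Media is a hypothetical placeholder; point it at each folder you had compressed):

        # Uncompress all files under the folder and its subfolders,
        # continuing past any errors (/I) and keeping output quiet (/Q).
        compact /U /S:D:\Media /I /Q

    Running it once per top-level folder avoids clicking through Explorer's attribute dialogs for every directory.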
  9. Hi all. I'm testing out a setup under Server 2016 utilizing DrivePool and SnapRAID. I am using mount points to folders. I did not change anything in the snapraid conf file for hidden files:

        # Excludes hidden files and directories (uncomment to enable).
        #nohidden

        # Defines files and directories to exclude
        # Remember that all the paths are relative at the mount points
        # Format: "exclude FILE"
        # Format: "exclude DIR\"
        # Format: "exclude \PATH\FILE"
        # Format: "exclude \PATH\DIR\"
        exclude *.unrecoverable
        exclude Thumbs.db
        exclude \$RECYCLE.BIN
        exclude \System Volume Information
        exclude \Program Files\
        exclude \Program Files (x86)\
        exclude \Windows\

    When running snapraid sync, it outputs that it is ignoring the covefs folder:

        WARNING! Ignoring special 'system-directory' file 'C:/drives/array1disk2/PoolPart.23601e15-9e9c-49fa-91be-31b89e726079/.covefs'

    Is it important to include this folder? I'm not sure why it is excluding it in the first place, since nohidden is commented out. But my main question is whether covefs should be included. Thanks.
  10. Files not distributing to new drive

    I've added a new drive to my pool and I'd like to spread the files in the pool over to the new drive. I realize this isn't the default behaviour, so I've added the "Disk Space Equalizer" balancer to my system. At the moment it's the only balancer I have enabled, and the system STILL isn't moving the files (and therefore balancing the space). Any idea what I'm doing wrong? Is there a log or something that I can check to see why it's ignoring the only balancer present?
  11. I've been having issues for the past few days with DrivePool showing "Statistics are Incomplete" in the GUI. The issue seems to be the CloudDrive I have in the pool, because it is showing "Other" where there are actually duplicated files. In this state, duplication and balancing are not working. I checked the PoolPart folder on that drive and see all the duplicated files and folders that DrivePool has placed there; it had been working for a couple of weeks until this week. I uninstalled CloudDrive and DrivePool and reinstalled both, to see if there was just a glitch, but still no luck. I also enabled file system logging in DrivePool and had it re-measure, but no errors in the logs that I can see. I just don't understand what the issue could be, especially when I can see all the files in the PoolPart folder, and it looks like it has also placed some new files there today, and those are showing up. I'm currently using the following application versions: DrivePool: 2.2.0.651 BETA CloudDrive: 1.0.0.870 OS: Windows 10 Pro Here's a screenshot showing my current setup, with the CloudDrive showing mostly "Other" when all that's on it is the PoolPart folder and its contents. This screenshot shows the files and folders correctly within the PoolPart folder.
  12. Greetings, I have two NAS devices, plus a few local hard drives, and I'd like to aggregate them all together in one pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum it seems like this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving it. I can create a pool of the local drives in DrivePool, and I can create a drive from a shared folder using CloudDrive. But I'm not seeing that CloudDrive location show up in DrivePool. From my initial looking I think I'd prefer to just use DrivePool if possible, as it seems to have the sorts of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool. But I could be fine with mapping NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 2.1.1.561 x64 and CloudDrive 1.0.0.854 x64. The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NASes plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those that see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice!
  13. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified. It turns out all 48 sectors were part of the same ~1.5GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD Red spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitting one disk over. SO... Is Scanner not aware of this file? What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool figure out the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one? IN A PERFECT WORLD, I WOULD LOVE TO SEE... Scanner identifies the bad sectors, checks to see if any files were damaged, and presents that information to the user. (Currently, I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and then was presented with the list of damaged files.) At this point, the user could take action with a list of options which, in one way or another, allow the user to:
    - Flag the sectors in question as bad so no future data is written to them (remapped).
    - Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
    - Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).
    Thanks for your ears, and for some really great software. I would love to see what the developers and community think about this; I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
  14. Team, I am currently running (or was...) PoolHD with one drive pool containing two physical disks. That pool contained two top-level directories: "Duplicated" and "Non Duplicated", i.e. PoolHD balanced the non-duplicated files across both disks and duplicated the duplicated files across both disks. I have now upgraded to W10 and PoolHD no longer works. I expected this, as it is not supported on W10, and I had always intended to migrate to DrivePool, because Windows Storage Spaces requires the drives (that are to be added to a Storage Space) to be cleanly formatted, and of course I can't do that, because the drives contain data. Now, just like DrivePool, PoolHD stores the files in standard NTFS directories, and even gives advice on how to migrate from DrivePool to PoolHD by changing directory names to match the DrivePool naming conventions. Before purchasing DrivePool, I downloaded a trial and created a new pool, but DrivePool will only add physical disks to the pool that have not previously been in a PoolHD pool; i.e. DrivePool doesn't see the two physical disks that were part of a PoolHD pool, even though both drives effectively contain only a standard NTFS file structure and PoolHD is uninstalled. Remembering that I effectively have two physical drives containing two top-level directories, one whose contents are balanced across both drives and the other (the duplicated directory) with identical content on both drives, how can I add them to a DrivePool pool? [Note: I guess the secret is in the naming of some directories in the root of each drive, which indicates to DrivePool that it should steer well clear; but these are only directory names, so I am quite happy to edit them as necessary.] Thanks in advance, Woody.
  15. Duplicate Later not working

    Hello, I recently started upgrading my 3TB and 4TB disks to 8TB disks and started removing the smaller disks in the interface. A popup shows "Duplicate later" and "Force removal"; I check yes on both and continue. Two days on, it shows 46%, as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage': no luck. I attempted to disable pool duplication: infinite loading bar. I changed file placement rules to populate the other disks first: no luck. Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps, and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for duplication... It looks like it is migrating duplicated files to the cloud, instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA DrivePool: 2.2.0.651 BETA
  16. DrivePool + CloudDrive Setup Questions

    Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012 R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with CloudDrive. Current setup:
    - Use the SSD Optimizer to write to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offload to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB).
    - Balancing set to percentage (as the disks are different sizes).
    - 1x 64GB SSD dedicated to the local cache for a Google Drive mount (unlimited size / specified 20TB).
    Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep one file on SSD, one on HDD and one on CloudDrive... but it is loading from CloudDrive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk, and just have the CloudDrive as backup?
    Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it rebalances that data off?
    Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy, which doesn't seem to fill the 'modified' RAM cache but is only 20% slower... = fewer crashes... but the system is still unresponsive.
    Problem 4: I'm getting "I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times)", causing the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they could be analysed later on (if necessary).
    Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I tried it and had a lot of lockups/crashes and an average of 10 Mbps upload, so I removed it. Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)?
    Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  17. WHS 2011 Add-in freezes when opened

    Hi all, I've been running StableBit DrivePool for years with no problems, but last week my PC had a hard shutdown, and since then I have this problem. I can see and access my DrivePool OK (drive I:), and the DrivePool service is running, but when I try to access the DrivePool tab in the WHS 2011 Dashboard, the Dashboard freezes. I've tried the following fixes:
    - Rebooted the PC
    - Ran a repair of the DrivePool installation (from the Windows "Uninstall programs" control panel page)
    - Restarted the DrivePool service
    Any ideas? I'd like to try removing and/or reinstalling DrivePool, but I'm not sure if that's a good idea? Thanks for the help :-)
  18. [HOWTO] File Location Catalog

    I've been seeing quite a few requests about knowing which files are on which drives, in case of needing a recovery for unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...
    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway)
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned')
    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .csv file it generates, showing the location of all of the files in a particular directory (as an example): Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{
                Drive         = $(($file.DirectoryName).Substring(13,5))
                FileName      = $($file.Name)
                DirectoryName = $(($file.DirectoryName).Substring(64))
            }
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn
    gj80 has written a further improvement to this script: http://community.covecube.com/index.php?/topic/1865-howto-file-location-catalog/&do=findComment&comment=16553
    DPFileList.zip
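    For the "run each day at 3am" part, a minimal sketch using the built-in ScheduledTasks cmdlets (Windows 8 / Server 2012 and later; C:\Scripts\DPFileList.ps1 is a hypothetical path, point it at wherever you saved the script):

        # Register a task that runs the catalog script every day at 3am.
        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy RemoteSigned -File C:\Scripts\DPFileList.ps1'
        $trigger = New-ScheduledTaskTrigger -Daily -At 3am
        Register-ScheduledTask -TaskName 'DrivePool File Catalog' -Action $action -Trigger $trigger

    On older systems (e.g. WHS 2011), schtasks.exe or the Task Scheduler GUI can do the same job.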
  19. Virtualization Rebuild; Thoughts?

    I have been running a "server" for a number of years with both Scanner and DrivePool being an integral part of it all (I LOVE these products!!!). I think it's time to redesign my current virtualization environment, and I wanted to know what you guys think.
    My current setup, a "host" running Win10 Pro with Client Hyper-V:
    - Scanner and DrivePool for media, backups, VMs, etc.
    - CrashPlan for offsite backups (~10 incoming clients)
    - Plex Media Server (doing occasional transcodes)
    - Multiple VMs (Win7 for WMC recording, Win10 testing, VPN appliance, UTM appliance, etc.)
    I feel like the current setup is getting a little too "top-heavy". My biggest pain points are the fact that I have to bring down the entire environment every time M$ deploys patches, and a lack of clean backups for the VMs that are getting more and more important. Budget being a big concern, I'm hoping to rework my existing environment.
    My proposed setup:
    - "Host" running Hyper-V Server, CrashPlan, Scanner and DrivePool
    - VM for Plex Media Server, targeting shares on the host
    - VM for WMC TV recording, moving recordings to a host share
    - Etc., etc...
    I believe this design will allow the host to be more stable and help with uptime... what do you guys think of this? I know I can install and run CrashPlan, Scanner and DrivePool on Hyper-V Server, but I've never done any long-term testing... Also, can anyone recommend a good, free way to back up those VMs with Hyper-V Server? If I can get a backup of those VMs onto the DrivePool array and sent offsite via CrashPlan, that would be perfect. -Quinn
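    On the free VM backup question: a minimal sketch using the Hyper-V PowerShell module's built-in Export-VM (D:\Pool\VMBackups is a hypothetical folder on the DrivePool array, so CrashPlan would pick the exports up):

        # Export every VM's configuration and virtual disks; each VM lands
        # in its own subfolder under the target path.
        Get-VM | Export-VM -Path 'D:\Pool\VMBackups'

    Scheduled daily, that gives full copies without any paid tooling; whether an export-based backup is sufficient depends on how transactional the guests are.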
  20. GPT vs MBR

    Is there any advantage to using GPT vs MBR on standard storage HDDs for use in a pool?
  21. ECC RAM for a DrivePool server

    Is there any advantage to using ECC RAM in a Windows DrivePool server? I've heard horror stories about not using ECC with ZFS, people losing their entire pools etc. due to memory corruption and the way ZFS uses RAM for scrubbing. There's a lot of hype about ZFS, which made me consider it; I played with it on a FreeBSD virtual machine, and it's powerful stuff, but somehow I feel safer with plain old NTFS and DrivePool. Only the ECC RAM question remains.
  22. DrivePool beta constant disk activity

    I have a new Windows 10 install with DrivePool and Scanner, and I moved my licences from my old computer to the new one. I set up two pools for storage and noticed something weird with Scanner: it was showing 4KB of activity in the performance column every 2 seconds or so. This is causing the disks to keep pausing the scan due to activity, and they now never spin down. I did some searching around and didn't see anything posted about it, so I did some troubleshooting. Running Procmon shows the DrivePool service writing to the drive, and stopping the service stops the activity. Should I downgrade to the stable version? Is the DrivePool service critical to the operation? Is there an advanced setting somewhere?
  23. Hello everyone! I'm new here, trying to do as much research as I can before purchase. I'm liking all the information I've seen on the main site, manual, and the forums/old forums. I think I caught a little information off Reddit that pushed me here. I'm hoping for loads of information, and maybe this will help MANY people in the long run on what to do. So, first off on the topic: I would like to use StableBit's products only, so I gathered some cans and cannots; DrivePool with Scanner seem to be a pair made to seal any deal. But I'm also worried about parity. My current pool is:
    - 5x 4TB Seagates
    - 2x 3TB Seagates
    The purpose of my pool is family movies/music and pictures. The music and pictures are small; the movies range from 400MB to 16GB. Here's some of the Reddit research that put me on the research run about StableBit products. In it I was told that:
    1. DrivePool offers redundancy via duplication.
    2. The creator of StableBit products has a YouTube vlog channel (I couldn't find it, but found StableBit.com's channel, which only had two videos, no vlogs).
    3. One user spoke very highly of StableBit products (has owned them for 4-5 years now).
    4. DrivePool's duplication works via the client setting the folders or subfolders to be duplicated 2x, 3x and so on.
    I was confused about the duplication settings, and whether there is parity for at least one HDD failure, or more depending on settings. I really love the way these products look, the active community, and the progressiveness of the Covecube staff with their products! I need to strongly emphasize that I would really rather use StableBit's products: fewer programs running, and I wouldn't have to worry about which one is or isn't causing problems. This is a two-part thread, so this is the end of the first research part.
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------
    Now for the second part of the research. I've seen in a few places the idea of using StableBit DrivePool for pooling the drives with FlexRAID (RAID-F) for the parity setup, but mostly using all the programs from StableBit while setting and "almost" forgetting the FlexRAID setup. Here's the research I've dug up, well, what I could. Oddly, I found a couple of hints on the FlexRAID forums, but nothing saying where it was on the forums or what to search for or anything. So most of it was on the old Covecube forums that are read-only. I would put links, but I think I'll just select the little information I need so this thread doesn't get kicked. And the second part: I read the information in the first thread above, which talks about how it was possible. Saitoh183 posted a few times on that thread with more information on DrivePooling and FlexRAIDing. He makes sure everyone knows that you lose one or more drives (the largest, or of equal size to every drive, not put together) as a parity disk, or PPU so called. In the second quote of research there is a small thread "explaining" how to set up both of them. I know and understand that Saitoh183 said: "It doesn't matter in which order you set them up. DP just merges your drives, Your performance comes from DP and Flexraid doesn't affect it. Flexraid is only relevant for performance if you were using the pooling feature of it. Since you aren't, the only performance you will see is when your parity drive is being updated. Also dont forget to not add your PPU to the pool". So from what Saitoh183 says, it doesn't matter; but I figured you would make the StableBit DrivePool setup the drive letter. Then, going to FlexRAID:
    1. Add a new configuration.
    2. Name it, choose Cruise Control with Snapshot, and Create.
    3. Open the new configuration to edit, and open Drive Manager.
    4. Add the DRUs (data drives) and one or more PPUs for parity backups (snapshots). I've read a few setup guides; I've heard 1 PPU drive for every 6 and I've heard 1 for every 10, and both are fine.
    5. Initialize the RAID; if data is on the DRUs it will now do a parity snapshot. Then back to the home page for the named configuration and Start Storage Pool.
    I am not sure what else to do after that, if it's even right. I don't think the FlexRAID pool should have a drive letter, or it would make things more confusing than it already is using two programs. Please enlighten me with any information that can help this research; it will help with my purchase, and hopefully with more people who decide to do this setup too. I appreciate everyone up front for their past help with others, which even got me here with this information! Thanks again. Techtonic
  24. DrivePool keeps "checking"

    I'm using DP on a new Windows 2012 R2 Essentials install. My DP is 2.1.1.561 which, I believe, is the latest official release. I've noticed that every time I go into the Dashboard it says that DrivePool is "checking". These checks take a long time and appear to use a fair bit of disk throughput to complete, so I'm wondering why they are happening all the time. My previous install (I just rebuilt the OS) didn't seem to do this, although maybe I missed it. Any suggestions?
  25. I currently use FlexRAID RAID-F for snapshot RAID, which I'm pretty happy with. I also use the FlexRAID drive pooling feature to merge 8 HDDs, which I'm not particularly happy with, since it doesn't work as I'd like it to (and is also, IMO, simply broken). I'd like to try migrating my current setup to use DrivePool in conjunction with snapshot RAID. My first question is: are there any known issues with such a setup? For example, does DrivePool do anything that would invalidate daily RAID snapshots? Secondly, I'd like confirmation that DrivePool can actually do what I want. Here's an example of the root directories on my drives:
    - Drive 1: Data, Audio, Backups
    - Drive 2: Video\TV
    - Drive 3: Video\TV
    - Drive 4: Video\Films
    - Drive 5: Video\Films
    In this scenario, if I copied a file into the \Video\Films folder, I want it to go to either Drive 4 or Drive 5 (not really bothered which, but I suppose it'd make sense to fill one up before using the other). The only scenario that should lead to drives 1-3 having files from the \Video\Films folder on them is if drives 4 & 5 are full. In addition, I'd like only drives 4 & 5 to spin up (ideally at the same time) when accessing the \Video\Films folder via a network share. At the moment, going to such a share takes ages, since FlexRAID shoves files onto random drives without being asked to and then spins all of them up one at a time. Sometimes traversing subdirectories incurs additional delays as (presumably) even more drives are spun up. Finally, what kind of transfer speeds should I expect when copying files into the pool? With FlexRAID pooling, I get ~35 MB/s when writing to the pool, whereas writing to a drive directly yields 150+ MB/s. How about over a gigabit network (where I'll be doing most of my transfers)? Thanks for any advice!