Covecube Inc.

Showing results for tags 'DrivePool'.

Found 64 results

  1. Hi, referring to this post about how DrivePool behaves with different-sized drives: I found that when copy/pasting a file, DrivePool used the same drive for the entire operation (as both the source and the destination). This resulted in really poor performance (see attachments; just ignore drive "E:", there are some rules that exclude it from general operations). The same thing happens when extracting some .rar files. Is there a way to make DrivePool prioritize performance over the "available space" rules? I mean, if there is free space on the other drives, it could use those instead of the "source" drive. Thanks
  2. I'm designing a replacement for my current server. Goals are modest: less heat and power draw than my existing server. Roles are file server with DrivePool, and Emby for media with very light encoding needs. Local backups are with Macrium Reflect, offsite backups with SpiderOak. OS will be Win 10 Pro. My current rig is here: New build:
     PCPartPicker part list: https://pcpartpicker.com/list/yhdVqk
     Price breakdown by merchant: https://pcpartpicker.com/list/yhdVqk/by_merchant/
     Memory: Crucial - 8GB (2 x 4GB) DDR3-1600 Memory ($63.17 @ Amazon)
     Case: Fractal Design - Node 304 Mini ITX Tower Case ($106.72 @ Newegg)
     Power Supply: EVGA - B3 450W 80+ Bronze Certified Fully-Modular ATX Power Supply ($53.73 @ Amazon)
     Operating System: Microsoft - Windows 10 Pro OEM 64-bit ($139.99 @ Other World Computing)
     Other: ASRock Motherboard & CPU Combo Motherboards J3455B-ITX ($73.93 @ Newegg)
     Total: $437.54 (prices include shipping, taxes, and discounts when available; generated by PCPartPicker 2018-01-16 10:30 EST-0500)
     Storage: LSI SAS 2-port HBA flashed to IT mode. Boot: PNY 120GB SSD. Data pool: 4x 2TB WD Red. Backup pool: 1x 5TB Seagate (shucked USB drive).
     When this is built, my old rig goes up for sale. My only question is about the CPU. Could I go with a dual-core Celeron for about 80 bucks? Will it handle a single encoding stream and not draw more power than the i3T? The Celeron will do fine as a file server.
  3. I have two networks: one connected to the internet (1), and one that is not (2). I have a machine (A) running DrivePool and Scanner on (2), and a machine (B) connected to both (1) and (2). Machine (B) is set up to remotely monitor (A). Will any notifications generated on (A), which is being remotely monitored by (B), be sent from machine (B) to my email? Sending a test message from Scanner on machine (A), which is not connected to the internet, produces the error "Test email could not be sent to blank@blank.com. Object reference not set to an instance of an object." When sending a test message from DrivePool on machine (A), DrivePool claims to have successfully sent the test email. In both instances, I do not receive an email. When I attempt to send a test message from machine (B), I get the same responses: DrivePool reports success, Scanner gives the "object reference" error, and I do not receive an email for either attempt. Aside from the "object reference" error, should I be able to funnel notifications through another machine in the way I am attempting? DrivePool: 2.1.1.561 x64 Scanner: 2.5.1.3062 Thank you.
  4. Hi, I have an Office 365 account that includes five 1TB OneDrives. I am trying to link three of them together using CloudDrive and DrivePool. I have the following storage drives in my PC: 4x Toshiba HDWE160 6TB and 1x Seagate ST3000DM001-9YN166 3TB, all pooled together using DrivePool. When creating the OneDrive DrivePool, how should I create the CloudDrive cache? Should I put all three cache partitions on a single, dedicated cache disk? Can I put a single cache partition on three different drives in the storage DrivePool? Or do I need a dedicated cache drive for each of my OneDrive cloud connections? What are your recommendations? I've tried putting the cache partitions on the same dedicated cache disk and get a BSOD every time I write a lot of files to it. Thank you.
  5. In the advanced settings of Plex Media Server, setting the transcoder temporary directory to a folder in a DrivePool caused Plex to show an error on playback. I wanted to check whether anyone else sees that behaviour; if so, maybe include it in the limitations. Also, perhaps a general guideline would be to put "cache" folders outside of DrivePool? EDIT: It looks like it works; the problems were due to some other error that caused the DrivePool to go into read-only mode (which a reboot fixed).
  6. Is blockchain data OK to store on DrivePool, as opposed to directly on the hard disk (bare metal)? Examples: Bitcoin Core, Bitcoin Armory, Ethereum Mist.
  7. Hi, I'm kind of new to CloudDrive, but I have been using DrivePool and Scanner for a long while and have never had any issues with them. I recently set up CloudDrive to act as a backup to my DrivePool (I don't even care to access it locally, really). I have a fast internet connection (Google Fiber Gigabit), so my uploads to Google Drive hover around 200 Mbps, and I have successfully uploaded about 900GB so far. However, my cache drive is getting maxed out. I read that the cache limit is dynamic, but how can I resolve this? I don't want CloudDrive taking up all but 5GB of this drive. If I understand correctly, all this cached data is basically data that is waiting to be uploaded? Any help would be greatly appreciated! My DrivePool settings: My CloudDrive errors: The cache size was set to Expandable by default, but when I try to change it, the option is grayed out. The bar at the bottom just says "Working..." and is yellow.
  8. When using DrivePool in conjunction with CloudDrive, I have a problem when uploading a large amount of data. My setup: 3x 3TB drives for the CloudDrive cache, with 500GB dedicated and the rest expandable. I am in an EU datacentre with 1Gb/s internet connectivity. I have a DrivePool with 3 different accounts: 2x Google Drive and 1x Dropbox. I have another DrivePool for my downloads, which consists of the space left over on the drives listed above. When I attempt to copy/move files from the Downloads DrivePool into the CloudDrive DrivePool, one drive always drops off, randomly, never one in particular. DrivePool then marks the drive as read-only and I can't move media to the new drive. I would have thought the cache would handle this temporary outage; I would also expect the local drive cache to handle the sudden influx of files and not knock off the CloudDrives. I also would think that DrivePool would still be usable and not mark the drive as read-only. What am I doing wrong, and how do I fix this behaviour?
  9. Hello everyone! Another happy user of DrivePool with a question regarding NTFS compression and file evacuation. A few days ago, one of my drives started showing a reallocated sector count. StableBit Scanner ordered DrivePool to evacuate all the files, but there was not enough space, so some of them remained. I bought another drive, added it to the pool, and tried to remove the existing one, getting an "Access is Denied" error. Afterwards, I tried to force evacuation of files from the damaged drive using the appropriate option in StableBit Scanner. This triggered a rebalance operation that was going very well, but then I noticed several hundred GB marked as "Other" not being moved. Then it struck me that the new drive has some files without NTFS compression, whereas the old drives in the pool had it enabled. I think that since the checksums are not the same for compressed and uncompressed files, this is somehow confusing the scanner. What I have done so far (for consistency at least; I hope this doesn't make things worse!) is disable compression on all the folders where I had it enabled (on the old drives, including the faulty one) and wait for the rebalance to complete. Is this the right approach? Is this also expected to happen when using NTFS compression? Is it actually not worth the hassle to have it enabled in DrivePool? (I was not getting fantastic savings, but hey, every little helps, and I wasn't noticing performance degradation.) I hope the post makes sense, and I also hope my data is not compromised by taking the wrong steps! Thanks! DrivePool version: 2.2.0.738 Beta Scanner: 2.5.1.3062 OS: Windows 2016 DrivePool screenshot attached as well
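     For anyone wanting to script the decompression step described above rather than click through Explorer, here is a minimal sketch using Windows' built-in compact.exe; the folder path is a hypothetical example, so point it at your own directories:

         # Hedged sketch: strip NTFS compression from a folder tree with the
         # built-in compact.exe, mirroring the manual step described above.
         import subprocess

         FOLDER = r"D:\Media"  # hypothetical folder that had compression enabled

         # /U = uncompress, /S:<dir> = recurse into subdirectories,
         # /I = continue on errors, * = apply to all files
         subprocess.run(["compact", "/U", f"/S:{FOLDER}", "/I", "*"], check=True)

     Running it once per previously compressed folder (before letting the rebalance finish) reproduces the same manual approach.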
  10. Hi all. I'm testing out a setup under Server 2016 utilizing DrivePool and SnapRAID. I am using mount points to folders. I did not change anything in the snapraid conf file for hidden files:

      # Excludes hidden files and directories (uncomment to enable).
      #nohidden
      # Defines files and directories to exclude
      # Remember that all the paths are relative at the mount points
      # Format: "exclude FILE"
      # Format: "exclude DIR\"
      # Format: "exclude \PATH\FILE"
      # Format: "exclude \PATH\DIR\"
      exclude *.unrecoverable
      exclude Thumbs.db
      exclude \$RECYCLE.BIN
      exclude \System Volume Information
      exclude \Program Files\
      exclude \Program Files (x86)\
      exclude \Windows\

      When running snapraid sync, it reports that it is ignoring the covefs folder:

      WARNING! Ignoring special 'system-directory' file 'C:/drives/array1disk2/PoolPart.23601e15-9e9c-49fa-91be-31b89e726079/.covefs'

      Is it important to include this folder? I'm not sure why it is excluding it in the first place, since nohidden is commented out. But my main question is whether covefs should be included. Thanks.
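      For what it's worth, the warning text itself suggests SnapRAID is skipping .covefs because it is flagged as a special system directory, independently of the nohidden setting. If you would rather make the exclusion explicit in the config instead of relying on that behaviour, a hypothetical addition (assuming SnapRAID's usual match-anywhere rule syntax, as used by the rules already in the file) would be:

          # Hypothetical rule: explicitly exclude DrivePool's hidden metadata
          # folder inside each PoolPart directory. The warning above indicates
          # SnapRAID already skips it on its own.
          exclude .covefs\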
  11. I've added a new drive to my pool and I'd like to spread the files in the pool over to the new drive. I realize this isn't the default behaviour, so I've added the "Disk Space Equalizer" balancer to my system. At the moment it's the only balancer I have enabled, and the system STILL isn't moving the files (and therefore balancing the space). Any idea what I'm doing wrong? Is there a log or something that I can check to see why it's ignoring the only balancer present?
  12. Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool already works VERY well with StableBit CloudDrive. StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a pool without any issues or workarounds. Why is this important, and how does it affect your pool? You can use the File Placement Rules to control which files end up on which drive. This means that you can place specific files on a specific CloudDrive disk. You can use the "Disk Usage Limiter" to allow only duplicated data on specific drives, which means you can place only duplicated files on a specific CloudDrive disk. These are some very useful tricks for integrating the products already, and if anyone else finds some neat tips or tricks, we'll add them here as well.
  13. I've been having issues for the past few days with DrivePool showing "Statistics are Incomplete" in the GUI. The issue seems to be the CloudDrive I have in the pool, because it is showing "Other" where there are actually duplicated files. In this state, duplication and balancing are not working. I checked the PoolPart folder on that drive and can see all the duplicated files and folders that DrivePool has placed there, and it had been working for a couple of weeks until this week. I uninstalled CloudDrive and DrivePool and reinstalled both to see if there was just a glitch, but still no luck. I also enabled file system logging in DrivePool and had it re-measure, but there are no errors in the logs that I can see. I just don't understand what the issue could be, especially when I can see all the files in the PoolPart folder, and it looks like it has also placed some new files there today, and those are showing up. I'm currently using the following application versions: DrivePool: 2.2.0.651 BETA CloudDrive: 1.0.0.870 OS: Windows 10 Pro. Here's a screenshot showing my current setup, with the CloudDrive showing mostly "Other" when all that's on it is the PoolPart folder and its contents. This screenshot shows the files and folders correctly within the PoolPart folder.
  14. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified. It turns out all 48 sectors were part of the same (1) ~1.5GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD Red spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitting one disk over. SO... Is Scanner not aware of this file? What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool notice the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one? IN A PERFECT WORLD, I WOULD LOVE TO SEE... Scanner identify the bad sectors, check whether any files were damaged, and present that information to the user (currently I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and then was presented with the list of damaged files). At that point, the user could take action with a list of options which, in one way or another, allow the user to:
      - Flag the sectors in question as bad so no future data is written to them (remapped).
      - Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
      - Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).
      Thanks for your ears, and for some really great software. I would love to hear what the developers and community think about this, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
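      Since DrivePool keeps each pooled file inside a hidden PoolPart.* folder on the underlying disks (as the PoolPart discussions elsewhere in these results show), one manual way to check for the healthy copy is to search the other pool members for the same pool-relative path. A minimal sketch, with hypothetical drive letters and file path:

          # Hedged sketch: locate copies of a damaged pooled file by scanning
          # the hidden PoolPart.* folders on each pool member disk.
          from pathlib import Path

          POOL_DISKS = ["D:\\", "E:\\", "F:\\"]   # hypothetical pool members
          REL_PATH = r"Videos\damaged_movie.mkv"  # hypothetical pool-relative path

          def find_copies(rel_path):
              copies = []
              for disk in POOL_DISKS:
                  for part in Path(disk).glob("PoolPart.*"):
                      candidate = part / rel_path
                      if candidate.is_file():
                          copies.append(candidate)
              return copies

          for copy in find_copies(REL_PATH):
              print(copy, copy.stat().st_size)

      Comparing the sizes (or hashes) of the copies found this way would show which one survived intact before deciding whether to overwrite the bad copy or delete it and let DrivePool re-duplicate.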
  15. Greetings, I have two NAS devices plus a few local hard drives, and I'd like to aggregate them all together into one pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum it seems this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving it. I can create a pool of the local drives in DrivePool, and I can create a drive from a shared folder using CloudDrive, but I'm not seeing that CloudDrive location show up in DrivePool. From my initial look, I think I'd prefer to just use DrivePool if possible, as it seems to have the sort of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool, but I could be fine with mapping the NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 2.1.1.561 x64 and CloudDrive 1.0.0.854 x64. The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NASes plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those who see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice on all this!
  16. Team, I am currently running (or was...) PoolHD with one drive pool containing two physical disks. That pool contained two top-level directories, "Duplicated" and "Non Duplicated"; i.e., PoolHD balanced the non-duplicated files across both disks and kept the duplicated files duplicated across both disks. I have now upgraded to W10 and PoolHD no longer works. I expected this, as it is not supported on W10, and I had always intended to migrate to DrivePool, because Windows Storage Spaces requires the drives (that are to be added to a Storage Space) to be cleanly formatted, and of course I can't do that, because the drives contain data. Now, just like DrivePool, PoolHD stores its files in standard NTFS directories, and it even gives advice on how to migrate from DrivePool to PoolHD by changing directory names to match its naming conventions. Before purchasing DrivePool, I downloaded a trial and created a new pool, but DrivePool will only add physical disks to the pool that have not previously been in a PoolHD pool; i.e., DrivePool doesn't see the two physical disks that were part of a PoolHD pool, even though both drives effectively contain only a standard NTFS file structure and PoolHD is uninstalled. Remembering that I effectively have two physical drives containing two top-level directories, one whose contents are balanced across both drives and the other (the duplicated directory) with identical content on both drives, how can I add them to a DrivePool pool? [Note: I guess the secret is in the naming of some directories in the root of each drive that indicates to DrivePool that it should steer well clear, but these are only directory names, so I'm quite happy to edit them as necessary.] Thanks in advance, Woody.
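      Purely to illustrate the directory-editing idea in that note (this is a hedged sketch, not a confirmed migration procedure): since DrivePool keeps pooled files in a hidden PoolPart.* folder of ordinary NTFS directories, one approach people describe is to add the drive to a new pool and then move the existing top-level directories into that PoolPart folder so DrivePool picks them up on the next re-measure. Paths and old directory names below are taken from the post but otherwise hypothetical:

          # Hedged sketch of "seeding" a new pool with existing data. Assumes
          # the drive has already been added to a new DrivePool pool, which
          # creates a hidden PoolPart.* folder in the drive root.
          import shutil
          from pathlib import Path

          drive = Path("D:\\")
          pool_part = next(drive.glob("PoolPart.*"))  # folder DrivePool created

          for old_dir in ("Duplicated", "Non Duplicated"):
              src = drive / old_dir
              if src.exists():
                  shutil.move(str(src), str(pool_part / old_dir))

      If leftover PoolHD folders in the drive root are what stops DrivePool from adding the disk in the first place, renaming those folders would have to come before this step, and duplication levels would then be re-applied per folder in the DrivePool UI.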
  17. The only topics I've seen on this are a little old. It seems to me that it might work to enable Server's Data Deduplication tasks against the raw disk volumes instead of the main DrivePool partition. Does this sound like it would work? Thanks! -everyonce
  18. Hello, I recently started upgrading my 3TB & 4TB disks to 8TB disks and started 'removing' the smaller disks in the interface. A popup shows "Duplicate later" and "Force removal"; I check yes on both and continue. Two days in, it shows 46%, as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage': no luck. I attempted to disable pool duplication: infinite loading bar. I changed file placement rules to populate the other disks first: no luck. Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps, and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as those copies are only used for 'duplication'... It looks like it is migrating 'duplicated' files to the cloud instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA DrivePool: 2.2.0.651 BETA
  19. I have been running a "server" for a number of years with both Scanner and DrivePool as an integral part of it all (I LOVE these products!!!). I think it's time to redesign my current virtualization environment, and I wanted to know what you guys think. My current setup, a "host" running Win10 Pro with Client Hyper-V:
      - Scanner and DrivePool for media, backups, VMs, etc.
      - CrashPlan for offsite backups (~10 incoming clients)
      - Plex Media Server (doing occasional transcodes)
      - Multiple VMs (Win7 for WMC recording, Win10 testing, VPN appliance, UTM appliance, etc.)
      I feel like the current setup is getting a little too "top-heavy". My biggest pain points are that I have to bring down the entire environment every time M$ deploys patches, and a lack of clean backups for the VMs, which are getting more and more important. Since budget is a big concern, I'm hoping to re-work my existing environment. My proposed setup:
      - "Host" running Hyper-V Server, CrashPlan, Scanner and DrivePool
      - VM for Plex Media Server, targeting shares on the host
      - VM for WMC TV recording, moving recordings to a host share
      - Etc., etc.
      I believe this design will allow the host to be more stable and help with uptime. What do you guys think of this? I know I can install and run CrashPlan, Scanner and DrivePool on Hyper-V Server, but I've never done any long-term testing. Also, can anyone recommend a good, free way to back up those VMs under Hyper-V Server? If I can get a backup of those VMs onto the DrivePool array and sent offsite via CrashPlan, that would be perfect. -Quinn
  20. Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with CloudDrive. Current setup:
      - SSD Optimizer writes to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offloads to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB).
      - Balancing is set to percentage (as the disks are different sizes).
      - 1x 64GB SSD dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB).
      Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep one file on SSD, one on HDD and one on CloudDrive... but it is loading from CloudDrive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk, and just keep the CloudDrive as backup?
      Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it re-balances that data off?
      Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy, which doesn't seem to fill the 'modified' RAM cache but is only 20% slower... = fewer crashes... but the system is still unresponsive.
      Problem 4: I'm getting "I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times)," causing the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they could be analysed later on (if necessary).
      Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I've tried it and had a lot of lockups/crashes and an average of 10 Mbps upload, so I removed it. Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)?
      Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  21. Hi all, I've been running StableBit DrivePool for years with no problems, but last week my PC had a hard shutdown, and since then I have this problem: I can see and access my DrivePool OK (drive I:), and the SB DrivePool service is running, but when I try to access the SB DrivePool tab in the WHS 2011 Dashboard, the Dashboard freezes. I've tried the following fixes: rebooted the PC; ran a repair of the DrivePool installation (from the Windows "Uninstall programs" control panel page); restarted the SB DrivePool service. Any ideas? I'd like to try removing and/or reinstalling DrivePool, but I'm not sure if that's a good idea? Thanks for the help :-)
  22. I have a new Windows 10 install with DrivePool and Scanner, and I moved my licenses from my old computer to the new one. I set up two pools for storage and noticed something weird with Scanner: it was showing 4KB of activity in the performance column every 2 seconds or so. This causes the disks to keep pausing the scan due to activity, and they now never spin down. I did some searching around and didn't see anything posted about it, so I did some troubleshooting. Running Procmon shows the DrivePool service writing to the drive, and stopping the service stops the activity. Should I downgrade to the stable version? Is the DrivePool service critical to operation? Is there an advanced setting somewhere?
  23. GPT vs MBR: Is there any advantage to using GPT vs MBR on standard storage HDDs for use in a pool? (posted by grimpr)
  24. Is there any advantage to using ECC RAM on a Windows DrivePool server? I've heard horror stories about not using ECC with ZFS, people losing their entire pools due to memory corruption and the way ZFS uses RAM for scrubbing. There's a lot of hype about ZFS, which made me consider it; I played with it in a FreeBSD virtual machine, and it's powerful stuff, but somehow I feel safer with plain old NTFS and DrivePool. Only the ECC RAM question remains.
  25. Hello everyone! I'm new here, trying to do as much research as I can before purchasing. I'm liking all the information I've seen on the main site, the manual, and the forums/old forums. I think I caught a little information off Reddit that pushed me here. I'm hoping for loads of information, and maybe this will help MANY people in the long run decide what to do. So, first off on the topic: I would like to use StableBit's products only, and in doing so I gathered some cans and cannots. DrivePool and Scanner are a pair made to secure any deal, but I'm also worried about parity. My current pool is: 5x 4TB Seagates and 2x 3TB Seagates. The purpose of my pool is family movies, music and pictures. The music and pictures are small; the movies range from 400MB to 16GB. Here's some of the Reddit research that put me on the research run about StableBit products. In it I was told that:
      1. DrivePool offers redundancy via duplication.
      2. The creator of StableBit products has a YouTube vlog channel (I couldn't find it, but found StableBit.com's channel, which had only two videos and no vlogs).
      3. One user spoke very highly of StableBit products (has owned them for 4-5 years now).
      4. DrivePool's duplication works by the user setting folders or subfolders to be duplicated 2x, 3x and so on.
      I was confused by the duplication settings, and by whether there is parity to cover at least one HDD failure, or more depending on settings. I really love the way these products look, the active community, and the progressiveness of the Covecube staff! I need to strongly emphasize that I would really rather use StableBit's products: fewer programs running, and I wouldn't have to worry about which one is or isn't causing problems. This is a two-part thread, so this is the end of the first research part.
      Now for the second part of the research. I've seen in a few places the idea of using StableBit DrivePool for pooling the drives with FlexRAID (RAID-F) for the parity setup, mostly using all the programs from StableBit while setting and "almost" forgetting the FlexRAID setup. Here's the research I've dug up, or what I could find. Oddly, I found a couple of hints on the FlexRAID forums, but nothing saying where on the forums they were or what to search for; most of it was on the old Covecube forums, which are read-only. I would post links, but I think I'll just quote the little information I need so this thread doesn't get kicked. I read the information in the first thread above, and it talks about how this is possible. Saitoh183 posted a few times on that thread with more information on DrivePool and FlexRAID, making sure everyone knows that you lose one or more drives (a drive as large as your largest data drive, not the sum of all drives) as a parity disk, a so-called PPU. The second quote of research is from a small thread "explaining" how to set the two up together. I know and understand that Saitoh183 said: "It doesn't matter in which order you set them up. DP just merges your drives. Your performance comes from DP and Flexraid doesn't affect it. Flexraid is only relevant for performance if you were using the pooling feature of it. Since you aren't, the only performance you will see is when your parity drive is being updated. Also dont forget to not add your PPU to the pool." So I know from Saitoh183 that the order doesn't matter, but I figured you would have StableBit DrivePool set up the drive letter first. Then, going to FlexRAID:
      1. Add a new configuration.
      2. Name it, choose Cruise Control with Snapshot, and Create.
      3. Open the new configuration to edit it and open Drive Manager.
      4. Add the DRUs (data drives) and one or more PPUs for parity backups (snapshots). I've read a few setup guides; I've heard 1 PPU drive for every 6 and 1 PPU for every 10, and that both are fine.
      5. Initialize the RAID; if data is already on the DRUs, it will now do a parity snapshot. Then go back to the home page for the named configuration and Start Storage Pool.
      I'm not sure what else to do after that, if that's even right. I don't think the FlexRAID pool should have a drive letter, or it would make things more confusing than it already is with two programs. Please enlighten me with any information that can help this research; it will help with my purchase, and hopefully with more people who decide to do this setup too. I appreciate everyone up front for their past help with others, which even got me here with this information! Thanks again. Techtonic