Showing results for tags 'Drivepool'.


  1. Hello everyone! Another happy user of DrivePool with a question regarding NTFS compression and file evacuation. A few days ago I started getting reallocated-sector counts on one drive. StableBit Scanner ordered DrivePool to evacuate all the files, but there was not enough space, so some of them remained. I bought another drive, added it to the pool, and tried to remove the existing one, getting an "Access is Denied" error. Afterwards, I tried to force evacuation of files from the damaged drive using the appropriate option in StableBit Scanner. This triggered a rebalance operation which was going very well, but then I noticed several hundred GB marked as "Other" not being moved. Then it struck me that the new drive has some files without NTFS compression, whereas the old drives in the pool did have it enabled. I think that, since the checksums are not the same for compressed and uncompressed files, this is somehow confusing the scanner. What I've done so far (for consistency at least, and I hope this doesn't make things worse!!!) is to disable compression on all the folders where I had it enabled (on the old drives, including the faulty one) and wait for the rebalance to complete. Is this the right approach? Is this expected to happen when using NTFS compression? Is it actually not worth the hassle to have it enabled in DrivePool? (I was getting not-fantastic savings, but hey! every little helps, and I wasn't noticing performance degradation.) Hope the post makes sense, and I also hope my data is not compromised by taking the wrong steps! Thanks! DrivePool version: 2.2.0.738 Beta Scanner: 2.5.1.3062 OS: Windows 2016 Attached DrivePool screenshot as well
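As a side note on whether compression is worth the hassle: one rough way to estimate the potential savings before enabling it is to sample some files and compress them in memory. This is only a sketch, and it uses zlib as a stand-in (an assumption on my part: NTFS actually uses its own LZNT1/XPRESS algorithms), so treat the number as a ballpark, not a prediction.

```python
import zlib
from pathlib import Path

def estimated_savings(root, sample_limit=100):
    """Rough proxy for NTFS-compression savings: zlib-compress up to
    `sample_limit` files under `root` and compare total sizes.
    NTFS uses different algorithms, so this is a ballpark only."""
    raw = packed = 0
    files = (p for p in Path(root).rglob("*") if p.is_file())
    for i, path in enumerate(files):
        if i >= sample_limit:
            break
        data = path.read_bytes()
        raw += len(data)
        packed += len(zlib.compress(data, 1))  # fast compression level
    return 1 - packed / raw if raw else 0.0
```

A result near 0 (typical for already-compressed media like video) suggests NTFS compression won't buy you much on those folders.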
  2. Hi all. I'm testing out a setup under Server 2016 utilizing Drivepool and SnapRAID. I am using mount points to folders. I did not change anything in the snapraid conf file for hidden files: # Excludes hidden files and directories (uncomment to enable). #nohidden # Defines files and directories to exclude # Remember that all the paths are relative at the mount points # Format: "exclude FILE" # Format: "exclude DIR\" # Format: "exclude \PATH\FILE" # Format: "exclude \PATH\DIR\" exclude *.unrecoverable exclude Thumbs.db exclude \$RECYCLE.BIN exclude \System Volume Information exclude \Program Files\ exclude \Program Files (x86)\ exclude \Windows\ When running snapraid sync it outputs that it is ignoring the covefs folder - WARNING! Ignoring special 'system-directory' file 'C:/drives/array1disk2/PoolPart.23601e15-9e9c-49fa-91be-31b89e726079/.covefs' Is it important to include this folder? I'm not sure why it is excluding it in the first place since nohidden is commented out. But my main question is if covefs should be included. Thanks.
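For what it's worth, .covefs is DrivePool's internal per-drive metadata folder rather than pooled user data, which is presumably why SnapRAID's 'system-directory' detection skips it. If you'd rather be explicit than rely on that automatic skip, you could add an exclude rule yourself; this is only a sketch, and the wildcard pattern and path are assumptions you'd adjust to your own mount points:

```
# Hypothetical snapraid.conf addition: make the skip of DrivePool's
# metadata folder explicit instead of relying on automatic detection.
# Paths are relative to the mount point; adjust the PoolPart name to yours.
exclude \PoolPart.*\.covefs\
```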
  3. I've added a new drive to my pool and I'd like to spread the files in the pool over to the new drive. I realize this isn't the default behaviour, so I've added the "Disk Space Equalizer" balancer to my system. At the moment it's the only balancer I have enabled, and the system STILL isn't moving the files (and therefore balancing the space). Any idea what I'm doing wrong? Is there a log or something I can check to see why it's ignoring the only balancer present?
  4. Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool works VERY well with StableBit CloudDrive already. StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a pool without any issues or workarounds. Why is this important, and how does it affect your pool? You can use the File Placement Rules to control which files end up on which drive. This means that you can place specific files on a specific CloudDrive disk. You can use the "Disk Usage Limiter" to allow only duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk. These are some very useful tricks for integrating the products already. And if anyone else finds some neat tips or tricks, we'll add them here as well.
  5. I recently had Scanner flag a disk as containing "unreadable" sectors. I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified. Turns out all 48 sectors were part of the same ~1.5GB video file, which had become corrupted. As Scanner spent the following hours scrubbing all over the platters of this fairly new WD RED spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool. Meaning, a perfectly good copy of the file was sitting one disk over. SO... Is Scanner not aware of this file? What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool figure out the discrepancy and re-duplicate it onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one??? IN A PERFECT WORLD, I WOULD LOVE TO SEE... Scanner identifies the bad sectors, checks to see if any files were damaged, and presents that information to the user. (Currently I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and then was presented with the list of damaged files.) At this point, the user can take action from a list of options which, in one way or another, allow the user to: flag the sectors in question as bad so no future data is written to them (remapped); automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool; attempt to recover the damaged file (with a warning that this could be a very lengthy operation). Thanks for your ears and some really great software. Would love to see what the developers and community think about this, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
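One manual approach while waiting for an official answer: since DrivePool stores duplicates as ordinary files inside the hidden PoolPart.* folder on each disk, you can hash both copies yourself to confirm which one is intact before deleting or overwriting anything. A sketch (the paths are made up; substitute your own PoolPart folders):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so multi-GB videos don't have to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Hypothetical PoolPart paths for the two duplicates of the damaged video:
#   sha256_of(r"D:\PoolPart.xxxx\Videos\movie.mkv")
#   sha256_of(r"E:\PoolPart.yyyy\Videos\movie.mkv")
# If the digests differ, one copy is damaged: replace the bad copy with the
# good one (or delete it and let DrivePool re-duplicate onto healthy sectors),
# then re-hash to confirm both copies match.
```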
  6. Greetings, I have two NAS devices, plus a few local hard drives, and I'd like to aggregate them all together in one pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum it seems like this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving this. I can create a pool of the local drives in DrivePool, and I can create a drive from a shared folder using CloudDrive. But I'm not seeing that CloudDrive location show up in DrivePool. From my initial looking I think I'd prefer to just use DrivePool if possible, as it seems to have the sorts of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool. But I'd be fine with mapping NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 2.1.1.561 x64 and CloudDrive 1.0.0.854 x64. The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NASes, plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those that see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice on all this!
  7. Team, I am currently running (or was...) PoolHD, with one drive pool containing two physical disks. That drive pool contained two top-level directories: "Duplicated" and "Non Duplicated", i.e. PoolHD balanced the non-duplicated files across both disks, and the duplicated files were duplicated across both disks. I have now upgraded to W10 and PoolHD no longer works. I expected this, as it is not supported on W10, and I had always intended to migrate to DrivePool, because Windows Storage Spaces requires the drives (that are to be added to a Storage Space) to be cleanly formatted, and of course I can't do that, because the drives contain data. Now, just like DrivePool, PoolHD stores the files in standard NTFS directories, and even gives advice on how to migrate from DrivePool to PoolHD by changing directory names to match the naming conventions. Before purchasing DrivePool, I downloaded a trial and created a new pool, but DrivePool will only add physical disks that have not previously been in a PoolHD pool, i.e. DrivePool doesn't see the two physical disks that were part of a PoolHD pool, even though both drives effectively contain only a standard NTFS file structure, and PoolHD is uninstalled. Remembering that I effectively have two physical drives that contain two top-level directories, one whose contents are balanced across both drives and the other (the duplicated directory) with identical content on both drives, how can I add them to a DrivePool pool? [Note: I guess the secret is in the naming of some directories in the root of each drive, which indicates to DrivePool that it should steer well clear, but these are only directory names, so I'm quite happy to edit them as necessary.] Thanks in advance, Woody.
  8. The only topics I've seen on this are a little old. It seems to me that it might work to enable Server's Data Deduplication tasks against the raw disk volumes instead of the main DrivePool partition? Does this sound like it would work? Thanks! -everyonce
  9. Hello, I recently started upgrading my 3TB & 4TB disks to 8TB disks and started 'removing' the smaller disks in the interface. It shows a popup: "Duplicate later" & "Force removal" -> I check yes on both... Two days later it still shows 46%, as it kept migrating files off to the cloud drives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage' -> no luck. Attempted to disable pool duplication -> infinite loading bar. Changed file placement rules to populate other disks first -> no luck. Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps... and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for 'duplication'... It looks like it is migrating 'duplicated' files to the cloud, instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA DrivePool: 2.2.0.651 BETA
  10. I have been running a "server" for a number of years, with both Scanner and DrivePool being an integral part of it all (I LOVE these products!!!). I think it's time to redesign my current virtualization environment, and I wanted to know what you guys think. My current setup, a "host" running Win10 Pro w/Client Hyper-V: - Scanner and DrivePool for media, backups, VMs, etc. - CrashPlan for offsite backups (~10 incoming clients) - Plex Media Server (doing occasional transcodes) - Multiple VMs (Win7 for WMC recording, Win10 testing, VPN appliance, UTM appliance, etc.) I feel like the current setup is getting a little too "top-heavy". My biggest pain points are the fact that I have to bring down the entire environment every time M$ deploys patches, and a lack of clean backups for the VMs, which are getting more and more important. Since budget is a big concern, I'm hoping to re-work my existing environment. My proposed setup: "Host" running Hyper-V Server, CrashPlan, Scanner and DrivePool; a VM for Plex Media Server, targeting shares on the host; a VM for WMC TV recording, moving recordings to a host share; etc., etc. I believe this design will allow the host to be more stable and help with uptime... what do you guys think of this? I know I can install and run CrashPlan, Scanner and DrivePool on Hyper-V Server, but I've never done any long-term testing. Also, can anyone recommend a good, free way to back up those VMs with Hyper-V Server? If I can get a backup of those VMs onto the DrivePool array and sent offsite via CrashPlan, that would be perfect. -Quinn
  11. Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012 R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with CloudDrive. Current setup: - Use the SSD Optimizer to write to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offload to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB). - I've set balancing to percentage (as the disks are different sizes). - 1x 64GB SSD dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB). Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep one file on SSD, one on HDD and one on the cloud drive... but it is loading from the cloud drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk, and just have the CloudDrive as backup? Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue? After which it re-balances that data off? Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy (which doesn't seem to fill the 'modified' RAM cache, but is only 20% slower... = fewer crashes... but the system is still unresponsive). Problem 4: I'm getting I/O errors: "Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times)." This causes the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they could be analysed later on (if necessary). Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I tried it and had a lot of lockups/crashes and an average of 10 Mbps upload, so I removed it. Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to the Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)? Question 2: Does Google Drive require upload verification? Couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  12. Hi all, I've been running Stablebit Drivepool for years with no problems, but last week my PC had a hard shutdown, since then I have this problem. I can see and access my Drivepool ok, (Drive I:), the SB Drivepool service is running, but when I try to access the SB Drivepool tab in the WHS 2011 Dashboard the Dashboard freezes. I've tried the following fixes: Rebooted the PC Run a repair of the Drivepool installation (from the Windows Uninstall programs control panel page) Restarted the SB Drivepool service Any ideas? I'd like to try removing and/or reinstalling Drivepool but I'm not sure if that's a good idea? Thanks for the help :-)
  13. I have a new Windows 10 install with DrivePool and Scanner, and moved my licences from my old computer to the new one. I set up two pools for storage and noticed something weird with Scanner. It was showing 4KB of activity in the performance column every 2 seconds or so. This is causing the disks to keep pausing the scan due to activity, and they now never spin down. I did some searching around and didn't see anything posted anywhere about it, so I did some troubleshooting. Running Procmon shows the DrivePool service writing to the drive, and stopping the service stops the activity. Should I downgrade to the stable version? Is the DrivePool service critical to the operation? Is there an advanced setting somewhere?
  14. grimpr

    GPT vs MBR

    Is there any advantage to using GPT vs MBR on standard storage HDDs for use in a pool?
  15. Is there any advantage to using ECC RAM on a Windows DrivePool server? I've heard horror stories about not using ECC on ZFS: people losing their entire pools, etc., due to memory corruption and the way ZFS uses RAM for scrubbing. There's a lot of hype about ZFS, which made me consider it. I played with it on a FreeBSD virtual machine, and it's powerful stuff, but somehow I feel safer with plain old NTFS and DrivePool; only the ECC RAM question remains.
  16. Hello everyone! I'm new here, trying to do as much research as I can before purchase. I'm liking all the information I've seen on the main site, manual, and the forums/old forums. I think I caught a little information off Reddit that pushed me here. I'm hoping for loads of information, and maybe this will help MANY people in the long run decide what to do. So first off, on the topic: I would like to use StableBit's products only, so in doing so I gathered some cans and cannots. DrivePool with Scanner are a pair made to seal the deal, but I'm also worried about parity. My current pool is: 5x 4TB Seagates, 2x 3TB Seagates. The purpose of my pool is family movies/music and pictures. The music and pictures are small, while the movies range from 400MB to 16GB. Here's some of the Reddit research that put me on the research run about StableBit products. In it I was told that: 1. DrivePool offers redundancy via duplication. 2. The creator of StableBit products has a YouTube vlog channel (couldn't find it, but found StableBit.com's, which only had two videos, no vlogs). 3. One user spoke very highly of StableBit products (has owned them for 4-5 years now). 4. DrivePool's duplication works via the client setting the folders or subfolders to be duplicated 2x, 3x and so on. I was confused by the duplication settings, and whether there is parity for at least one HDD failure, or more depending on settings. I really love the way these products look, the active community, and the progressiveness of the Covecube staff with their products! I need to strongly point out that I would really rather use StableBit's products: fewer programs running, and I wouldn't have to worry about which one is or isn't causing problems. This is a two-part thread, so this is the end of the first research part. 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------- Now for the second part of the research. I've seen in a few places people using StableBit DrivePool for pooling the drives with FlexRAID (RAID-F) for the parity setup, but mostly using all the programs from StableBit while "almost" setting and forgetting the FlexRAID setup. Here's the research I've dug up, or what I could. Oddly, I found a couple of hints on the FlexRAID forums, but nothing saying where they were on the forums or what to search for. So most of it was on the old Covecube forums, which are read-only. I would put links, but I think I'll just select the little information I need so this thread doesn't get kicked. Ok, I read the information in the first thread above, which talks about how it was possible. Saitoh183 posted a few times on that thread with more information on DrivePooling and FlexRAIDing. He makes sure everyone knows that you lose one or more drives (the largest, or equal in size to every drive, "not put together") for a parity disk, a so-called PPU. In the second quote of research there is a small thread "explaining" how to set up both of them. I know and understand that Saitoh183 said: "It doesn't matter in which order you set them up. DP just merges your drives, Your performance comes from DP and Flexraid doesn't affect it. Flexraid is only relevant for performance if you were using the pooling feature of it. Since you aren't, the only performance you will see is when your parity drive is being updated. Also dont forget to not add your PPU to the pool" So, from what Saitoh183 says, the order doesn't matter, but I figured you would have StableBit DrivePool set up the drive letter first. Then, going to FlexRAID: 1. Add new configuration. 2. Name it, choose Cruise Control with Snapshot, and Create. 3. Open the new configuration to edit it, and open Drive Manager. 4. Add the DRUs (data drives) and one or more PPUs for parity backups (snapshots). I've read a few setup guides; I've heard 1 PPU drive for every 6 and 1 PPU drive for every 10, and that both are fine. 5. Initialize the RAID; if data is on the DRUs it will now do a parity snapshot. Then go back to the home page for the named configuration and Start Storage Pool. Not sure what else to do after that, or if it's even right. I don't think the FlexRAID pool should have a drive letter, or it would make things more confusing than it already is using two programs. Please enlighten me with any information that can help this research; it will help with my purchase, and hopefully with more people who decide to do this setup too. Firstly, I appreciate everyone up front for their past help with others, which got me here with this information! Thanks again. Techtonic
  17. I'm using DP on a new Windows 2012 R2 Essentials install. My DP is 2.1.1.561 which, I believe, is the latest official release. I've noticed that every time I go into the Dashboard it says that DrivePool is "checking". These checks take a long time and appear to use a fair bit of disk throughput to complete, so I'm wondering why they are happening all the time. My previous install (I just rebuilt the OS) didn't seem to do this, although maybe I missed it. Any suggestions?
  18. I currently use FlexRAID RAID-F for snapshot RAID, which I'm pretty happy with. I also use the FlexRAID drive pooling feature to merge 8 HDDs, which I'm not particularly happy with, since it doesn't work as I'd like it to (and is also, IMO, simply broken). I'd like to try migrating my current setup to use DrivePool in conjunction with snapshot RAID. My first question is: are there any known issues with such a setup? For example, does DrivePool do anything that would invalidate daily RAID snapshots? Secondly, I'd like confirmation that DrivePool can actually do what I want. Here's an example of the root directories on my drives: Drive 1: Data, Audio, Backups; Drive 2: Video\TV; Drive 3: Video\TV; Drive 4: Video\Films; Drive 5: Video\Films. In this scenario, if I copied a file into the \Video\Films folder, I want it to go to either Drive 4 or Drive 5 (I'm not really bothered which, but I suppose it'd make sense to fill one up before using the other). The only scenario that should lead to drives 1-3 having files from the \Video\Films folder on them is if drives 4 & 5 are full. In addition, I'd like only drives 4 & 5 to spin up (ideally at the same time) when accessing the \Video\Films folder via a network share. At the moment, going to such a share takes ages, since FlexRAID shoves files onto random drives without being asked to and then spins all of them up one at a time. Sometimes traversing subdirectories incurs additional delays as (presumably) even more drives are spun up. Finally, what kind of transfer speeds should I expect when copying files into the pool? With FlexRAID pooling, I get ~35 MB/s when writing to the pool, whereas writing to a drive directly yields 150+ MB/s. How about over a gigabit network (where I'll be doing most of my transfers)? Thanks for any advice!
  19. I'm building a new server. I'm currently running 2 HP MicroServer N54Ls (nice and small, with low power usage) and want to switch to 1 more powerful machine. (I think 1 bigger server will be cheaper to run than 2 small ones.) The server holds all my music/movies/TV shows and shares them between my 3-4 Kodi instances via a shared MySQL database, which also runs on the server. (2x WD RED 4TB, 5x 2TB Samsung HD204UI, 1x Seagate 250GB 7200rpm, 1x Samsung 830 SSD.) The SSD I use for the OS (WSE 2012 R2). The 7200rpm drive is a work drive for torrents before moving, and for Usenet RAR/PAR2 creation. At the moment I'm only using "Disk Space Equalizer" set to 90%, so all files are spread over all drives equally. No duplication enabled. Now I'm wondering whether it would be smarter to use another balancer option when I move my server over, but I don't want to lose performance. (The movies I play over the network are Blu-ray remuxes of 20-40GB.) Would it be smarter to use "Ordered File Placement", so that drives only get woken up when I access a file on that drive or during a StableBit Scanner scan? That might also mean less wear on the older drives and lower power usage, because the drives would spend more time asleep. "SSD Optimizer" wouldn't work for me because the SSD is a small 60GB drive.
  20. Okay, so my dashboard had gotten so that I could not open it at all. On a whim, I decided to uninstall SBDP version 1.3.7585, but had to do so through Control Panel>Add/Remove Programs because of the borked Dashboard. It uninstalled, and so I uninstalled the Dashboard Fix as well. Not thinking about it, I got the SBDP 2.x installer and ran it. Yay, SBDP is working and so is my Dashboard! But there's a hitch... The Add-Ins tab on Dashboard shows version 1.3.x is installed. What do I need to do to fix it? Or, should I even worry about it? It seems to be working fine, so I don't want to have to wipe and reinstall WHS2011. I did that once when the Dashboard wouldn't run, and after reinstalling SBDP 1.X I was back right where I started. Thanks in advance for your guidance. Jason
  21. I was reading the changelog and I saw: * [issue #13517] When the "dedup" file system filter is installed, "Bypass file system filters" is overridden and disabled. This allows pooled drives that are utilizing data deduplication to work correctly with StableBit DrivePool. Is this for the Windows Server 2012 R2 Data Deduplication service? It's been a while; I've been using mhddfs, and I was looking into swapping my server back to Windows for AD DC and some other stuff that is easier to do in Windows. Right now I hardlink my Finished folder to another folder called Sort. I then move all the Finished data to Deletable. I can then sort the files in Sort into other directories without issues caused by renaming. This lets me seed to 2.0 in peace without wasted space. I was wondering how DP would handle this. So how does DP handle hardlinks right now? Before it gets asked: I'm using SnapRAID instead of DP's duplication system. A 2-disk parity saves a lot of space. If I can hardlink and/or use deduplication, that would further save space.
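For anyone wanting to test hard-link behavior on a given volume before committing to a workflow like the above, a quick check is to create a link and compare file IDs. This is just a sketch with hypothetical paths; the expectation that the pool rejects hard links comes from DrivePool's documented limitations, not something I've verified myself:

```python
import os

def are_hardlinked(a, b):
    """True if paths `a` and `b` are hard links to the same underlying file
    (same device and same inode/file ID)."""
    sa, sb = os.stat(a), os.stat(b)
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)

# Hypothetical usage, mirroring the Finished -> Sort workflow above:
#   os.link(r"P:\Finished\file.mkv", r"P:\Sort\file.mkv")
#   are_hardlinked(r"P:\Finished\file.mkv", r"P:\Sort\file.mkv")
# On plain NTFS the link succeeds and the check returns True; on a pool
# drive the os.link call itself should fail, since DrivePool doesn't
# support hard links.
```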
  22. Hi, I am new to DrivePool, having just re-built my WHS v1 server into a Win10 server. On the whole the experience so far has been good, without issue, except one weird minor irritant. As you can see from the attached screen cap, the pool looks fine, but 4 of the 5 disks also appear in the non-pooled area, even though all of each disk is assigned to the pool. The pool works fine so there is no real issue, it's just annoying. Any suggestions? Cheers
  23. Just upgraded my home server from WHS 1 to WHS 2011 and installed Drivepool and Scanner. I have 3 x 2TB drives and a portion of the system drive configured as a drive pool. All disks are formatted as NTFS. I've configured all of the server's shared folders to live on the Drivepool. I'm able to transfer files of any size from a Windows 8.1 client to the server but when I attempt to copy a file from a MacPro (running OSX Yosemite) I get the following error message: The item “XXXX” can’t be copied because it is too large for the volume's format I've seen this message before when using FAT formatted USB disks but not with a network share. Any ideas?
  24. Due to how StableBit DrivePool and certain software work, there are some known limitations. Stable Release: 2.2.3.1019 Public Beta Release: N/A

VSS Support. This includes anything that strictly requires VSS support to work, including (but not limited to):
- Windows Backup: you can back up the pooled drives, but not the pool directly.
- Windows Server Backup: you can back up the pooled drives, but not the pool directly.
- Previous Versions
- System Restore

Dynamic Disks. Because of the added complexity of Dynamic Disks, we don't support adding them to a pool. Specifically, we take the physical disk into account when determining where duplicates reside, and that adds a lot more complexity and overhead once you start dealing with Dynamic Disks, due to the complex arrays that you can create with them.

Plex Media Server's database. This database relies on hard-linking redundant files together, instead of using the database to point multiple entries at the same file. Because we don't support hard links on the pool, you may notice missing images in Plex if the database is stored on the pool.

OneDrive for Windows 8.x. Windows 8 uses a new type of file system link to link files to the cloud (and locally) in a seamless way. Unfortunately, we don't currently support this file system link on the pool.

USN Journaling. This may affect how certain apps detect changes on the pool. NOTE: Issues with "change notifications" have been resolved in the beta builds (2.2.0.651 and up). This affects apps like DropBox.

Antivirus Scanners. This is a broad generalization, but due to how they work, they can adversely affect the pool and its performance, and can even cause some pretty bad behavior (e.g., locking the system up). If you suspect that this is the case, version 2.2 and up has the option to disable the "bypass file system filters" feature (located in "Performance", under Pool Options), which may fix this. Otherwise, we recommend completely uninstalling any antivirus program that you suspect is causing issues (disabling isn't enough, as the real-time scanner is actually still active).

Data Deduplication. This mainly applies to the Windows Server role rather than 3rd-party software. The feature works by identifying identical blocks of data: it converts the files into a special kind of reparse point and carves out the duplicate data, which is placed into the "System Volume Information" folder (and may be handled as part of VSS). When you access the data, a file system filter splices the data back together. While not officially supported, it should work on the underlying disks. The public beta builds (2.2.0.651 and up) add some handling for Deduplication, so that the data is detected "intact". Deduplication is NOT supported on the pool, and because it appears to use the same data structure as VSS, it may never be supported.

TrueCrypt/VeraCrypt. This software bypasses the normal disk API, meaning the encrypted disk does not show up for StableBit DrivePool and cannot be used.

ReFS. At this time, full support does not exist. By "full support", we mean registering the Pool drive as ReFS instead of NTFS, integrity stream support, and potentially additional handling for ReFS (such as checking integrity streams, and re-duplicating "bad" files). It should be noted that ReFS only enables integrity streams for drive metadata by default. If you want to enable it for all files, you need to run "Set-FileIntegrity X:\ -enable $true" from a PowerShell prompt.

Long File Paths. StableBit DrivePool doesn't have a problem with long paths, and actually supports them (up to 64k-character paths, I believe). Explorer, however, uses the Win32 API and does not support long paths. There was also an issue where file names (not paths) longer than 200 characters caused problems; this is resolved in 2.2.0.684.

Pool is Removable. This is the default behavior, but it causes issues with some software, and may be a problem for some people. The public beta build (2.2.0.651) disables this behavior by default, and introduces an advanced setting that can be used to toggle it.

Files not showing up in directory lists. If you are experiencing this issue, it may be network related. By default, Windows caches network information, such as directory searches, and can cache an incomplete list. In this case, the fix is a simple registry tweak: http://wiki.covecube.com/StableBit_DrivePool_Q7420208 This will increase network usage slightly, but should not adversely affect the network.

Cannot add VHD(X)-based disks to the pool. Trying to add a drive that is a mounted VHD(X) file to the pool errors out.

Error or long delay when renaming folders on the network. This applies to Windows 10 Anniversary Update (version 1607) as the client, and Windows Server 2012 R2, Windows 8.1 and up as the share server. There is an issue when the share has been indexed locally on the server by Windows Search. This is NOT a StableBit DrivePool bug, but a bug in Microsoft code. For now, the solution is to disable the Windows Search service. However, if you're using Windows Server Essentials, this will adversely affect the built-in streaming options, as well as the file list for the Remote Access website. Microsoft KB article: https://support.microsoft.com/en-us/kb/3198614 You can "fix" this issue here: Q542215 A cumulative update that addresses this issue is available here: Link

Microsoft Store Apps have issues with the pool. This is most likely due to some feature not supported by the pool; in particular, VSS, USN Journaling, or some other file system command/check.

Windows Subsystem for Linux. The file system commands cause issues when navigating the pool. This applies to both v1 and v2 of WSL.

This list is by no means comprehensive, and its contents may change as these issues are investigated.
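The Plex limitation above comes down to hard links: two directory entries pointing at the same file data. A minimal sketch of what that looks like at the filesystem level (purely illustrative, not DrivePool-specific; on a volume that doesn't support hard links, the `os.link` call here would simply fail, which is why Plex's deduplicated artwork goes missing on the pool):

```python
import os
import tempfile


def hard_link_count_after_linking(directory: str) -> int:
    """Create a file, hard-link it once, and return its link count."""
    original = os.path.join(directory, "poster.jpg")
    with open(original, "wb") as f:
        f.write(b"fake-jpeg-data")

    # Software like Plex relies on calls equivalent to this; a filesystem
    # without hard-link support raises an error at this point.
    os.link(original, os.path.join(directory, "poster-link.jpg"))

    # st_nlink counts directory entries referring to the same file data.
    return os.stat(original).st_nlink


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(hard_link_count_after_linking(d))  # 2: one file, two names
```

Both names share one copy of the data on disk; deleting one leaves the other intact, which is the behavior hard-link-dependent databases assume.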
Internal Beta builds can be found here: http://dl.covecube.com/DrivePoolWindows/beta/download/ The latest safe build is 2.2.3.1019. The full change log can be found here: changes.txt Don't download the latest build unless instructed to, especially if it isn't a numbered build listed in the change log, as these are interim development builds and may not be stable.
  25. I'm currently looking to install DrivePool on my Windows 8.1 headless server. This is mainly used for media (TV, movies, music), photos (which are also replicated to Google Drive), user files (saved directly to the server), as well as backups from one other Windows machine. I currently have my media spread across two disks, 1 x 2 TB and 1 x 1.5 TB, which are pretty much full. I've purchased 2 x 3 TB drives to provide some redundancy as well as future headroom. I have a few questions before I jump in and start. 1 - Is there any need to re-organise the data on the disks before pooling them? For example, currently one disk holds movies and the other TV shows. Is there any benefit to bringing these onto a single 3 TB disk, or should I just leave them as they are? 2 - Once the drives are pooled and new files are added, are they just scattered anywhere on the disks in the pool? For example, if you pulled out the disks (say, because of a machine failure) and put them into another machine, would you then see (say) movies across any of the 4 drives under Windows Explorer? 3 - I use SabNzb, Sickbeard & Plex for all of my video downloads and management - when configuring them with Drivepool, do I just use the drive and folder structure of the pool? Sorry if some of these questions are a bit basic - I'm just trying to get my head around the concept of pooling from a simple practical perspective.