Covecube Inc.
Showing results for tags 'Drivepool'.
Found 70 results

  1. Hello all, I have a Windows 2012 R2 server with a 4TB RAID1 mirror (2 x 4TB HDDs, using basic Windows software RAID, not Storage Spaces - you can't boot from a Storage Spaces pool) and some other non-mirrored data disks (1 x 8TB, 2 x 3TB, etc.). The OS and "Important Data" sit on the 4TB mirror, and "Less Important Data" sits on the other non-redundant disks. The 4TB mirror is now nearly full, so I intend either to replace it with a larger mirror, or to replace it with two larger non-mirrored drives and use DrivePool to duplicate some data across them. I'm evaluating whether the greater flexibility of DrivePool is worth the move, instead of sticking with the basic mirror I use currently.

    Current config:
    C: - redundant bootable partition mirrored on Disk 1 and Disk 2
    D: - redundant "Important Data" partition mirrored on Disk 1 and Disk 2, backed up by CrashPlan
    E:, F:, etc. - non-redundant data partitions for "Less Important Data" on Disks 3, 4, etc.

    If I moved to DrivePool, I guess the configuration would be:
    C: - non-redundant bootable partition on Disk 1
    D: - DrivePool "Important Data" pool duplicated on Disk 1 and Disk 2, backed up by CrashPlan
    E: - DrivePool "Less Important Data" pool spread across Disks 3, 4, etc.
    - or -
    C: - non-redundant bootable partition on Disk 1
    D: - DrivePool pool spread across all disks, with "Important Data" folders restricted to Disk 1 and Disk 2 and duplicated there, backed up by CrashPlan (or something similar using hierarchical pools)

    I have a few questions about this:
    - Does it make sense to use DrivePool in this scenario, or should I just stick with a normal RAID mirror?
    - Will DrivePool handle real-time duplication of large in-use files such as 127GB VM VHDs, and if so, is there a write-performance penalty compared to a software mirror?
    - All volumes use Windows data deduplication. Will this continue to work with DrivePool? I understand the pool drive itself cannot have deduplication enabled, but will the drives storing the pool continue to deduplicate the pool data correctly? Related to this, can DrivePool be configured so that if I download a file and then copy it to several different folders, all of those copies will most likely land on one physical disk so the OS can deduplicate them?
    - CrashPlan is used to back up some data. Can it back up the pool drive itself (it needs to receive update notifications from the filesystem)? I believe it also uses VSS to back up in-use files, but as this is mostly static data storage, files should rarely be in use, so I may be able to live without that. Alternatively, could I back up the data on the underlying disks themselves?
    - Are there any issues or caveats I haven't thought of here?
    - How does DrivePool handle long-path issues? Server 2012 R2 doesn't have long-path support and I occasionally run into path-length problems. I can only assume this gets worse when pooled data is stored inside a PoolPart folder, effectively lengthening the path of every file? (See the path-length audit sketch below.)

    Thanks!
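    As an illustrative aside on the last question, here is a minimal PowerShell sketch for auditing path lengths (the P:\ pool drive letter is a placeholder, and on older PowerShell versions files already past the limit may simply be skipped during enumeration):

        # Flag files on the pool whose path is long enough that the ~45-character
        # PoolPart prefix on the underlying disk would push them past the 260-character MAX_PATH limit.
        Get-ChildItem -Path 'P:\' -Recurse -File -ErrorAction SilentlyContinue |
            Where-Object { $_.FullName.Length -gt 215 } |
            Select-Object @{Name='Length'; Expression={ $_.FullName.Length }}, FullName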
  2. Hi, sorry if this is not in a suitable location, but I couldn't find a suggestions option. Would it be possible to implement an auto-tiering option in some way? E.g. automatically use fast storage as primary and slow storage for secondary duplicates? That way data should always come off, say, NVMe first, followed by SSD, SSHD and traditional HDDs. I only ask because by default my duplication goes across HDDs and the performance is... as you would expect! Alternatively, some way of using the faster disks as "caching" drives - perhaps based on frequency of file access or just on file access timestamps? Just some thoughts I had today whilst trying to make best use of my varied storage devices.
  3. I have a DrivePool with 2x folder duplication for my projects. I was initially blaming VS 2019 for possible bugs. However, after I moved the project to my SSD C: drive (not using DrivePool), everything works again. Things I found broken:
    - Slow to no detection of syntax errors
    - Quick Actions for code refactoring do nothing.

    System Information:
    Microsoft Windows 10 Pro, Version 2004, 10.0.19041 Build 19041
    DrivePool 2.2.3.1019
    Microsoft Visual Studio Community 2019, Version 16.7.1 (VisualStudio.16.Release/16.7.1+30406.217)
    Microsoft .NET Framework Version 4.8.04084

    Update: I have found a workaround - share the drive on the network and work from the network share.
  4. I am a complete noob and forgetful on top of everything. I've had one pool for a long time, and last week I tried to add a new pool using CloudDrive. I added my six 1TB OneDrive accounts in CloudDrive, and I may or may not have set them up in DrivePool (that's my memory for you). But whatever I did, it did not use DrivePool to combine all six CloudDrives into a single 6TB drive, and I ran out of space because something was duplicating data from one specific folder in my pool to the CloudDrive. So I removed the CloudDrive accounts, added them back, and this time did use DrivePool to combine the six accounts into one drive. And without doing anything else, data from the one folder I had (and want) duplicated to the CloudDrive started duplicating again - but I don't know how, and I'd like to know how I did that. I've read the nested pools and hierarchical duplication topics, but I'm sure that's not something I set up because it would have been too complicated for me. Is there another, simpler way that I might have made this happen? I know this isn't much information to go on, and I don't expect anyone to dedicate a lot of time to this, but I've spent yesterday and today trying to figure out what I did and I just can't. I've checked folder duplication under the original pool, and the folder is 1x because I don't want it duplicated on the same pool, so I assume that's not what I did.
  5. Hi, I've got a question. Here's what I am starting with:

    Local machine: D: 2 GB, E: 5 GB, F: 3 GB
    Local network server: Windows file share, lots of space.
    Network: 10 Gbit.

    Now, what I want to do (already tried, actually):
    1st pool: "Local Pool Drive" - D:, E:, F:, no duplication, just one big 10 GB pool.
    2nd pool: "File Share Drive" - CloudDrive, Windows File Share provider, 10 GB disk, no encryption.
    3rd, final pool: "Local Pool Drive" + "File Share Drive". Some folders, like my Steam games, I don't want to duplicate because Steam is my 'backup'.

    I set this up, but something went backwards: it seems to be trying to put the un-duplicated files on File Share Drive, which is not what I want. I am currently taking my files back off so I can rebuild this. I think I've set some stuff up wrong - I understand the basics, but I am missing something in the details.

    My thinking was that this would give me a local 10 GB drive and a 10 GB mirror on my file server, so I get the best of both worlds. It seems it can "technically" do this. What I would like to know is: when reading files, will I get local-copy reads, or is everything going to grind to a halt with server reads? Is there a way to force local reads? I expect a cache would hide the uploads unless I went crazy on it (hint: I copied all my files onto it, it went crazy), but I was hoping for a quasi-live backup. So I get full use of my local disk sizes and have a live backup if a drive ever fails or I want to upgrade a disk. Is this one of those "yes, it can, but it doesn't work like you think" or "yes, you're just doing it wrong" things? What are the best options for CloudDrive (pinning, network I/O settings, cache size, drive setup settings)? And what are the best options for the local disks D:, E:, F:?

    Oh, and as a nutty side note, I tend to play games that like SSDs. I have a C: SSD but it's limited in size (yes, I know, buy a new one... eventually I will), so I would like to set up those games alongside my others: add C: to D:, E: and F:, tell it no files go on C: except the ones I say, and EVERYTHING just looks like one D: drive because the actual drive letters will be hidden in Disk Management. So if I want to move a game onto or off of my C:, DrivePool SHOULD make it as easy as ticking a couple of boxes, right?

    That's a lot, so in summary:
    - Can I force local reads when using a pool made of a local pool and a CloudDrive pool together?
    - Can I force certain files onto specific drives only? (Seems I can, just want to confirm.)
    - Can specific folders be excluded from duplication? (Again, I think I know how, but there's a larger context here.)
    - Can I do the above three at once?
    Thanks!
  6. I added two new 12 TB drives to my pool yesterday and hit Re-balance, and it is taking forever. As you can see in the pictures, it is currently moving data from one of my 8 TB drives to one of my new 12 TB drives. It has been doing so all night at about 10 to 20 MB/s. At this rate (2% completed in one day) it will take about 50 days. Also shown in the pictures is me bypassing DrivePool by manually moving some files within the PoolPart folders using Windows Explorer, from a 10 TB drive to the other new 12 TB drive. StableBit Scanner shows that those speeds are normal, around 200 MB/s. The drives are all on the same HBA, so the problem can only be caused by DrivePool. I know the obvious solution would be to manually move all the files to the new drives myself, as demonstrated in the pictures, but that would still be several days of me babysitting the process. I paid for DrivePool and would like to be able to just hit the Balance button and walk away. I tried checking and unchecking the "Bypass file system filters" and "Network I/O boost" settings, but that had no effect, and I am not using file duplication. Is there anything else I can try?
  7. Hi all, I purchased DrivePool about a month ago and I am loving it. I have 44TB pooled up with an SSD cache, and it worked great until yesterday. I downloaded Backup and Sync to sync my Google Drive to the pool; it synced the files I needed to the SSD, and then the SSD balancer moved the data to the archive drives. Then Backup and Sync reported that the items were not synced and synced them again. It seems like Backup and Sync cannot sync to the pool for some reason. Did anyone else have this problem? Is this why we have CloudDrive? I am really disappointed, as if I had known it wouldn't work I wouldn't have bought DrivePool, or I would've bought the whole package at a discount instead of buying them one by one.
  8. Hey, I've set up a small test with 3 physical drives in DrivePool: 1 SSD and 2 regular 4TB drives. I'd like a setup where these three drives can be filled to the brim and any contents are duplicated only onto a fourth drive: a CloudDrive. No regular writes or reads should be done from the CloudDrive; it should only function as parity for the 3 drives. Am I better off making a separate CloudDrive and scheduling an rsync to mirror the DrivePool contents to the CloudDrive, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. What I tried so far didn't work too well: some files I was moving were immediately being written to the parity drive, even though I set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but this is an annoying thing to have to maintain whenever new folders are added).
  9. Hello - I have tried searching the forums a bit and I did find a helpful post, but I was hoping for some clarification before I start work on my pools.

    1. Since DrivePool uses UNC paths, are MaxPath restrictions a non-issue as long as programs access the pool drive directly AND the file's path length on the pool drive is under the MaxPath limit (256 characters)? By that I mean: the PoolPart path length is around 45 characters, so if I have a file within the PoolPart folder whose path length is 280, it's over the MaxPath limit - but roughly 45 characters of that are due to the PoolPart folder name. DrivePool has no problem moving it around etc. because DP is written using UNC paths. On the pooled drive, however, that same file will have a path length of roughly 235 (280 - 45). Does this mean that as long as I only access the files via the pooled drive, there will be no issues with the Win32 API from programs that don't support UNC paths?

    2. If the answer to #1 is yes, and I need to evacuate a drive from the pool and use the data somewhere else, can I use robocopy or another UNC-compatible program to copy the data OUT of the PoolPart folder? (A hedged example follows below.)

    The issue is that, like Drashna mentions in his post, I'm meticulous with my folder and file naming, but I'm trying in general to stay under the 256 limit and not regedit the MaxPath restriction away. So I'm planning path lengths in the pool that will be in the 240s/250s, which will go over the limit inside the PoolPart folders but still be copacetic, in theory, on the actual pooled drive itself. Sorry if I'm overcomplicating things, and thanks Christopher/Drashna and the Covecube team for all of your work on this amazing product.
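    For question 2, a minimal sketch of the kind of evacuation copy meant above (the PoolPart folder name, destination and log path are placeholders, not values from this pool); robocopy uses long-path-aware APIs internally, so the extra PoolPart prefix should not trip it up:

        # Copy everything out of a hidden PoolPart folder to another disk, keeping data, attributes and timestamps.
        robocopy 'D:\PoolPart.xxxxxxxx' 'G:\Evacuated' /E /COPY:DAT /DCOPY:DAT /R:1 /W:1 /LOG:C:\Temp\evacuate.log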
  10. Hello all, I will try to summarize as much as I can. I run my DrivePool on WD Red drives, about 5 of them. I had 1 fail on me in the past and now a 2nd seems to have done the same. I'm not sure why they are failing so much - I thought they were the best, but the desktop PC I have them all crammed into is probably not the best place for them, and I don't have vibration-absorbing screws or anything special, so maybe that's why. Is this failure message a result of DrivePool, with the drives actually not damaged? I will post a screenshot: my drives show lots of space, but 1 drive is completely empty! I thought it was not balancing the drives correctly, but I'm guessing that due to the HDD failure message DrivePool evacuated all data from that bad disk? Now I have no space to even download a file, and I'm not sure what to do. I also have things in my DrivePool that don't need to be duplicated, to save space on the pool - can I manually disable duplication on certain folders without affecting anything else? Please advise. Thank you all. (Side note: this DrivePool houses ONLY my Plex media collection and is used for nothing else.)
  11. I have reached the limit of installable drives - I am pretty sure you know this situation. My motherboard can handle 6 drives and I have installed one SSD and five WD Reds. Because my data appetite is still vast, I have to buy my first storage controller. I have already read a lot of threads and reviews and decided to invest my money in a controller with an LSI chipset. By the way, I just need a "simple" storage controller; no RAID or other fancy functions are needed. I am fine with investing some extra money in the newest chipset generation, because I want to use the controller for many years. So far so good - I am completely new to SAS, HBA, JBOD etc., therefore I need expert advice. I have already identified two potential storage controllers:
    [1] Intel RAID Controller RS3WC080 | Specs > http://ark.intel.com/products/77345/ [sorry, cannot link them]
    [2] Intel RAID Controller RS3UC080 | Specs > http://ark.intel.com/products/76066/ [sorry, cannot link them]
    As far as I understand, I have to look for a storage controller with a JBOD mode, because I do not need the RAID functions and simply want to attach new drives. Am I right that [2] is the right choice for my setup? Are there any issues/problems I might face when installing the storage controller? Do I have to take anything into account when I add the new drives to my (lovely) pool? Thanks so much for your help and patience!
  12. Version 2.2.2.934. DrivePool error: "Error Adding Drive: cannot add the same disk to the pool twice". Hello, I need help resolving this error so that the disk with drive letter I: (as shown in the attached screenshot) can be returned to the pool. The error occurs when clicking the "+ Add" link. Drive I: had stopped being recognized by DrivePool after the system came out of sleep. Thank you.
  13. Hi folks: I am running DrivePool and Scanner. One of the drives dropped out of the pool according to DrivePool, but I got no notification from either DrivePool or Scanner that there was a problem, although notifications were set up. I removed the drive from the pool in DrivePool and checked it under Disk Management (Windows 10) - it seems to want to be reinitialized? I attempted this, as it should be a mostly empty drive, but I get an I/O error and it can't be initialized. Is this a DrivePool/Scanner issue or more likely a motherboard issue? Any assistance/comments would be appreciated - I'm especially wondering why I got no notification from DrivePool that the drive was missing (although it indicated it was)... until I removed it from the pool. Shortly thereafter it said the drive was not missing - but the drive still requires initialization, which can't seem to be accomplished? Regards, Dave Melnyk
  14. I've been seeing quite a few requests about knowing which files are on which drives, in case of needing a recovery for unduplicated files. I know dpcmd.exe has some functionality for listing all files and their locations, but I wanted something that I could "tweak" a little better to my needs, so I created a PowerShell script to get me exactly what I need. I decided on PowerShell, as it allows me to do just about ANYTHING I can imagine, given enough logic. Feel free to use this, or let me know if it would be more helpful "tweaked" a different way...

    Prerequisites:
    - You gotta know PowerShell (or be interested in learning a little bit of it, anyway).
    - All of your DrivePool drives need to be mounted as a path (I chose to mount all drives as C:\DrivePool\{disk name}). Details on how to mount your drives to folders can be found here: http://wiki.covecube.com/StableBit_DrivePool_Q4822624
    - Your computer must be able to run PowerShell scripts (I set my execution policy to 'RemoteSigned').

    I have this PowerShell script set to run each day at 3am, and it generates a .csv file that I can use to sort/filter all of the results. Need to know what files were on drive A? Done. Need to know which drives are holding all of the files in your Movies folder? Done. Your imagination is the limit. Here is a screenshot of the .csv file it generates, showing the location of all of the files in a particular directory (as an example). Here is the code I used (it's also attached in the .zip file):

        # This saves the full listing of files in DrivePool
        $files = Get-ChildItem -Path C:\DrivePool -Recurse -Force | where {!$_.PsIsContainer}

        # This creates an empty table to store details of the files
        $filelist = @()

        # This goes through each file, and populates the table with the drive name, file name and directory name
        foreach ($file in $files) {
            $filelist += New-Object psobject -Property @{
                Drive         = $(($file.DirectoryName).Substring(13,5))
                FileName      = $($file.Name)
                DirectoryName = $(($file.DirectoryName).Substring(64))
            }
        }

        # This saves the table to a .csv file so it can be opened later on, sorted, filtered, etc.
        $filelist | Export-CSV F:\DPFileList.csv -NoTypeInformation

    Let me know if there is interest in this, if you have any questions on how to get this going on your system, or if you'd like any clarification of the above. Hope it helps! -Quinn

    gj80 has written a further improvement to this script: DPFileList.zip
    And B00ze has further improved the script (Win7 fixes): DrivePool-Generate-CSV-Log-V1.60.zip
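    As an illustrative follow-up, once the CSV exists it can be queried straight from PowerShell; this is a minimal sketch, and the 'Disk1' value is a placeholder for whatever your mount-point naming scheme puts in the Drive column:

        # Load the report and list every file recorded on one particular disk, grouped by folder.
        $report = Import-Csv 'F:\DPFileList.csv'
        $report | Where-Object { $_.Drive -eq 'Disk1' } |
            Sort-Object DirectoryName, FileName |
            Format-Table Drive, DirectoryName, FileName -AutoSize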
  15. Teknokill

    Lost everything

    I went to open Calibre and an error message popped up saying that the Calibre library location isn't available. I then noticed that there was no DrivePool. I opened StableBit DrivePool and saw that one disk had gone bad, but instead of having all of my other 7 disks, I don't even have a DrivePool. The StableBit DrivePool program was asking me to start a new pool, which should not have happened. I decided to just begin another pool, but as I added the disks to it, none of the folders and files were shown. The disks look like they were all wiped clean; Windows Disk Management reflects the same thing. I would like to recreate my pool, but I guess the pool configuration information cannot be located or indexed. Is this fixable? The disks only show a PoolPart folder and nothing else. I have a very good hard drive restore program - should I just scan each disk and try to recover all of my files? This looks like a nightmare at this point. I did notice, when I discovered I no longer had my files, that the StableBit service was not running. I restarted the service, which still resulted in not being able to see any of my files or folders.
  16. Hi there, Is it possible to use a DrivePool made up of an SSD (1TB) and an HDD (10TB) as the cache drive for a CloudDrive? I would love to use the SSD Optimizer plugin with DrivePool to create a fast and large cache drive. This would give the best of both worlds: newer and more frequently used files would be stored on the SSD, older files would slowly be moved to the HDD, and you would only need to download from the CloudDrive if you are accessing an older file that is not in the cache... Is this possible?
  17. I've just finished coding a new balancing plugin for StableBit DrivePool; it's called the SSD Optimizer. This was actually a feature request, so here you go. I know that a lot of people use the Archive Optimizer plugin with SSD drives, and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario. The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus your SSDs serve as a kind of super-fast write buffer for the pool. The new functionality of the SSD Optimizer is that it can now fill your archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "feeder" disks. Check out the attached screenshot of what it looks like.
    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt
    Download: http://stablebit.com/DrivePool/Plugins
    Edit: Now up on stablebit.com, link updated.
  18. Hello, I had asked this question in the CloudDrive forum - the short version is, can you pool two machines together? - and it was suggested I try my luck here instead. I'm sure the answer is "no", but can I pool two different machines together into one pool? I know I can do this: create pool --> place pool in a virtual cloud drive --> place that cloud drive into a pool on the second machine, because I did exactly that. However, if there is existing data in the pool, it does not get reflected in the cloud drive; it can only use the empty space to create the virtual drive. If it is possible to see the existing data of a pool in a cloud drive, I'd be interested in knowing how to do that. Thanks, TD
  19. Hello, I have had DrivePool for about a year now and have just installed a second copy on another physical machine. Each machine has ~12TB of data on it. Is it possible to use CloudDrive to put the two pools together if data already exists on each? I am thinking of creating a 1TB cloud drive on machine ONE, pointing it to a Windows share on machine TWO. Then I want to add the CloudDrive to the pool on machine ONE... will that show me the existing data from TWO in the pool on ONE? Can the CloudDrive be smaller than the data on TWO? Thanks for any help you can provide! TD
  20. Hi! I've got the package, so I get the extra details in DrivePool when highlighting a drive. The latest drive I added was started in my USB3 dock whilst I set it up (to do the checks before adding it to the pool), and it's now in the server on a SATA connection. But it still says "this is an external drive", as per the attached image. Is there any way to correct this? Regards,
  21. Painted

    Error Removing Drive

    Hi, I've got a drive in my pool that Scanner reports as having an unhealthy file system, and repairs have been unsuccessful. So I added another drive of the same size to the pool and attempted to remove the damaged drive (thinking this would migrate the data, after which I can reformat the original drive and re-add it to the pool). I'm getting "Error Removing Drive" though, and the detail info says "The file or directory is corrupted and unreadable". Yet I can reach into the PoolPart folder and copy at least some* of the files (every test I've done copies fine). How do I migrate whatever I can recover manually? Or is there a step I'm missing to get DrivePool to do this for me automatically?
  22. Removing a pool apparently causes my server to reboot, so I have to repeat the removal process. I'm positive about the cause, because it has happened three times already. I have checked all 3 options on the remove menu, but it still reboots my PC, so what should I do? I also tried searching for and deleting files inside the pool, but it doesn't help. The drive currently looks like this (screenshot) - this is after the reboot. I have also unticked both options in the Drive Usage Limiter plugin since posting this, but I haven't tried removing again.
  23. Hello everyone! I plan to integrate DrivePool with SnapRAID and CloudDrive and would like your input on the setup itself. My end goal is to pool all data drives together under one drive letter, for ease of use as a network share, and also have the data protected from failures both locally (SnapRAID) and in the cloud (CloudDrive).

    I have the following disks:
    - D1 3TB, D: drive (data)
    - D2 3TB, E: drive (data)
    - D3 3TB, F: drive (data)
    - D4 3TB, G: drive (data)
    - D5 5TB, H: drive (parity)

    Action plan:
    - create LocalPool X: in DrivePool with the four data drives (D:-G:)
    - configure SnapRAID with disks D:-G: as data drives and disk H: as parity
    - do an initial sync and scrub in SnapRAID
    - use Task Scheduler (Windows Server 2016) to perform daily syncs (SnapRAID.exe sync) and weekly scrubs (SnapRAID.exe -p 25 -o 7 scrub) - see the sketch below
    - create CloudPool Y: in CloudDrive, with a 30-50GB local cache on an SSD, to be used with G Suite
    - create HybridPool Z: in DrivePool and add X: and Y:
    - create the network share pointing at X:

    Is my thought process correct in terms of protecting my data in the event of a disk failure (I will also use Scanner for disk monitoring) or disks going bad? Please let me know if I need to improve the above setup or if there is something I am doing wrong. Looking forward to your feedback, and thank you very much for your assistance!
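    For the Task Scheduler step, a minimal sketch of how the two jobs could be registered from PowerShell (the snapraid.exe path, task names and times are placeholders, not part of the original plan):

        # Daily sync at 03:00, and a weekly scrub of 25% of the array (blocks older than 7 days) on Sundays at 04:00.
        $sync  = New-ScheduledTaskAction -Execute 'C:\SnapRAID\snapraid.exe' -Argument 'sync'
        $scrub = New-ScheduledTaskAction -Execute 'C:\SnapRAID\snapraid.exe' -Argument '-p 25 -o 7 scrub'
        Register-ScheduledTask -TaskName 'SnapRAID Sync'  -Action $sync  -RunLevel Highest `
            -Trigger (New-ScheduledTaskTrigger -Daily -At 3am)
        Register-ScheduledTask -TaskName 'SnapRAID Scrub' -Action $scrub -RunLevel Highest `
            -Trigger (New-ScheduledTaskTrigger -Weekly -DaysOfWeek Sunday -At 4am)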
  24. Hello. I think I might have encountered a bug in DrivePool's behavior when using torrents, shown here on a 5 x 1TB pool. When a new file is created on a disk that is part of the pool, and that file reserves X amount of space in the MFT but does not preallocate it, DrivePool adds that X amount of space to the total space available on that disk. DrivePool reports the correct HDD capacity (931GB) but the wrong volume size (larger than is possible on that particular HDD). To be clear, the file is not created "outside" of the pool and then moved onto it; it is created on the virtual disk (in my case E:\Torrent\...) on whichever HDD DrivePool decides to put it. The reported capacity goes back to normal after deleting that file. (A small repro sketch follows below.)
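    A minimal repro sketch, under the assumption that the torrent client is creating sparse files (the path and 100GB size are illustrative, not from the original report):

        # Create a sparse file on the pool and give it a large logical size without writing any data,
        # then check whether the pool's reported capacity inflates the same way.
        $path = 'E:\Torrent\preallocation-test.bin'
        New-Item -ItemType File -Path $path | Out-Null
        fsutil sparse setflag $path                     # mark the file sparse so extending it allocates no clusters
        $fs = [System.IO.File]::Open($path, [System.IO.FileMode]::Open)
        $fs.SetLength(100GB)                            # logical size grows; on-disk allocation stays near zero
        $fs.Close()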
  25. Hi, I decided to give DrivePool a try to see how I like it, and almost immediately after installing it, setting up a trial activation, and fixing an issue I had with my BIOS settings, I get a notification saying I must activate to continue. All it says is "Get a trial license or enter your retail license code" and "Get a license", with all other UI blocked. Clicking "Get a license" shows me a screen with "Your trial license expires in 29.7 days" and asks for an Activation ID. I tried uninstalling and reinstalling, and got the same thing. Uninstalling failed to remove the pooled drive, so I'm not really sure what's going on there.