Covecube Inc.
Showing results for tags 'question'.
Found 6 results

  1. Hello all, I have a Windows Server 2012 R2 machine with a 4TB RAID 1 mirror (2 x 4TB HDDs, using basic Windows software RAID, not Storage Spaces - I can't boot from a Storage Spaces pool) and some other non-mirrored data disks (1 x 8TB, 2 x 3TB, etc.). It is configured with the OS and "Important Data" on the 4TB mirror, and "Less Important Data" on the other non-redundant disks. The 4TB mirror is now nearly full, so I intend either to replace it with a larger mirror, or to replace it with two larger non-mirrored drives and use DrivePool to duplicate some data across them. I'm evaluating whether the greater flexibility of DrivePool is worth moving to, instead of the basic mirror I'm using currently.

    The current config is:
    - C: - redundant bootable partition, mirrored on Disk 1 and Disk 2
    - D: - redundant "Important Data" partition, mirrored on Disk 1 and Disk 2, and backed up by CrashPlan
    - E:, F:, etc. - non-redundant data partitions for "Less Important Data" on Disks 3, 4, etc.

    If I moved to DrivePool, I guess the configuration would be:
    - C: - non-redundant bootable partition on Disk 1
    - D: - DrivePool "Important Data" pool, duplicated on Disk 1 and Disk 2, and backed up by CrashPlan
    - E: - DrivePool "Less Important Data" pool, spread across Disks 3, 4, etc.

    or:
    - C: - non-redundant bootable partition on Disk 1
    - D: - DrivePool pool spread across all disks, with "Important Data" folders set to be stored (and duplicated) only on Disk 1 and Disk 2, and backed up by CrashPlan (or something similar using hierarchical pools)

    I have a few questions about this:
    - Does it make sense to use DrivePool in this scenario, or should I just stick with a normal RAID mirror?
    - Will DrivePool handle real-time duplication of large in-use files, such as 127GB VM VHDs, and if so, is there a write-performance decrease compared to a software mirror?
    - All volumes use Windows data deduplication. Will this continue to work with DrivePool? I understand the pool drive itself cannot have deduplication enabled, but will the drive(s) storing the pool continue to deduplicate the pool data correctly? Related to this, can DrivePool be configured so that if I downloaded a file and then copied it to several different folders, all the copies would most likely be stored on one physical disk, so the OS can deduplicate them?
    - CrashPlan is used to back up some data. Can it back up the pool drive itself (it needs to receive update notifications from the filesystem)? I believe it also uses VSS to back up in-use files, but as this is mostly static data storage, files should rarely be in use, so I may be able to live without that. Alternatively, could I back up the data on the underlying disks themselves?
    - Are there any issues or caveats I haven't thought of here?
    - How does DrivePool handle long-path issues? Server 2012 R2 doesn't have long-path support, and I do occasionally run into path-length problems. I can only assume this gets worse when pooled data is stored in a folder on each disk, effectively increasing the path length of every file?

    Thanks!
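On the long-path point: one quick way to gauge exposure is to scan the pool and measure each full path against the classic 260-character MAX_PATH limit. A minimal sketch in Python (the `P:\` pool drive letter is an assumption; substitute your own mount point):

```python
import os

POOL_ROOT = "P:\\"  # hypothetical pool drive letter - adjust to your setup
MAX_PATH = 260      # classic Windows path limit, drive letter included

def find_long_paths(root, limit=MAX_PATH):
    """Yield (length, path) for files whose full path meets or exceeds limit."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if len(full) >= limit:
                yield len(full), full

if __name__ == "__main__":
    for length, path in sorted(find_long_paths(POOL_ROOT), reverse=True):
        print(length, path)
```

Running this against the pool drive (rather than the hidden per-disk folders) shows the path lengths applications actually see.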
  2. I have a DrivePool with 2x folder duplication for my projects. I was initially blaming VS 2019 for possible bugs. However, after I moved the project to my SSD C: drive, outside DrivePool, everything works again. Things I found broken:
    - Slow to no detection of syntax errors
    - Quick Actions for code refactoring do nothing

    System Information:
    - Microsoft Windows 10 Pro, Version 2004, 10.0.19041 Build 19041
    - DrivePool 2.2.3.1019
    - Microsoft Visual Studio Community 2019, Version 16.7.1 (VisualStudio.16.Release/16.7.1+30406.217)
    - Microsoft .NET Framework Version 4.8.04084

    Update: I have found a workaround - share the drive on the network and work from the network share.
  3. Hello, I'm a total noob and I haven't even built my pool yet, but I have a question that's going to determine whether DrivePool will work for me or not. I know that hardlinks are not supported by DrivePool because of how hardlinks work: the files must be on the same drive and the same volume for the links not to break. My question is this: is it possible to use the Ordered File Placement plugin to ensure that my files and their hardlinks stay on the same physical drive until it fills up, then move on to the next drive and do the same thing, and so on? Most of the time the files and the hardlinks get created at the same time, so I don't see why this wouldn't work, but I want to make sure that it will, and exactly how to set it up. Thanks
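Since hard links can only exist within a single volume, you can verify after the fact that a file and its link still resolve to the same on-disk data by comparing device and inode/file-index numbers. A small standard-library sketch in Python (the path names are hypothetical):

```python
import os

def is_same_file(path_a, path_b):
    """True if both paths are hard links to the same underlying file.

    st_dev identifies the volume and st_ino the file record; on NTFS,
    Python reports the file index through st_ino as well.
    """
    a, b = os.stat(path_a), os.stat(path_b)
    return (a.st_dev, a.st_ino) == (b.st_dev, b.st_ino)
```

If this returns False for a pair that used to be linked, the link was broken (for example, by the copies ending up on different physical disks).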
  4. Hi guys, recently one of the 6TB drives in my pool died. I had 2 x 6TB drives and lost half my data. I have now installed 3 x 10TB drives in my NAS (HP Gen8). I was originally looking at RAID 5, i.e. 2 x 10TB of capacity plus one drive for parity, but it was suggested I use DrivePool instead. How exactly would I set this up so that I have drive-failure redundancy? My setup is:
    - 3 x 10TB WD Red
    - 1 x 500GB SSD boot drive
    - 16GB RAM
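For context on the capacity trade-off being weighed here: both RAID 5 and whole-pool 2x duplication survive the loss of any one drive, but the usable space differs. A back-of-the-envelope sketch of the arithmetic only (not DrivePool's actual accounting):

```python
drives_tb = [10, 10, 10]

# RAID 5: one drive's worth of capacity goes to parity.
raid5_usable = sum(drives_tb) - max(drives_tb)

# DrivePool with whole-pool 2x duplication: every file is stored twice.
duplication_usable = sum(drives_tb) / 2

print(raid5_usable)        # 20 TB usable
print(duplication_usable)  # 15.0 TB usable
```

Duplication trades some capacity for per-folder flexibility and plain-NTFS disks that remain readable on their own.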
  5. Hey everyone, new to the DrivePool game here - absolutely loving this software. It has made storage and backup so much easier. Sorry if these are stupid questions:

    1. I'm a bit confused about how "folder duplication" works. I have one of my movie folders set to duplicate; it's around 2.5TB. However, when I turn on duplication, the duplicated figure across all my drives is around ~5.7TB. I don't have any other folders set for duplication, so why is this figure so high? I was expecting it to be around the same size as the folder I was duplicating (2.5TB). Is the duplication figure the size of the original folder plus the size of the duplicate?

    2. Can I safely add my C: drive to my pool? It's a 1TB SSD, and I was hoping to harness some of its speed as a cache drive, as it's mostly empty right now. Is this safe/possible?

    Thanks again for the phenomenal product - probably the happiest I've been with a software purchase in a long time.
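On question 1: if the duplicated figure counts both copies of each file (original plus duplicate) rather than only the extra copy - an assumption about how the UI reports it - the expected number for a 2.5TB folder works out as:

```python
folder_tb = 2.5
duplication_factor = 2  # 2x folder duplication keeps two copies of each file

# Assumption: the reported figure includes every copy, not only the extras.
reported_tb = folder_tb * duplication_factor
print(reported_tb)  # 5.0
```

That accounts for ~5TB of the ~5.7TB observed, though not all of it.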
  6. lateworm

    macOS port?

    Could there be a macOS version of this at some point? I could see it being really useful to a lot of developers who are on MacBooks. Even with Fusion Drives and SSDs, I often hear complaints about lack of space and speed, and about having to carry external storage, because people have big, complex dependency trees that use absolute paths and are very tedious to sync.