Covecube Inc.
  • Announcements

    • Christopher (Drashna)

      Login issues   11/07/17

If you have issues logging in, make sure you use your display name and not the "username" or email. Or head here for more info: http://community.covecube.com/index.php?/topic/3252-login-issues/
    • Christopher (Drashna)

      Getting Help   11/07/17

If you're experiencing problems with the software, the best way to get hold of us is to head to https://stablebit.com/Contact, especially if this is a licensing issue. Issues submitted there are checked first and handled with higher priority. So, especially if the problem is urgent, please head over there first.

All Activity


  1. Today
  2. Yesterday
  3. Recommended server backup method?

    Wow, that is really clever, and a great way to work around your 2TB backup limitation. Not all my data is duplicated, and not everything is in the pool. I'll have to give your approach some thought to see if it will (easily) work for me. Do you know if there's any performance hit with adding an additional level of drive abstraction, like that? Thanks for sharing!
  4. Thanks for the great reply. I'll return here if I need more help in further optimizing the setup!
  5. Wanting to build a server 40-45 Bay

    Ok, sorry. They did work for me.
  6. Drivepool with Local Disks and Parity via CloudDrive

    Yes, absolutely! For reads, see http://wiki.covecube.com/StableBit_DrivePool_Q4142489 and update to the latest beta; it will do this automatically. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.878_x64_BETA.exe As for writes, there isn't a good way to do this, aside from the SSD Optimizer. The reason is that StableBit DrivePool writes all copies out at the same time (real-time duplication), so there isn't really a way to stop this. However, it shouldn't be a problem, IMO.
  7. Hard Shutdown and Re-Measuring

    Well, first, I'd recommend opening a ticket. And do this: http://wiki.covecube.com/StableBit_DrivePool_System_Freeze
  8. Wanting to build a server 40-45 Bay

    Shhh, we don't talk about those. Mostly because they have nothing to prevent or reduce vibration, which can shorten drive lifespan. And for the price, I'd rather buy Supermicro or similar.
  9. My Rackmount Server

    hahaha, the one time that I don't proofread.... That, and I'm in the process of switching to the Colemak keyboard layout. S and R are right next to each other.
  10. Thanks Drashna, I made that now. Can I cut & paste files from my first drivepool and put them under the PoolPart folder that was just created inside it (fast), or do I need to move them from drive to drive the usual way (slow)? Second question: if I now copy my data into this "hybridpool", it seems to spread partly into the localpool and partly into the cloudpool. How can I have the cloudpool part only be there for parity, and have disk reads and initial writes go to the localpool side (so that basically the cloudpool is just for the copies and nothing more)?
  11. Hard Shutdown and Re-Measuring

    Occasionally my server does not shut down completely. It gets to a black screen, but the power never actually goes off without holding the power button. When this happens, on reboot DP goes through the lengthy re-measuring/checking of the pool(s) again. I have not determined the root cause of why the system does not shut down cleanly; I don't see anything obvious in the logs, and I haven't made any software updates I can relate to this (Windows Update is turned off). Is there something I can do before shutdown to ensure DP gets stopped cleanly? Can I stop the StableBit service and then shut down? Would that help? I'm guessing not? Is it the file system itself that is marking that the pool needs to be re-measured? Win 7 x64 SP1, DP 2.2.0.852
  12. Wanting to build a server 40-45 Bay

    http://www.45drives.com/products/ https://www.backuppods.com/collections/backblaze-storage-pod-6-0
  13. My Rackmount Server

    Well - was adding discs but whatever works for you!
  14. Thanks for the reply Sir. 5 download threads, and 5 upload threads, or 5 total?
  15. Drivepool with Local Disks and Parity via CloudDrive

    You need two drives for "duplicated". Otherwise, you're telling the system that there is only one disk that is valid for both copies of the data. The best way to do this would be to create two pools: one with F:, E:, and D:. Then add that pool and I: to another pool. Enable duplication on that top-level pool, but not on the pool inside it.
  16. Feature request: Multiple cache drives

    No benefit to doing this, based on how the cache works (see below). ... I'd recommend RAID10, then. Or something that won't lose all of the data if it fails, because there may still be data to upload. No point: StableBit CloudDrive stores the cache in one large file. It isn't/can't be spread between disks.
  17. My Rackmount Server

    Every time I add a new dick disk >:( And I've posted about it on the MS forums, "sorry, we cannot reproduce it"......
  18. Windows 10: BSOD caused by covefs.sys

    Yup. Otherwise, it's more complicated. Yup, Win8 and up... it's much harder to get into, in part because the OS boots so bloody fast. And the pool still loads in safe mode. So... it may not help.
  19. Using potential faulty disk only for duplicated files

    Sorry. The logs don't show anything specific. If you could, enable logging again and make sure to trigger the error. Note and report the time that it occurs at. Also, make sure you're on the latest beta before doing this. http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.878_x64_BETA.exe
  20. OneDrive for Business question - Drive or SP / Thread limit?

    IIRC, there isn't really a difference other than how you interface with it. As for threads, 5 is probably a good number. If you start getting throttled, then it's too high. Also, make sure you use this version: http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.0.2.957_x64_BETA.exe
  21. Last week
  22. I imagine what might be happening is that my SSD "buffer" fills up quite fast, and the overflow is just randomly shoved onto the remaining disks and maybe balanced later to respect the set rules (instead of never utilizing the cloud drive called "parity").
  23. Hey, I've set up a small test with 3x physical drives in DrivePool: 1 SSD drive and 2 regular 4TB drives. I'd like to make a setup where these three drives can be filled up to the brim and any contents are duplicated only on a fourth drive: a CloudDrive. No regular writes or reads should be done from the CloudDrive; it should only function as parity for the 3 drives. Am I better off making a separate CloudDrive and scheduling an rsync to mirror the DrivePool contents to the CloudDrive, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. What I tried so far didn't work too well: some of the files I was moving were immediately written to the parity drive, even though I set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but this is an annoying thing to have to maintain whenever new folders are added). 1) 2)
  24. Recommended server backup method?

    Sure. So DP supports pool hierarchies, i.e., a pool can act as if it were an HDD that is part of another pool. This was done especially for me. Just kidding. It was done to make DP and CloudDrive (CD) work together well (but it helps me too).

    In the CD case, suppose you have two HDDs that are pooled and you use x2 duplication, and you also add a CD to that pool. What you *want* is one duplicate on either HDD and the other duplicate on the CD. But there is no guarantee it will be that way: both duplicates could end up on the HDDs. Lose the system and you lose everything, as there is no duplicate on the CD.

    To solve this, add both HDDs to Pool A. This pool is not duplicated. You also have the CD (or another pool of a number of HDDs) and create unduplicated Pool B with that. If you then create a duplicated Pool C by adding Pool A and Pool B, then DP, through Pool C, will ensure that one duplicate ends up on (the HDDs in) Pool A and the other duplicate ends up in Pool B. This is because DP will, for the purposes of Pool C, view Pool A and Pool B as single HDDs, and DP ensures that duplicates are not stored on the same "HDD". Next, for backup purposes, you would back up the underlying HDDs of Pool A; you would be backing up only one duplicate and still be certain you have all files.

    Edit: In my case, this allows me to back up a single 4TB HDD (partitioned into 2 x 2TB partitions) in WHS2011 (which only supports backups of volumes/partitions up to 2TB) and still have it duplicated with another 4TB HDD. So, I have:

    Pool A: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    Pool B: 1 x 4TB HDD, partitioned into 2 x 2TB volumes, both added, not duplicated
    Pool C: Pool A + Pool B, duplicated.

    So, every file in Pool C is written to both Pool A and Pool B, and is therefore on both 4TB HDDs in the respective pools. Next, I back up both partitions of either HDD and I have only one backup, with the guarantee of having one copy of each file included.
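    The nested-pool placement logic described above can be sketched in Python. This is a hypothetical model for illustration only, not DrivePool's actual implementation; the pool and partition names are made up:

    ```python
    # Model: a duplicated top-level pool treats each member (disk or nested
    # pool) as a single "drive", so x2 duplication always puts one copy on
    # each member and never lands both copies on the same member.

    def place_duplicates(top_pool):
        """Return one (member, disk) placement per member of the top pool."""
        placements = []
        for member in top_pool["members"]:
            # A nested pool stores its copy on whichever of its disks has
            # room; here we just pick the first disk for illustration.
            disk = member["disks"][0]
            placements.append((member["name"], disk))
        return placements

    # Pool A and Pool B each wrap one 4TB HDD split into two 2TB volumes.
    pool_a = {"name": "Pool A", "disks": ["HDD1-part1", "HDD1-part2"]}
    pool_b = {"name": "Pool B", "disks": ["HDD2-part1", "HDD2-part2"]}
    # Pool C contains the two sub-pools; duplication is enabled here only.
    pool_c = {"members": [pool_a, pool_b]}

    for pool_name, disk in place_duplicates(pool_c):
        print(f"copy of photo.jpg -> {pool_name} ({disk})")
    ```

    Because each duplicate lands in a different sub-pool, backing up the disks of just one sub-pool is guaranteed to capture exactly one copy of every file.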
  25. Recommended server backup method?

    @Umfriend: I'm with @ikon in that I'm intrigued by your suggestion. Would you mind explaining it a bit further?
  26. Feature request: Multiple cache drives

    I'm going to test putting the cache into a drivepool that combines 3 physical drives.
  27. My Rackmount Server

    I still have to refer to this thread when adding a new drive to my WS2012r2 server to get the Dashboard working (so I'll put all the bits in so I can find it next time!!!):
    1) Add drive(s)
    2) From an elevated CMD prompt, run "wbadmin delete catalog"
    3) Restart WseMgmtSvc
    4) Run the Dashboard and re-set up the server backup
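    Steps 2 and 3 above can be scripted. The sketch below is a hypothetical helper (not part of StableBit's tooling) that builds the commands and prints them as a dry run; the real commands would need to run from an elevated prompt on the actual server:

    ```python
    # Dry-run sketch of the catalog-reset steps; set dry_run=False on the
    # server (elevated) to actually execute them.
    import subprocess

    STEPS = [
        ["wbadmin", "delete", "catalog", "-quiet"],  # 2) clear the backup catalog
        ["net", "stop", "WseMgmtSvc"],               # 3) restart the WSE
        ["net", "start", "WseMgmtSvc"],              #    management service
    ]

    def run_steps(dry_run=True):
        for cmd in STEPS:
            print("would run:" if dry_run else "running:", " ".join(cmd))
            if not dry_run:
                subprocess.run(cmd, check=True)

    run_steps()  # dry run: prints the commands without executing them
    ```

    Adding the drive (step 1) and re-creating the backup schedule in the Dashboard (step 4) still have to be done by hand.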
  28. Feature request: Multiple cache drives

    You could always stripe (RAID-0) them…