Covecube Inc.
  • Announcements

    • Christopher (Drashna)

      Getting Help   11/07/17

      If you're experiencing problems with the software, the best way to get ahold of us is to head to https://stablebit.com/Contact, especially if this is a licensing issue.    Issues submitted there are checked first, and handled more aggressively. So, especially if the problem is urgent, please head over there first. 

Christopher (Drashna)

Administrators
  • Content count

    8999
  • Joined

  • Last visited

  • Days Won

    232

Everything posted by Christopher (Drashna)

  1. Folder Placement Rules - Who would've thought!

    Glad to hear it! Honestly, I couldn't give you a good answer here, for two reasons. Because of how Read Striping works, it doesn't always read from both disks. Sometimes, it will read from a single disk, if one of the disks is "busier". So it may not be able to fully stripe here. And even when it is, there are a number of factors that can affect this, so it can be very system and load dependent. I only access stuff over the network, which is 1Gb networking for me. So the max is 125MB/s, not counting overhead. Most of my drives can hit 180-200MB/s reads, especially since I'm using 64KB clusters (and I'm on ReFS). I can do some testing, but again, this is system and load dependent. So "results will always vary". And you can read a bit more about read striping here: http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Performance Options Haha, yeah. That's how/where they're stored. But getting them working is a heck of a lot more difficult. Also, you shouldn't use SYMLINKs. These are meant to be resolved on the client side. So if a SYMLINK points to "C:\Windows\System32", the client resolves that, NOT the server. You may see how this could cause IMMEDIATE issues. Junctions are what you want to use 99% of the time. As for the slowdown, I think the limit was much higher, because of how they're handled, driver side. But I'd have to ask Alex (the Dev) about that. Sadly, no hard links though. So no Plex DB on the pool.
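The gigabit ceiling quoted in this post is simple arithmetic; a quick sketch (the 180-200MB/s disk figure is just the number from the post above):

```python
# Back-of-the-envelope check of the throughput figures in the post.
# Gigabit Ethernet moves 1,000,000,000 bits/s; divide by 8 for bytes.
GIGABIT_BPS = 1_000_000_000

def link_ceiling_mb_s(bits_per_second: int) -> float:
    """Theoretical payload ceiling in MB/s, ignoring protocol overhead."""
    return bits_per_second / 8 / 1_000_000

network_cap = link_ceiling_mb_s(GIGABIT_BPS)  # 125.0 MB/s
single_disk_read = 180                        # MB/s, figure from the post

# The network cap is below a single disk's read speed, so over 1Gb
# networking, read striping across two disks cannot show up as extra
# throughput: the link is the bottleneck either way.
print(min(network_cap, single_disk_read))     # 125.0
```

This is why "results will always vary": whether striping helps depends on which of the link, the disks, or the load is the limiting factor at that moment.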
  2. drives not balancing

    It should be an MSI file. Run it, and it will install. Then open the StableBit DrivePool UI, click on "Pool Options"/"Manage Pool", and then select "Balancing...". This will open a new window. Open the "Balancers" tab on that window, and enable the balancer in question.
  3. "Error Removing Drive - There is not enough space on the disk"

    Welcome!
  4. Issue with Beta 890

    Ah, okay. So it may have been a weird issue that was fixed, as well. If it does come back, please do let us know.
  5. Drive not showing up to add to pool

    I'm glad to hear it! And yeah, I think I've seen similar on my system before.
  6. Brand new CT500 SSD's - one keeps triggering Scanner error

    I've created a ticket/bug for this: https://stablebit.com/Admin/IssueAnalysis/27745 Unfortunately, I don't have an ETA on this.
  7. Drive not showing up to add to pool

    Run the StableBit Troubleshooter, and then restart the StableBit DrivePool service. http://wiki.covecube.com/StableBit_Troubleshooter Use "3473" for the Contact ID. If that doesn't help, then reset the settings. http://wiki.covecube.com/StableBit_DrivePool_Q2299585B
  8. Drive just went RAW

    Glad to hear it! This is part of why we recommend duplication, in fact. If something does go wrong, it's a simple fix 90% of the time. If it's pushed/highlighted/etc, then that's what you want. If it's not scanning every 30 days, then it should tell you why. Or it is outside of the work window.
  9. Brand new CT500 SSD's - one keeps triggering Scanner error

    Could you let me know what the BitFlock ID is? Either here, or in a PM? And yeah, it's possible that the firmware does something odd with the SMART data. This is super common on SSDs.
  10. Folder Placement Rules - Who would've thought!

    Glad to hear it! And yeah, StableBit DrivePool uses kernel/UNC paths for the drives, so it doesn't care about the mount points. It's nice, isn't it? That said, we do recommend mounting to folder paths, so that the drives are easy to recognize and can be accessed easily (or more specifically, so you can easily run CHKDSK, as it does allow for folder mount paths). Fantastic! And yeah, it's not going to get RAID-like speeds, because the IO isn't completely split between drives. But glad to hear that it is a nice boost.
  11. Issue with Beta 890

    To make sure, build 890 has this issue, but 904 does not?
  12. "Error Removing Drive - There is not enough space on the disk"

    903 should be fine then. 904 is using a different cert for stuff, but isn't a significant change otherwise. http://dl.covecube.com/DrivePoolWindows/rc/download/changes.txt
  13. Keep a defined amount of files on SSD drive?

    It may be the balancing settings; it gets it "close enough". Make sure that you have no other balancers enabled (except maybe the StableBit Scanner balancer), and set the ratio (on the main page) to "100%". That should help.
  14. Unknown HardDiskVolume number

    Okay. I recommend CHKDSK, as it can fix "odd issues" sometimes. And since not everyone does this, it's worth saying, IMO. Yeah, but these volume numbers may not be what are used elsewhere. For instance, try running "fltmc volumes" on the system, and check that. I'd post mine, but with 30 disks ... it's cluttered. That's normal, because of how the disks work. IIRC, it's a timing issue. The kernel needs a disk size when all of the disks are mounting, and since we can't query the other disks quite yet, we need an arbitrary number. Everywhere else, the value is correctly displayed, even when this isn't.
  15. Confused about Duplication

    If you have duplication enabled, and no "higher" levels, then it's "x2", so you'd need 2x SSDs. If you have folders set to x4 duplication, then you'd need 4x SSDs, ideally. As for the UI, yeah, that's been a common complaint. It's been flagged already, so maybe in the next release.
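The rule above reduces to "landing drives needed = highest duplication level anywhere in the pool", since real-time duplication writes every copy at once. A minimal sketch (the folder names and levels are made-up examples, not anything from the post):

```python
# With real-time duplication, all copies of a new file are written
# simultaneously, so the "SSD" landing zone needs at least as many
# drives as the highest duplication level configured on any folder.

def ssds_needed(folder_duplication_levels: dict[str, int]) -> int:
    """Minimum count of 'SSD' drives for the SSD Optimizer balancer."""
    return max(folder_duplication_levels.values(), default=1)

# Hypothetical pool: most folders at x2, one critical folder at x4.
levels = {"Documents": 2, "Photos": 2, "Critical": 4}
print(ssds_needed(levels))  # 4 -> ideally four landing drives
```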
  16. Unknown HardDiskVolume number

    Oh, man, the whole disk/volume/partition identification stuff is a nightmare. And a large part of that is because it's inconsistent with other parts of the OS. So determining what is what can lead to serious headaches. That said, I think the volume in question was the pool drive, actually. IIRC, msinfo32 exposes some of this. And you can run "fltmc volumes" to see this as well. But using GoodSync probably overwrote the files, reset permissions, and fixed things. So ... I'm glad to hear it's sorted out. However, just in case, I would recommend checking all the disks (via CHKDSK). Or if you have StableBit Scanner installed, wait for it to run a file system scan.
  17. Duplication Inconsistent

    That's ... odd. But to be honest, everything about how this folder and its contents are handled varies from "a bit odd" to "WTF". So ... as long as it's working now, I'm glad to hear it!
  18. drives not balancing

    Awesome, glad to hear it! And you're very welcome!
  19. Confused about Duplication

    Okay, so, you do want to do what I've posted above. Actually, the RC version of StableBit DrivePool will automatically prefer local disks (and "local" pools) over "remote" drives for reads. So there's nothing you need to do here. As for writes, if real time duplication is enabled, there isn't anything we can really do here. In this case, both copies are written out at the same time. So, to the local pool and the CloudDrive disk, in this case. But the writes happen to the cache, and then are uploaded. There are some optimizations there to help prevent inefficient uploads. No, this is to make sure that unduplicated data is kept local instead of remote. As for the drive size, it will ALWAYS report the raw capacity, no matter the duplication settings. So this pool WILL report 80TB. We don't do any processing of the drive size, because ... there is no good way to do this. Since each folder can have different duplication levels (off, x2, x3, x5, x10, etc.), and the contents may drastically vary in size, there is no good way to compute this, other than showing the raw capacity. There isn't a (good) way to fix this. You could turn off real time duplication, which would do this. But it means that new data wouldn't be protected for up to 24 hours ... or more. Also, files that are in use cannot be duplicated, in this config. So, it leaves your data more vulnerable, which is why we recommend against it. The other option is to add a couple of small drives and use the "SSD Optimizer" balancer plugin. You would need a number of drives equal to the highest duplication level, and the drives don't need to be SSDs. New files would be written to the drives marked as "SSD"s, and then moved to the other drives later (depending on the balancing settings for that pool).
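The point about raw capacity can be made concrete with a little arithmetic. Usable space is only computable if a single duplication level applied to everything; with mixed per-folder levels it depends entirely on what you store. A sketch with made-up folder sizes:

```python
# Why DrivePool reports raw capacity: usable space depends on the
# duplication level of each file stored, which isn't known in advance.

def usable_if_uniform(raw_tb: float, duplication: int) -> float:
    """Usable space is only well-defined if ONE level applies to all data."""
    return raw_tb / duplication

print(usable_if_uniform(80, 2))  # 40.0 TB, if literally everything were x2

# With mixed per-folder levels, the same 80 TB raw pool fills at
# different rates. Hypothetical contents: (level label, copies, TB of files)
mixed = [("off", 1, 10), ("x2", 2, 20), ("x5", 5, 4)]
consumed = sum(copies * tb for _, copies, tb in mixed)
stored = sum(tb for _, _, tb in mixed)
print(consumed, stored)  # 70 TB of raw capacity holds 34 TB of files
```

Since the ratio of copies to files shifts with every write, showing the raw 80TB is the only figure that is always true.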
  20. Confused about Duplication

    That's not what will happen here. This would put files that are not duplicated on the local disks. But duplicated data will only be allowed to be placed on the CloudDrive, and since it has no valid second target elsewhere, that won't work. You need two+ drives for duplicated data. Because by "duplicated", we mean any data that is duplicated. We don't differentiate between original and copy. Both files are valid targets, and handled identically. The settings you've configured say that you want unduplicated data on the local disks (eg, files with only one copy of the file in the pool), and that you want "both" copies of duplicated data on the CloudDrive disk. I'm not sure what you want here, so it's hard to tell what would be best. But ... assuming I am reading this right, you want to keep one copy local, and one on the CloudDrive? Or have some data duplicated to the CloudDrive, and all of the unduplicated data stored locally? If so, then remove the CloudDrive disk from the pool. Then create a new pool. In this new pool, add the existing pool and the CloudDrive disk (it should have only two "disks" in it). Then in the Balancing settings for this "top level" pool, uncheck ONLY the "Unduplicated" option for the CloudDrive disk. And save. This should place all of the duplicated data on both the disks, but only place unduplicated data on the local disk. If this isn't what you want to do, please explain what exactly you're trying to do.
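The placement rule described for that "top level" pool can be sketched as a simple decision function. This is a hypothetical illustration of the logic, not DrivePool's actual code; the member names are made up:

```python
# Top-level pool with two members: the local sub-pool and the CloudDrive
# disk. Balancing settings: "Unduplicated" is unchecked ONLY for the
# CloudDrive member, everything else left checked.

def allowed_targets(is_duplicated: bool) -> list[str]:
    members = ["LocalPool", "CloudDrive"]
    if is_duplicated:
        # Both copies are valid on either member; DrivePool does not
        # distinguish "original" from "copy", so each member gets one.
        return members
    # Unduplicated files may only land on the local member.
    return ["LocalPool"]

print(allowed_targets(True))   # ['LocalPool', 'CloudDrive']
print(allowed_targets(False))  # ['LocalPool']
```

The failing configuration in the quoted post is the case where duplicated data is restricted to the CloudDrive alone: with only one allowed member, there is nowhere to put the second copy.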
  21. drives not balancing

    Honestly, this sounds "fine". It shouldn't rebalance here, since it's not outside of any settings. However, if you want, you can download and install the "Disk Usage Equalizer" balancer, as this will cause the pool to rebalance so it's using roughly equal space on each disk. https://stablebit.com/DrivePool/Plugins
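"Roughly equal space on each disk" has a concrete meaning: the equalizer moves files until each disk sits near the pool-wide average usage. A minimal sketch with made-up disk usages:

```python
# What the Disk Usage Equalizer balancer aims for: each pool disk ends
# up near the average used space across the pool.

def equalize_target(used_gb: list[float]) -> float:
    """Per-disk target after equalizing: the mean used space."""
    return sum(used_gb) / len(used_gb)

used = [900.0, 300.0, 600.0]        # three pool disks, GB used (hypothetical)
target = equalize_target(used)
print(target)                        # 600.0 GB per disk
moves = [u - target for u in used]   # positive = data to migrate off
print(moves)                         # [300.0, -300.0, 0.0]
```

Without this plugin, DrivePool leaves the layout alone as long as no balancer's conditions are violated, which is why the uneven-but-in-range state in the question is "fine".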
  22. Scanner causing hardlocks

    Okay, sorry to hear that. And let me know when you've uploaded the memory dump.
  23. Duplication Inconsistent

    Do this: http://wiki.covecube.com/StableBit_DrivePool_Q5510455 But for the "System Volume Information" folder, rather than the entire disk. Then delete the folder. And then rerun the duplication stuff.
  24. Unknown HardDiskVolume number

    Could you run the StableBit Troubleshooter: http://wiki.covecube.com/StableBit_Troubleshooter And use "3470" for the Contact ID request?
  25. Can't change any setting in I/O performance

    What version are you on? And what settings are you attempting to change? And on which provider(s)?