Reputation Activity
-
Shane reacted to dominator99 in drivepool access
PS:
I don't like the term 'duplication' which suggests 2 copies of files/folders are created. I store 3 copies therefore 'replication' is a more appropriate term for more than 2 copies of files/folders. Call me pedantic!
-
Shane reacted to Sonicmojo in Removing CREATOR OWNER permissions from Drivepool root drive locks out Admin users when creating folders for shares
Found out the cause. On Windows Server 2022 (and presumably 2019), if you are logged in with ANY account that is NOT the actual "Administrator" - even if that account is a member of the Administrators group and has admin rights - it appears that File Explorer de-escalates itself and requires "approval" for simple stuff like my screencap above.
I was chatting with a user on Reddit who asked me - if you click Continue on my screencap - does the folder get renamed? And yes it does. So this is simply a User Account Control thing. Which BTW - I never saw on my other 2019 server - because you guessed it - I am running using the real Administrator account on that box.
So now I just did the same on this new server. This is a home network so I am not worried about the "enterprise best practices" of never using the Admin account.
One less box to click through a few hundred times is good enough for me.
Solved.
S
-
Shane reacted to DenSataniskeHest in disk not shown in drivepool suddenly
Thanks for the answer. I managed to get it converted back with a free tool: https://www.hdd-tool.com/download.html
without data loss.
Maybe that can help someone else.
-
Shane reacted to KeithA in DrivePool causing Windows 11 to hang at logon
My issue is fixed!
I had an accumulation of reparse files in the .covefs folder. The quantity must have gotten to a number that the program could no longer deal with, causing the hangs. The root cause was symbolic links. Once I moved the folder out of the hidden one, everything started acting as it should. The links no longer work, but that is fine. I will figure a way around it. I stumbled onto this thanks to another forum post;
The problem he was facing sounded all too familiar. Thanks to those who responded to my issue. Since these files can cripple a system once they reach a certain quantity, it may be a good idea for the program to throw an error as the number approaches a level that can cause the system to become non-functional. Maybe an entry in the Wiki? I guess it might just be an edge case...
Thanks
-
Shane reacted to one-liner in Expand functionality of file protection / duplication feature
This applies to full-disk array backup, but I chose specific folders to back up in the SnapRAID config. So the parity file would be as large as the size of those folders plus some overhead due to block size; I chose the smallest possible block size here: 32k.
-
Shane got a reaction from Rorick in Is it possible to balance files between two hubs
The only con/issue that comes to mind is that there's no "one-click" way of splitting an existing pool into two pools let alone into a pool of two pools.
If you stick to using the GUI, you have to remove half your drives (problematic if your pool is over half full!), create a new pool using them, then create the new super pool that adds the other two pools, then transfer all your content from the original pool into the super pool (which, because the pools are separate drives as far as Windows is concerned, involves the slow copy-across-drives-then-delete-original process rather than the fast moving-within-a-drive process). But if you don't mind the wait involved in basically emptying half of the drives into the other half and then filling them back up again, it is definitely the simplest procedure.
Alternatively, if you're comfortable "opening the hood" and messing with hidden folders, you can manually seed the super pool instead - it is much quicker but also more fiddly (and thus a risk of making mistakes).
Note also that nested duplication is multiplicative; if the super pool folder that will show up in your hub pools when setting per-folder duplication is x2 and your super pool is itself x2, your total duplication of files in the super pool will be x4. So I'd suggest setting each hub pool's super pool folder to x1, setting the super pool itself to x2 and only then commencing the transfer of your content from the hub pools to the super pool. I hope that makes sense.
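As a worked example of the multiplicative rule above, here is a hypothetical Python sketch (the function name and numbers are illustrative, not part of DrivePool):

```python
# Hypothetical sketch of how nested duplication multiplies copies,
# assuming each pool level simply multiplies the copy count of the
# level below it.

def total_copies(levels):
    """Multiply per-level duplication factors (e.g. x2 nested in x2 -> x4)."""
    result = 1
    for factor in levels:
        result *= factor
    return result

# Super pool folder at x2 inside a hub pool, super pool itself at x2:
print(total_copies([2, 2]))  # 4 copies in total

# Suggested setup: hub pool's super pool folder x1, super pool x2:
print(total_copies([1, 2]))  # 2 copies in total
```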
-
Shane reacted to phhowe17 in Duplication - schedule a specific start time
I am aware that I can set duplication to be delayed, which sets it to happen at "night". This system is a backup target so most activity happens overnight. Is it possible to schedule the duplication tasks to a specific start time e.g. 8 AM when the system is idle?
[edit]
I see that the settings.json file located in C:\ProgramData\StableBit DrivePool\Service contains
"FileDuplication_DuplicateTime": {
"Default": "02:00",
"Override": null
}
Which might be my solution.
Thanks.
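If the Override field accepts the same HH:MM format as the Default (an assumption - worth confirming with StableBit support before relying on it), the edit would presumably look something like this, and would likely need a service restart to take effect:

```json
"FileDuplication_DuplicateTime": {
    "Default": "02:00",
    "Override": "08:00"
}
```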
-
Shane got a reaction from one-liner in Expand functionality of file protection / duplication feature
Re 1, DrivePool does not scrub duplicates at the content level, only by size and last-modified date; it relies on the file system / hardware for content integrity. Some users make use of SnapRAID to do content integrity checking.
Re 2, DrivePool attempts to fulfill any duplication requirements when evacuating a bad disk from the pool. It appears to override file placement rules to do so (which I feel is a good thing, YMMV). However, your wording prompted me to test the Drive Usage Limiter balancer (I don't use it) and I found that it overrides evacuation by the StableBit Scanner balancer even when the latter is set to a higher priority. @Christopher (Drashna)
Re 3, I'd also like to know *hint hint*
-
Shane got a reaction from CyberSimian in Is Windows 7 still supported?
Hi, I'd guess the answer is that if it seems to be working - e.g. you were able to create a pool and it shows up in Explorer and you copied a file to it okay - then you likely won't have problems (besides having to use Windows). That doesn't guarantee that future updates to the program will work on 7 down the line, so you might want to avoid updating unless it's necessary / carefully check the changelog / be ready to revert.
-
Shane reacted to hammy in I’m not too proud to beg - Stablebit software on Linux?
I know the chance of this is near zero, but the most recent Windows screenshotting AI shenanigans is the last straw for me.
The incredible suite of Stablebit tools is the ONLY thing that has kept me using Windows (seriously, it’s the best software ever - I don’t even think that’s an exaggeration).
I will pay for the entire suite again, or hell Stablebit can double the price for Linux, I’ll pay it. Is there ANY chance a Linux version of DrivePool/Scanner would be developed?
-
Shane reacted to MikeRotch in File distribution not optimal. How to see which file(s) are affected?
The issue seemed to resolve itself. I did not have any rules set on D:\System Volume Information. But I didn't have to do the troubleshooting steps you recommended either.
This is very strange. Anyway, thanks for all your help.
-
Shane reacted to Vittorio Zamparella in "Error Adding Drive": cannot add the same disk to the pool twice
It worked!
I did three things:
I evacuated my poolpart.uuid folder (moving content to another folder)
Deleted everything in C:\ProgramData\StableBit DrivePool\Service\Store\Json.
Deleted everything in %AppData%\StableBit DrivePool
I think what did it was the last step, because I had already reset the settings from within the GUI, and I think that has to do with the configuration stored in the Json folder.
Now I'm adding my data back. Hopefully everything will go smoothly from now on.
Thank you!
-
Shane got a reaction from Thronic in Plex buffers when nzbget is downloading to pool
If Plex is accessing the files in the pool via network and the others are local, you could try enabling Manage Pool -> Performance -> Network I/O boost.
If your pool has duplication enabled you could try enabling Manage Pool -> Performance -> Read striping, if it isn't already. Some old hashing utilities have trouble with it, your mileage may vary.
There is also Manage Pool -> Performance -> Bypass file system filters, but that can cause issues with other software (read the tooltip carefully and then decide).
You might also wish to check whether Windows 11 is doing some kind of automatic content indexing, AV scanning or whatever else on the downloaded files that you could turn off / exclude.
-
Shane got a reaction from xelu01 in New to drivepool, good use case
Hi xelu01. DrivePool is great for pooling a bunch of mixed-size drives; I wouldn't use Windows Storage Spaces for that (not that I'm a fan of WSS in general; I have not had good experiences).
As for duplication it's a matter of how comfortable/secure you feel. One thing to keep in mind with backups is that if you only have one backup, then if/when your primary or your backup is lost you don't have any backups until you get things going again. Truenas's raidz1 means your primary can survive one disk failure, but backups are also meant to cover for primary deletions (accidental or otherwise) so I'd be inclined to have duplication turned on, at least for anything I was truly worried about (you can also set duplication on/off for individual folders). YMMV.
Regarding the backups themselves, if you're planning to back up your nas as one big file then do note that DrivePool can't create a file that's larger than the largest free space of its individual volumes (or in the case of duplication, the two largest free spaces) since unlike raidz1 the data is not striped (q.v. this thread on using raid vs pool vs both). E.g. if you had a pool with 20TB free in total but the largest free space of any given volume in it was 5TB then you couldn't copy a file larger than 5TB to the pool.
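The largest-free-space constraint above can be sketched in a few lines of Python (a hypothetical illustration; the function and numbers are mine, not DrivePool's):

```python
# Hypothetical sketch of DrivePool's largest-file constraint: because
# data is not striped, a single file must fit on one volume (and with
# x2 duplication, each of the two copies needs its own volume).

def max_single_file(free_per_volume, duplication=1):
    """Largest file (TB) the pool can hold given per-volume free space."""
    spaces = sorted(free_per_volume, reverse=True)
    # The file and each duplicate must each fit on a separate volume,
    # so the limit is the smallest of the top `duplication` free spaces.
    return min(spaces[:duplication])

free = [5, 4, 4, 4, 3]  # 20 TB free in total, but...
print(max_single_file(free))                 # biggest single file: 5 TB
print(max_single_file(free, duplication=2))  # with x2: limited to 4 TB
```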
-
Shane got a reaction from BIFFTAZ in Is it ok to use space on HDD outside of pool?
That's ok to do, won't cause any harm.
-
Shane reacted to dan66215 in Unable to remove clouddrive(s) from a drivepool because I lost write access
I also got a little brave/creative/foolish and had the thought that maybe I could kick off the duplication process on P manually via the command line, since the GUI wouldn't come up and it wasn't duplicating on its own from the nightly job.
So I did a dpcmd check-pool-fileparts P:\ 1 which completed successfully. When I checked F in clouddrive, the usage has increased and there's 1TB+ queued. So it looks like it's duplicating. That's one good thing so far!
-
Shane reacted to MikeRotch in File distribution not optimal. How to see which file(s) are affected?
Applied some more Google-Fu and found this command:
GWMI -namespace root\cimv2 -class win32_volume | FL -property DriveLetter, DeviceID
which gave me different GUIDs than from diskpart. I was able to match the volume from the logs to the GUIDs provided by that command. I think I have corrected the permissions from https://community.covecube.com/index.php?/topic/5810-ntfs-permissions-and-drivepool/
and will try a balancing pass.
Thanks again,
-
Shane reacted to dan66215 in Unable to remove clouddrive(s) from a drivepool because I lost write access
The detach was successful. I had to do a reboot between the detach and the re-attach. However, when I re-attached F, the configuration screen came up and I selected another physical drive on which to put the cache. That was some time ago and it has not yet appeared in the GUI. I believe it's adding it, but maybe very slowly, as the CloudDrive app is using about 40% of the CPU. The other detached clouddrives (formerly M & K) are visible in the GUI but have lost their drive letters. Also, the DrivePool GUI will not come up, though I have access to all the lettered drives via Windows Explorer and the command prompt.
I guess I'll give it overnight and hope that F: gets re-attached and that is the cause of the Drivepool GUI not responding.
-
Shane got a reaction from Slaughty101 in Wrong SSD size report in DrivePool
I'd suggest opening a support ticket with StableBit.
-
Shane reacted to Wendo in Drivepool not balancing
For anyone that finds this later. The system is now balancing for free space on each drive.
At this point the only thing I can assume is that because one drive was over 90% full, it just completely broke balancing with the Drive Space Equalizer plugin, and that failure cascaded and broke balancing for all plugins. I had Prevent Drive Overfill set to not go over 90%, but I'm wondering if any plugin will actually bring a drive under 90% usage (if that's what it's set to) once it's already gone over. All the text seems to imply it will just stop it going over 90%; nothing about recovering from such a situation.
I disabled all plugins except the disk usage limiter and told it not to put any unduplicated data on DRIVE2. That ran successfully and got DRIVE2 under 90%, and after that and enabling the other plugins it started working exactly as I'd expected it to do so for the last year.
-
Shane reacted to RetroG in Need to replace MediaSonic ProBox that died
Regarding power:
To use a PSU without a motherboard, you need an "ATX Jumpstarter", AKA an ATX connector that connects Pin 14 to ground.
You can also tie this to a power switch, but it must be a latching type (most cases have momentary push-button style power buttons).
If you want a little future expandability (and the ability to use fewer and longer cables), you can replace the SAS breakout brackets with a SAS expander; my personal pick would be the AEC-82885T (although that uses Mini-SAS HD connectors, 8643+8644).
This would allow you to use SATA disks with longer cables (SATA cables use a lower signaling voltage) and to connect just one SAS cable to each JBOD. (Yes, you are connecting 8 disks using one 4-lane 6Gb or 12Gb link, but it's barely noticeable for spinning drives.)
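A back-of-envelope check of that "one 4-lane link for 8 spinning disks" claim, sketched in Python (the drive throughput figure is an illustrative assumption, not a measurement):

```python
# Rough bandwidth check: 4-lane 6Gb/s SAS link vs 8 spinning drives.
# Assumes 8b/10b encoding on 6Gb/s SAS and ~250 MB/s peak sequential
# throughput per HDD (an assumed, generous figure).

LANES = 4
SAS_GBPS = 6        # per-lane line rate for 6Gb/s SAS
ENCODING = 8 / 10   # 8b/10b encoding overhead

link_MBps = LANES * SAS_GBPS * ENCODING * 1000 / 8  # usable link bandwidth
hdd_MBps = 8 * 250                                  # 8 drives at assumed peak

print(f"link: {link_MBps:.0f} MB/s, drives at peak: {hdd_MBps} MB/s")
```

Under these assumptions the link (~2400 MB/s) covers even all eight drives at peak sequential (~2000 MB/s), and real mixed workloads rarely hit peak on every drive at once, which fits the "barely noticeable" observation.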
-
Shane reacted to Damionix in Help with using Sans Digital EliteSTOR ES316X6+B with WorkStation (Dell Precision 7920)
Thank you so much for your response. I wish it had come earlier, but I did get it done and configured, and so far everything has been working great.
-
Shane reacted to RetroG in Help with using Sans Digital EliteSTOR ES316X6+B with WorkStation (Dell Precision 7920)
It's an IT-mode controller; it will work fine, even with 4Kn and TRIM. Provided the expander in the JBOD is sane, it should literally be plug and play. (I use a custom-built JBOD this way, on my Windows desktop.)
As for cables... electrically they are both identical; the shape of the connector is the only difference. You can buy cables that go from 8644 to 8088 too. And honestly I would just go with the 9300; the 9200 series is quite ancient now and the price difference isn't much at all, and it provides an easy upgrade path if you want to someday install a better expander or a JBOD that supports 12Gb/s (which you can daisy-chain the EliteSTOR off of). The only gotcha is that the 9300 is going to run warmer.
TL;DR: SAS hardware is insanely flexible; if you can think of it, it'll probably work.
-
Shane got a reaction from Shooter3k in Drivepool balancing
The only ways I can think of to do that would be to: #1, have a pool exclusively for just those drives and that folder (whether by itself or as part of a bigger pool); #2, use File Placement and allow only that folder to use those drives (and I'm not sure if the disk space balancing algorithm is smart enough to figure out what you want from that); or #3, turn off automatic balancing and manually spread the files in that folder across the poolparts of those disks yourself.
-
Shane reacted to Christopher (Drashna) in Scanner does not save progress when switching drives?
To add to this, yes, the progress is saved. The "scanning" percentage is based on what needs to be scanned currently, not on the whole disk. Which can be misleading.
That said, you can see how the drive is tracked here:
https://stablebit.com/Support/Scanner/2.X/Manual?Section=Disk Scanning Panel#Sector Map
And you may note that each sector has a date when it was last scanned. Each region is tracked, and different regions on the same disk can and will have different dates and times. Over time, this should actually help the software to "learn" when to scan the drives, by picking times when they are much less likely to be throttled.