Everything posted by Shane

  1. Hi, DrivePool doesn't have the option to move files to a particular drive by their access times. However, if you know there are particular files or folders that you will regularly access, you could set a File Placement rule to keep them on the SSD. Otherwise you'd need to try a third-party tool (e.g. PrimoCache?) to use the SSD as an independent cache.
  2. Hi, I'd guess the answer is that if it seems to be working - e.g. you were able to create a pool and it shows up in Explorer and you copied a file to it okay - then you likely won't have problems (besides having to use Windows). That doesn't guarantee that future updates to the program will work on 7 down the line, so you might want to avoid updating unless it's necessary / carefully check the changelog / be ready to revert.
  3. Hi, of StableBit's products I'm least familiar with CloudDrive, but it does need to read metadata and directory data, perform integrity checks, etc. It may be indicating upload verification if you have that enabled. There is also the possibility of external cause (e.g. Windows deciding to access the drive for whatever reason). In any case I'd suggest adjusting the settings under Drive Options -> Performance, particularly enabling both metadata pinning and directory pinning, and increasing your cloud drive's local cache's size (if the cache is too small you will definitely get more cloud reads, and pinning uses cache too).
  4. Can you try starting with one drive in that enclosure and adding more to see when it starts dropping out? Might be a power issue to/with the enclosure?
  5. See if there's any matching events in Windows Event Viewer at those times that could indicate a cause?
  6. Re: "Can't remove drives"

    You can open a command prompt as an administrator and enter the following command:

    dpcmd ignore-poolpart pooldriveletter poolpartfoldername

    where pooldriveletter is the drive letter (including the colon) of the pool - not the poolpart - and poolpartfoldername is the name of the poolpart folder on the drive you wish to remove. This "tags" the poolpart folder so that DrivePool will immediately and indefinitely disconnect it until the reverse command (dpcmd unignore-poolpart poolpartfullpathname) is used.

    Either way, DrivePool will consider the poolpart drive to be missing and it can be Removed without further issue - except that any unduplicated data on that drive will NOT be evacuated, so if there is any such data on the damaged drive you will need to copy it off manually or restore from a backup. Also, since you mention there are multiple damaged drives: even if you have duplication enabled, the duplication level would need to be higher than the number of damaged drives to guarantee that duplicated data is present on non-damaged drives; if it isn't, you would also need to attempt a manual copy/restore.
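    As a hypothetical example (the drive letter and folder name below are placeholders - each poolpart folder has a name along the lines of "PoolPart.<guid>", so check the root of the damaged drive for the exact name, and substitute your own pool's letter):

```shell
:: Assumed names: pool is mounted as P:, damaged drive is D:,
:: and its poolpart folder is PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
dpcmd ignore-poolpart P: PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

:: To reverse it later, note that unignore-poolpart takes the FULL path:
dpcmd unignore-poolpart D:\PoolPart.xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
```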
  7. I'd look at services that interact with the disks or file system at a low level - e.g. programs that add virtual drives, programs such as antivirus that install file filters, that sort of thing - particularly anything that might have had an update around that time. E.g. if you uninstall any 3rd party antivirus, reboot and then DrivePool finds the drives right away... In the Windows Event Viewer, under Windows Logs -> System, you could check if any services are having trouble starting or only finish starting at around the time that the poolparts finish arriving (you can correlate this with DrivePool's log, which is measured in time since bootup, by looking for Event ID 6013 which logs uptime since boot). You could also try installing a different version of DrivePool (e.g. 2.3.7.1570 if you're running 2.3.8.1600) to see if that makes a difference; if installing an earlier version then uninstall your current version first.
  8. Sorry, been very busy.

    Create a new, empty folder on the existing drive you wish to use as the mount point for your cloud drive. Open Windows Disk Management, right-click the cloud drive you wish to mount as a folder and choose "Change Drive Letter and Paths". Click "Add", select "Mount in the following empty NTFS folder", then click "Browse" and select the empty folder you created. Click "OK" until you are back at the "Change Drive Letter and Paths" window. You can now access the cloud drive via that mounted folder, and at this point you can also select the cloud drive's old letter and "Remove" it (you can also do this before adding the mount folder if you wish).

    If you want to create a small volume on an existing disk just for keeping multiple mount folders in one place, away from the boot volume and others, you can use Disk Management (or another partition manager) by right-clicking the Disk you wish to create the volume on and choosing "New Simple Volume". If this is greyed out because the disk is already filled by an existing volume, you can right-click a volume already on that disk and choose to Shrink it (obviously, don't Shrink a volume unless you don't mind it having less space). Enter a small amount (e.g. between 16 and 1024 MB) and proceed; you will then be able to create your new simple volume inside the freed-up space and format it as NTFS with its own drive letter so you can create folder mounts inside it. If you have already run out of drive letters, you will want to free at least one up first - remember that you can remove cloud drive letters before mounting folders to them, you just won't be able to access them until you give them a letter or path again.

    Note: if you are also using DrivePool, do NOT mount a cloud drive as a folder on a pool drive that is using that cloud drive, directly or indirectly, as a pool disk. Recursive pathing is BAD.
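    If you prefer the command line, Windows' built-in mountvol tool can do the same job as the Disk Management steps above. A hypothetical sketch (run from an elevated prompt; E:\CloudMounts\Drive1 is an assumed empty NTFS folder, the volume GUID is a placeholder you'd take from mountvol's own listing, and X: is the assumed old letter):

```shell
:: List all volume GUIDs and their current mount points / letters
mountvol

:: Mount the cloud drive's volume at the empty folder
mountvol E:\CloudMounts\Drive1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

:: Optionally remove the cloud drive's old drive letter
mountvol X:\ /D
```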
  9. If you installed any other software (including 3rd party drivers) between installing Windows and installing DrivePool, I would try installing DrivePool immediately after Windows and seeing if the problem still occurs before installing the others one by one?
  10. If you disconnect the disks of a pool created on one machine with DrivePool to connect them to another machine with DrivePool, the pool will be recognised. Duplication levels will carry over, balancing and placement settings will not.
  11. Correct, DrivePool does not stripe data across disks like RAID 0, 5, etc.; files are stored as ordinary files on each volume. Read-striping, when enabled via Manage Pool -> Performance, will use different access methods depending on file size, disk type, etc., as described here. E.g. if a file is stored on two HDDs it may interleave blocks from each (or stick to one if the other is busy reading/writing a different file), while for a file stored on an SSD and an HDD it will prefer to read from the SSD volume, all else being equal.

    Duplication level always overrides file placement; e.g. if you have duplication set to use 3 disks and you have a placement rule that limits a file to 2 disks, it will still use 3 disks (and the Pool Organization bar in the GUI will report that it cannot satisfy the placement rules).

    Unfortunately the SSD Optimizer balancer and the File Placement rules do not play well together; the only way to achieve "empty all files from the SSD disks into the Archive disks EXCEPT certain ones per my File Placement rules" is to tick "Balancing plug-ins respect file placement rules" and un-tick "Unless the drive is being emptied". This will produce the desired behaviour (for new files; for old files you may have to redo the placement rules), HOWEVER the pool will complain that the file distribution is not optimal AND it will prevent the Scanner plug-in from evacuating the placed files (since both the Scanner and the SSD Optimizer use the emptying function).

    Possible workarounds include: third-party tools that can use an SSD as a cache (I've heard of PrimoCache, Intel RST and AMD StoreMi but haven't tried them myself; EDIT: I have an AMD system, but based on some googling I'd be wary of trying StoreMi without a system backup ready to restore - consensus seems to be that PrimoCache, while not free, is the safest option); third-party tools that can automate moving folders/files between SSD and HDD based on their date (I know such exist, never tried any); setting up sub-pools of SSD-only and HDD-only drives such that a set number of the duplicates of any given file will always be on SSD; or, if your pool has a lot of disks attached via RAID controller(s), setting them up in groups of small arrays (e.g. if you had 18 disks you might do six sets of three-disk RAID 5) so you can take advantage of RAID 5's striped performance and parity protection.
  12. You can use Windows Disk Management to mount a cloud drive as a folder attached to an existing (non-cloud, non-virtual) drive/volume. I do NOT recommend attaching it to your boot drive (e.g. personally I create a small and otherwise empty volume on a second internal drive and mount to folders on that).
  13. You could try increasing the number and priority of the threads that DrivePool uses for remeasuring, as per here, so that it at least takes less time to measure each pool. However I'm not aware of any way to have it measure more than one at a time. Perhaps a feature request?
  14. Yes, that should be safe.
  15. Just in case, note that while Everything will detect a DrivePool pool as an NTFS volume, it won't see any files in it, as DrivePool's NTFS emulation doesn't include what Everything needs to parse it; you can either use Everything to search via the underlying poolpart drives or add your pool under Everything's Options -> Indexes -> Folders.

    But your screenshot, and checking my own pools, gives me a clue: check your File Placement rules to see if "D:\System Volume Information" (where D is the letter of your pool drive) has any rules set on it - if it or any subfolders have a green symbol, it needs to be reverted to default (all drives checked, New drives unchecked, Overflow allowed, 90%); also go to File Placement -> Rules and see if there is anything there that might be influencing it.

    If that's not the case, you may need to delete your "D:\System Volume Information" folder (where D is the drive letter of your pool). This is somewhat involved: Open the Properties of the folder. Click on Security and then Advanced. Choose "Change" to change the current owner. Enter the exact username of your administrator account (you must already be logged in as that account). Tick to Replace owner on subcontainers and objects. OK; confirm that you're replacing the directory permissions. Then open a command prompt run as Administrator and use the following command (where D is the letter of your pool): rmdir "D:\System Volume Information" /s /q It should run without giving you an Access Denied message. Restart your computer.
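    If you'd rather do the ownership steps from the command line, a sketch using Windows' built-in takeown and icacls tools (assuming your pool is D: and you're in an elevated prompt; double-check the path before running, as rmdir /s /q deletes without prompting):

```shell
:: Take ownership of the folder and everything under it
takeown /f "D:\System Volume Information" /r /d y

:: Grant the Administrators group full control, recursively
icacls "D:\System Volume Information" /grant Administrators:F /t

:: Remove the folder and its contents quietly
rmdir "D:\System Volume Information" /s /q
```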
  16. Manually, I suppose one could look for any 12 byte files that are in a poolpart drive other than one where your File Placement rules say it should be placed (e.g. I know Voidtools' Everything utility can use size:12B as a search term) and then try moving it yourself to the "correct" poolpart and running a Remeasure. @Christopher (Drashna) any ideas where to look in the Service log or file logs?
  17. It's very concerning that a drive "fell out" un-noticed, as the pool is supposed to turn read-only when a drive goes missing. Please try each of the following (that you haven't already): Remeasure the pool (Manage Pool -> Remeasure). Enter the command "dpcmd refresh-all-poolparts" in a command prompt run as administrator. Reset the DrivePool settings as per https://wiki.covecube.com/StableBit_DrivePool_Q2299585B (read carefully; try the Alternative Steps if necessary). If none of the above work, I would recommend opening a support ticket with StableBit so they can figure out what is going on.
  18. If Plex is accessing the files in the pool via the network and the others are local, you could try enabling Manage Pool -> Performance -> Network I/O boost. If your pool has duplication enabled you could try enabling Manage Pool -> Performance -> Read striping, if it isn't already (some old hashing utilities have trouble with it; your mileage may vary). There is also Manage Pool -> Performance -> Bypass file system filters, but that can cause issues with other software (read the tooltip carefully and then decide). You might also wish to check whether Windows 11 is doing some kind of automatic content indexing, AV scanning or other processing on the downloaded files that you could turn off / exclude?
  19. 1. If you are using the SSD Optimizer to regularly empty the SSD drives and File Placement to keep a folder on those drives, then there will be a conflict. In the GUI -> Manage Pool -> Balancing -> Settings, under "File placement settings", you could try the following: "File placement rules respect real-time file placement set by the balancing plug-ins" -> unticked; "Balancing plug-ins respect file placement rules" -> ticked; "Unless the drive is being emptied" -> unticked (caution: this may interfere with the Scanner plug-in evacuating faulty drives). Note that you may still receive warnings that the file distribution is not optimal even after you do this, but if the files are now going / being kept where you want, you can ignore them. If you want Scanner to be able to evacuate the SSDs if they become faulty, then this isn't the solution for you.

    2. If you are using Ordered placement to fill the SSD drives first and File Placement rules to keep a folder's files on those drives, then there might be an issue if the SSD drives have no room to accept new content you want kept in that folder. In the GUI -> Manage Pool -> Balancing -> Settings, under "File placement settings", you could try the following: "File placement rules respect real-time file placement set by the balancing plug-ins" -> unticked; "Balancing plug-ins respect file placement rules" -> ticked; "Unless the drive is being emptied" -> ticked. You may instead have to ensure there is enough space by moving other folders' files out of the SSDs (e.g. by additional Placement rules).

    As an alternative to using Ordered placement + File Placement, you could try using just File Placement: a higher-priority rule for "thatfolder\*" to use the SSDs and never any other disks, and a lower-priority rule for "\*" to use the SSDs but be allowed to go to the other disks if the SSDs are too full. I hope this helps.
  20. Try the following in a command prompt run as administrator? dpcmd refresh-all-poolparts
  21. "For duplication, is it like raid 1 or does it work more like mirrored where it makes an exact copy of each file?"

    DrivePool's duplication is basically mirroring, but file-based rather than block-based; if you set (or change) the duplication on a pool or folder to N copies, then DrivePool will try to maintain N identical copies of the pool's or folder's files across the drives in the pool, one copy per drive. E.g. if you have a pool of five drives and you set pool duplication to x3, then DrivePool will keep one copy of each file on each of any three of the five drives in the pool. You have the option of real-time or nightly duplication (the latter will also run as a safeguard even if you've selected the former).

    "My only thing about not duplicating is I heard the scanner would move files off the dying drive to the other drives if it has space, is this true? Would it be good to go to that and the cloud as well to make sure nothing happens? Like get it to move automatically my data if a drive is dying or get notifications if they are. Will the move on their own the data on the bad drive, or would I need to initiate it?"

    StableBit DrivePool does have an option to evacuate drives if StableBit Scanner flags them as bad (e.g. via SMART warning); if it's enabled, it happens automatically (unless it runs out of space, yes). Note that if a drive fails abruptly then there won't be time to evacuate it, which is where duplication is useful. DrivePool and Scanner can provide notifications about failed and failing drives respectively (including via email if configured). If a drive in a pool does fail, the pool will become read-only until the drive is manually replaced (or removed).

    In a business environment, good practice for backups (IMO) is usually considered to be at least three copies of your files (the working copy and two backups) in at least three locations (your work machine, your onsite backup machine and another backup that's secured somewhere else). YMMV.
  22. Hi Darkness2k. As the game is in open beta I'd suggest reporting the crashing to its developers (assuming you haven't already); they can then test it against DrivePool directly. If that doesn't progress, you can also open a support ticket with StableBit.
  23. Hi xelu01. DrivePool is great for pooling a bunch of mixed-size drives; I wouldn't use Windows Storage Spaces for that (not that I'm a fan of Storage Spaces in general; have not had good experiences). As for duplication, it's a matter of how comfortable/secure you feel. One thing to keep in mind with backups is that if you only have one backup, then if/when your primary or your backup is lost you don't have any backups until you get things going again. TrueNAS's RAIDZ1 means your primary can survive one disk failure, but backups are also meant to cover for primary deletions (accidental or otherwise), so I'd be inclined to have duplication turned on, at least for anything I was truly worried about (you can also set duplication on/off for individual folders). YMMV. Regarding the backups themselves, if you're planning to back up your NAS as one big file, then do note that DrivePool can't create a file that's larger than the largest free space of its individual volumes (or, in the case of duplication, the two largest free spaces) since, unlike RAIDZ1, the data is not striped (q.v. this thread on using raid vs pool vs both). E.g. if you had a pool with 20TB free in total but the largest free space of any given volume in it was 5TB, then you couldn't copy a file larger than 5TB to the pool.
  24. In the file "C:\ProgramData\StableBit DrivePool\Service\Settings.json" use an "Override" value greater than -2 for "DrivePool_BackgroundTasksPriority". The maximum possible is 15, but I would recommend against that; instead, increment gradually (e.g. start with 0 or 2) and check whether it causes any problems for other processes.
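    As a sketch of what the edited entry might look like (this assumes the stock layout where each setting is an object with "Default" and "Override" keys - verify against your own Settings.json before editing, and back it up first):

```json
{
  "DrivePool_BackgroundTasksPriority": {
    "Default": -2,
    "Override": 2
  }
}
```

    You will likely need to restart the StableBit DrivePool service (or reboot) for the change to take effect.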
  25. Preventing:

    Do regular (monthly? YMMV) full reads of drives (so that their own firmware can find/refresh iffy blocks) if you don't already have something doing that for you (e.g. scheduled RAID scrubbing or StableBit Scanner or the like). Don't use SSDs as long-term offline storage; it might work, but they're meant to be powered on at least semi-regularly to keep the data charge fresh (at least until someone invents a better SSD). Use ECC RAM (and a mainboard+CPU that actually supports it, which can be a pain). Don't overclock.

    Circumventing:

    Use some form of parity-based checking, e.g. a RAID 5 disk array, a file system that does parity (e.g. ZFS on Linux), or utility software such as SnapRAID or MultiPar. Keep backups!

    TLDR: best (IMO) is to use ECC RAM on a mainboard+CPU that supports it, use RAID 5 arrays with a monthly scheduled scrub, and have backups. Since I'm firmly not in the "money no object" category, however, I mostly just rely on duplication (3x for critical docs), backups, Scanner and MultiPar (I keep meaning to use SnapRAID...).

    "If I go Drivepool route, I need to pull everything out of RAIDS first?" No. DrivePool can add a hardware RAID array to the pool as if it were a single disk (because that's how arrays present themselves to the OS), so you don't need to pull everything out of an existing array if you're happy to just add the array to the pool.