
HTWingNut

Members · Posts: 47 · Days Won: 1

Everything posted by HTWingNut

  1. OK, I finally downloaded PrimoCache and started the trial with a spare 240GB SSD I had lying around. The SSD has good performance, much better than any hard drive at least. In any case, it doesn't seem like it's making a difference. I included the volumes I'm copying files to in the cache task. I set up 16GB of my 32GB RAM as L-1 cache, and my 240GB SSD, using all its capacity, as an L-2 cache, set to use 100% for write caching only. I did a large file transfer (80 GB, 25 files) and saw zero activity on the SSD, and performance was the same as a regular hard drive. My hard drive was tapped out at 100% but there was no sign of any cache activity. Any suggestions? Thanks. EDIT: It seems that you NEED to enable "Defer-Write", otherwise it acts as a read-only cache. I just assumed that it would act as a cache from RAM to SSD to HDD in real time, but apparently you need to enable "Defer-Write" to have it act as an actual write cache. Weird. I thought "Defer-Write" just added a delay before writing to the hard drives, but apparently not.
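The cache-or-no-cache question above can be sanity-checked with a rough write benchmark; here is a minimal sketch in Python (file name and sizes are arbitrary, and nothing here is PrimoCache-specific):

```python
# Rough write-throughput timer. Useful for checking whether a write
# cache (e.g. PrimoCache with Defer-Write enabled) is absorbing writes:
# cached writes should finish far faster than the raw disk allows.
# Note that fsync() forces data down to the device, so a deferred-write
# cache may be partly bypassed by the final flush; the bulk of the
# buffered writes still shows the difference.
import os
import time

def timed_write(path, size_mb=256, chunk_mb=4):
    """Write size_mb of zeros to path and return throughput in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

print(f"{timed_write('testfile.bin', size_mb=64):.1f} MB/s")
```

Run it once against a pooled/cached volume and once against a plain drive; if the cache is doing its job, the numbers should differ noticeably.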
  2. Thanks. So either partition my SSD or go with something like PrimoCache. I think I will look into Primocache. That looks pretty slick. I may want to bump up my RAM to 64GB then.
  3. With the SSD cache plugin, if I have multiple pools does it require a separate SSD per pool? Or can I use the same SSD?
  4. I wouldn't set the sleep to 1 minute. That will kill the drive. Best to set it to idle for something like 20 minutes, because if there's any drive-side maintenance being done, it will shut down after 1 minute, wake, shut down, wake, shut down. Also, are you using StableBit Scanner? Does Scanner happen to be running at the moment? I would check Task Manager to see if the drives are active. I'd also be concerned with cooling in your system. What case are you using? I was running eight 12TB drives in my case at about 35-37C at idle with slight fan movement. At load they never exceeded 42C.
  5. Should the "Ordered File Placement" plug-in override the default balancing of DrivePool? I have "Ordered File Placement" placed at the top of the balancing list. Does "Do not balance automatically" need to be checked when using the "Ordered File Placement" plug-in? Because if I don't turn off auto-balancing, it seems the default balancing rules (span across available drives) will automatically override the Ordered File Placement plug-in. If I manually click "Re-balance", even with "Do not balance automatically" selected, it also does not respect the Ordered File Placement plug-in rule. Thanks.
  6. Is it possible to add the option for thermal throttle control based on whether it's an HDD or SSD? SSD can run at higher temps compared with HDD. I'd rather let my SSD max out at 65C and HDD max out at 50C. Thanks.
  7. Ok, I will open a ticket. Problem is it's not consistent and it's somewhat difficult to reproduce. It seems to happen more often with large files. I ended up with a few dozen affected files while checksumming over a couple hundred thousand files. This occurred with SHA1, MD5, and BLAKE2. The strange thing is that if I get an SHA1 hash of a file on a known good drive (not in DrivePool), for simplicity say the checksum equals "123456ab". If I copy that file to DrivePool with read striping off, the hash equals "123456ab". If I enable read striping and this anomaly occurs, the hash is now, say, "412453bf". But if I copy that file, even with read striping on, to another external drive, the hash is still "123456ab". So it's not as if the file itself changes. After turning off read striping, I ran another checksum of all the files and everything was fine.
  8. After much angst, I realized that READ STRIPING can affect the calculated checksum hash value of a file. Turning off read striping restores consistent hash checksum values. IS THERE ANY WAY TO DISABLE AND ENABLE READ STRIPING FROM THE COMMAND LINE? I run a batch script for my backups every evening and recently added a hash check. It looks like I need to disable read striping, otherwise hash checksum values are affected. If read striping could be toggled from a command line, it could be turned off before the backup and back on after. Otherwise I have to leave read striping disabled if I want to use any kind of integrity check.
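The nightly hash check described above can be sketched in Python; SHA-1 is chosen to match the post, the folder layout is illustrative, and nothing here is DrivePool-specific:

```python
# Minimal sketch of a before/after hash check: build a SHA-1 manifest
# of a folder, re-run it later, and report any files whose hashes
# changed between the two runs (which is exactly the read-striping
# anomaly described in the post).
import hashlib
from pathlib import Path

def sha1_of(path, bufsize=1024 * 1024):
    """Stream a file through SHA-1 so large files don't fill RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def manifest(folder):
    """Map each relative file path in folder to its SHA-1 digest."""
    root = Path(folder)
    return {p.relative_to(root).as_posix(): sha1_of(p)
            for p in root.rglob("*") if p.is_file()}

def mismatches(old, new):
    """Paths present in both manifests whose hashes differ."""
    return {k for k in old.keys() & new.keys() if old[k] != new[k]}
```

Running `manifest()` twice over the same unchanged data should yield an empty `mismatches()` set; with the read-striping bug active, it apparently would not.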
  9. Thanks. Yeah, I set up a specific email already for any home networking issues that arise that gives me notification on my phone. But ability to have it run a script would be great. I'll have to see if there's a way to intercept the email and have it automated to run a script, but that's likely beyond my technical abilities.
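For what it's worth, the "intercept the email" idea can be sketched with Python's standard imaplib: poll an inbox and run a script when an alert subject shows up. The host, credentials, subject keyword, and script name below are all placeholders, not real StableBit or account details:

```python
# Hedged sketch: poll an IMAP inbox for unread messages and trigger a
# user-defined script when a Scanner-style alert subject is found.
import imaplib
import subprocess

ALERT_KEYWORD = "StableBit Scanner"  # assumed subject text, adjust to taste

def is_alert(subject):
    """Decide whether a message subject looks like a drive alert."""
    return ALERT_KEYWORD.lower() in subject.lower()

def poll_and_react():
    with imaplib.IMAP4_SSL("imap.example.com") as imap:  # placeholder host
        imap.login("user@example.com", "app-password")   # placeholder creds
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, msg = imap.fetch(num, "(BODY[HEADER.FIELDS (SUBJECT)])")
            subject = msg[0][1].decode(errors="replace")
            if is_alert(subject):
                # run whatever pause-the-backups script you like
                subprocess.run(["python", "pause_backups.py"], check=False)
```

Scheduled every few minutes (Task Scheduler, cron, etc.), this gets close to "run a script on alert" without any support from the alerting application itself.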
  10. I have a RAID 5 USB DAS for backups. The QNAP TR-004 with 4 x 12TB drives. Scanner will scan the DAS volume, but how exactly does that work? It's RAID 5, so only 36 of 48TB are actually available. Is it even worth scanning? I can't see the drives individually through Windows, only using the QNAP software.
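The 36-of-48TB figure is just RAID 5's one-drive parity overhead:

```python
# RAID 5 stores one drive's worth of parity spread across the array,
# so usable capacity is (number of drives - 1) * drive size.
def raid5_usable_tb(n_drives, drive_tb):
    return (n_drives - 1) * drive_tb

print(raid5_usable_tb(4, 12))  # → 36 (of 4 * 12 = 48 TB raw)
```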
  11. I like the idea that drives are automatically evacuated in the event of a Scanner noted failure with the Scanner balancer. But it would be nice to have the option to execute a user-defined script at the same time. To turn off scheduled backups and stop other tasks reading/writing to/from the pool.
  12. I have been using DrivePool for some time now with a registered copy on my home server. I installed a copy of DrivePool (and Scanner) on my laptop and desktop PC, trial versions only. It was able to connect to my server for remote access, no problem. Same with Scanner; it was able to connect to my home server. I recently purchased a license for Scanner for both my laptop and desktop. Since I upgraded Scanner, I can no longer connect either Scanner or DrivePool to my server. Also, just for clarification, I understand that I don't need a DrivePool license if all I want to do on my other PCs is remote-control the DrivePool app on my server, correct?
  13. I had a licensed copy of Scanner installed on my home server for quite some time. Worked great. I recently got additional licenses for three other PCs. I used them on a trial basis for a bit and everything was fine. Since I activated the licenses, the other computers no longer show. Every Scanner device has "remote control" selected. I can't see my server from DrivePool any more either. I am only using the trial of DrivePool because I only want DrivePool on the server. As I understand it, you don't need a registered copy of DrivePool to remotely manage another machine, is this correct? EDIT: I used the advanced feature to reset to original settings on the client machines (laptop and desktop). That seemed to work. Except that now, if I select my server from my laptop, I can't go back to my local laptop unless I try to connect to my desktop first... Video here: https://drive.google.com/file/d/18OBzzyRYL75LdxSvtvzS8d6tUlOa2BxU/view?usp=sharing
  14. Check the SMART status and see if there's any reallocated sectors.
  15. Anything to indicate that a drive has gone bad? If a drive starts throwing SMART errors, DrivePool can lock your pool. I would send a support request to them ASAP though. Their response is likely to be quicker than on this forum.
  16. I just bought and installed a new hard drive. Brand new. 0 power-on hours in SMART, only 4 starts. Scanner says it was scanned 61 days ago. How does Scanner identify drives? This happened to me once before too.
  17. Thanks. I ended up just setting up two 8TB HDDs mirrored outside of DrivePool through Disk Management. I don't see a need to add them to the DrivePool.
  18. Thanks. But where is this "bypass file system filters" located? edit: nevermind... I see it's under "Manage Pool / Performance"
  19. Is the "Duplicated" amount shown on the main UI screen under "Manage Pool" equal to the total amount used by duplicated files, so that if all duplicated files are 2x, the actual file space is half that? For example, if I specify 5TB of data to be duplicated 2x, should the "Duplicated" value read 10TB? Also, I'm trying to wrap my head around how file duplication works. A duplicated file is seen as a single file by Windows. Let's say I have a hard drive that dies: motor stops, head crashes into a platter, whatever (god forbid, please no). If I pulled a drive from the machine and hooked it up to another Windows PC, would I be able to see/copy/move those duplicated files? After a drive crash, will it start to automatically duplicate those files again onto the remaining good drives?
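If the "Duplicated" value does report physical (on-disk) usage, the arithmetic in the question would look like this; note this is just the question's own reading spelled out, not confirmed DrivePool behavior:

```python
# Physical space consumed by duplicated data, assuming the UI reports
# on-disk usage rather than logical file size.
def physical_tb(logical_tb, dup_factor=2):
    return logical_tb * dup_factor

print(physical_tb(5))  # → 10: 5 TB of files duplicated 2x occupies 10 TB
```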
  20. I use Macrium Reflect for my home PC backups to my DrivePool. Part of Reflect is a service called "Image Guardian" that prevents files from being opened or moved by any program. This triggers constant flags in Image Guardian that DrivePool is trying to move the files. Maybe this needs to be a Macrium fix, but is there anything I can do in the meantime? I have locked my PC backup folder to a single drive for now and will see how that goes. But it would be nice to just let DrivePool do its thing. Thanks.
  21. Take a screenshot of your drivepool app and post here. If you have/had existing data on the drive when you created the pool, it needs to be moved to the "PoolPart.bunchofrandomnumbersandletters" folder. But you need to stop the DrivePool service first. Steps are here: https://wiki.covecube.com/StableBit_DrivePool_Q4142489
  22. I gave up on SnapRAID. It's too finicky with changed files. And most features of DrivePool require automatic balancing enabled, which can limit the effectiveness of SnapRAID. If you basically store data and rarely add new files or change existing ones, it's fine. Otherwise, not really an option. It would be really nice for DrivePool to have some sort of background hash check with a generated log that you could compare with backups. It could even run during idle times, like when duplication or balancing happens.
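The wished-for feature, a hash log you can diff against the same log generated on a backup copy, can be approximated with a small script; MD5 and the tab-separated format here are arbitrary choices:

```python
# Dump a "relative-path<TAB>md5" log for a folder, then diff two such
# logs to find changed or missing files. read_bytes() loads each whole
# file into memory, fine for a sketch but worth streaming for huge files.
import hashlib
from pathlib import Path

def write_hash_log(folder, log_path):
    """Write one 'relative-path<TAB>md5' line per file, sorted by path."""
    root = Path(folder)
    with open(log_path, "w", encoding="utf-8") as log:
        for p in sorted(root.rglob("*")):
            if p.is_file():
                digest = hashlib.md5(p.read_bytes()).hexdigest()
                log.write(f"{p.relative_to(root).as_posix()}\t{digest}\n")

def diff_logs(log_a, log_b):
    """Lines present in one log but not the other: changed or missing files."""
    a = set(Path(log_a).read_text(encoding="utf-8").splitlines())
    b = set(Path(log_b).read_text(encoding="utf-8").splitlines())
    return a ^ b
```

Run `write_hash_log()` on the pool and again on the backup, then `diff_logs()` on the two outputs; an empty result means both copies hash identically.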
  23. All I can say is I have two 14TB WD and three 12TB WD drives all shucked (mix of Elements, My Book, Easystore), in my pool and recognized just fine. I have two 12TB and two 8TB shucked drives in an external USB enclosure and work fine too. I did have to tape the 3.3V pin on most of the drives though, but your system wouldn't even recognize it if that was the problem. If you can't get it to work I would consider exchanging it.
  24. Depends on your use. If you don't do a lot of writes, fill it up as much as you can, reading the data isn't affected much by it. If you do frequent writes, deletes, rewrites, it may be a bit more sluggish. I usually shoot for 100GB free. Then again, once I reach 80% full, I usually just add a new drive, or upgrade to a fatter one.
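The 80%-full rule of thumb above is easy to automate with the standard library; the path is a placeholder, so point it at a pooled drive letter (e.g. "D:\\"):

```python
# Flag a volume once it passes 80% full.
import shutil

def fill_ratio(path):
    """Fraction of the volume's capacity currently in use."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

if fill_ratio(".") > 0.80:
    print("time to add a drive or upgrade to a fatter one")
```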