Covecube Inc.

HTWingNut

Members
  • Content Count: 40
  • Joined
  • Last visited
  • Days Won: 1

Posts posted by HTWingNut


  1. Ok, I will open a ticket.

    Problem is it's not consistent and it's somewhat difficult to reproduce. It seems to happen more often with large files. I ended up with a few dozen mismatched files out of the couple hundred thousand I checksummed. This occurred with SHA1, MD5, and BLAKE2.

    The strange thing is that if I take an SHA1 hash of a file on a known good drive (not in DrivePool), for simplicity say the checksum equals "123456ab". If I copy that file to DrivePool, then with read striping off the hash equals "123456ab". If I enable read striping and this anomaly occurs, the hash is now, say, "412453bf". But if I copy that file, even with read striping on, to another external drive, the hash is still "123456ab". So it's not like the file itself changes. After turning off read striping, I ran another checksum pass over all the files and everything was fine.
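
    For what it's worth, this is the kind of minimal repro I've been using, with Windows' built-in certutil (the path is just an example):

        @echo off
        rem Hash the same pooled file twice (the path is just an example).
        rem With read striping on, the two runs occasionally disagree;
        rem with it off, they always match.
        certutil -hashfile "P:\Media\bigfile.mkv" SHA1 > run1.txt
        certutil -hashfile "P:\Media\bigfile.mkv" SHA1 > run2.txt
        fc run1.txt run2.txt >nul && (echo hashes match) || (echo HASH MISMATCH)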


  2. After much angst, I realized that READ STRIPING can affect the calculated checksum hash of a file. Turning off read striping restores consistent checksum values.

    IS THERE ANY WAY TO DISABLE AND ENABLE READ STRIPING FROM THE COMMAND LINE?

    I run a batch script for my backups every evening and recently added a hash check. It looks like I need to disable read striping, otherwise the checksum values are unreliable.

    If read striping could be toggled from the command line, it could be turned off before the backup and back on after. Otherwise I have to leave read striping disabled entirely if I want any kind of integrity check.
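
    What I'd like the nightly batch script to look like is something along these lines. Note that "SetReadStriping" is a made-up placeholder, not a real DrivePool command; a real equivalent is exactly what I'm asking for:

        @echo off
        rem HYPOTHETICAL: "SetReadStriping" is a placeholder for a CLI
        rem toggle that DrivePool does not (to my knowledge) provide.
        SetReadStriping off
        robocopy P:\ E:\Backup /MIR /R:2 /W:5 /LOG:backup.log
        certutil -hashfile "E:\Backup\critical.bin" SHA1 >> hashlog.txt
        SetReadStriping on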



  3. I have a RAID 5 USB DAS for backups: a QNAP TR-004 with 4 x 12TB drives.

    Scanner will scan the DAS volume, but how exactly does that work? It's RAID 5, so only 36TB of the 48TB raw capacity is actually available. Is it even worth scanning? I can't see the drives individually through Windows, only through the QNAP software.


  4. I have been using DrivePool for some time now with a registered copy on my home server.

    I installed a copy of DrivePool (and Scanner) on my laptop and my desktop PC, trial versions only. Both were able to connect to my server for remote access, no problem.

    Same with Scanner: it was able to connect to my home server. I recently purchased Scanner licenses for both my laptop and desktop. Since upgrading Scanner, I can no longer connect either Scanner or DrivePool to my server.

    Also, just for clarification: I understand that I don't need a DrivePool license if all I want to do on my other PCs is remotely control the DrivePool app on my server, correct?


  5. I had a licensed copy of Scanner installed on my home server for quite some time. Worked great. I recently got additional licenses for three other PCs. I used them on a trial basis for a bit and everything was fine. Since I activated the licenses, the other computers no longer show up. Every Scanner device has "remote control" selected.

    I can't see my server from DrivePool any more either. I am only using the trial for DrivePool because I only want to run DrivePool on the server. As I understand it, you don't need a registered copy of DrivePool to remotely manage another machine, is that correct?

    EDIT: I used the advanced option to reset to the original settings on the client machines (laptop and desktop). That seemed to work. Except that now, if I select my server from my laptop, I can't go back to my local laptop unless I try to connect to my desktop first... Video here: https://drive.google.com/file/d/18OBzzyRYL75LdxSvtvzS8d6tUlOa2BxU/view?usp=sharing



  6. Is the "Duplicated" amount shown on the main UI screen under "Manage Pool" equal to the total space consumed by duplicated files? So if all duplicated files are 2x, the actual file data is half that?

    For example, if I specify 5TB of data to be duplicated 2x, then the "Duplicated" value should read 10TB?

    Also, I'm trying to wrap my head around how file duplication works. A duplicated file is seen as a single file by Windows.

    Let's say a hard drive dies: motor stops, head crashes into the platter, whatever (god forbid, please no).

    If I pull a drive from the machine and hook it up to another Windows PC, will I be able to see/copy/move those duplicated files?

    After a drive crash, will DrivePool automatically start re-duplicating those files across the remaining good drives?


  7. I use Macrium Reflect for my home PC backups to my DrivePool. Part of Reflect is a service called "Image Guardian" that prevents backup files from being opened or moved by any other program. This triggers constant warnings in Image Guardian that DrivePool is trying to move the files. Maybe this needs to be a Macrium fix, but is there anything I can do in the meantime? I have locked my PC Backup folder to a single drive for now and will see how that goes, but it would be nice to just let DrivePool do its thing. Thanks.


  8. I gave up on SnapRAID. It's too finicky with changed files, and most DrivePool features require automatic balancing enabled, which can limit the effectiveness of SnapRAID. If you basically store data and rarely add new files or change existing ones, it's fine. Otherwise it's not really an option.

    It would be really nice for DrivePool to have some sort of background hash check with a generated log that you could compare against your backups. It could even run during idle times, like when duplication or balancing happens.
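
    Something like this rough batch sketch is what I have in mind (the paths are just examples):

        @echo off
        rem Walk a tree and log "hash  path" for every file, using
        rem Windows' built-in certutil (paths are examples only).
        (for /r "P:\Data" %%F in (*) do (
            for /f "skip=1 delims=" %%H in ('certutil -hashfile "%%F" SHA1 ^| findstr /v /c:"CertUtil"') do (
                echo %%H  %%F
            )
        )) > pool_hashes.txt
        rem Generate backup_hashes.txt the same way on the backup set; the
        rem path column would need normalizing before a clean fc or diff.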


  9. All I can say is I have two 14TB WD and three 12TB WD drives, all shucked (a mix of Elements, My Book, and Easystore), in my pool and recognized just fine. I also have two 12TB and two 8TB shucked drives in an external USB enclosure, and they work fine too. I did have to tape the 3.3V pin on most of the drives, though, but the system wouldn't even recognize the drive if that were your problem. If you can't get it to work, I would consider exchanging it.


  10. Depends on your use. If you don't do a lot of writes, fill it up as much as you want; read performance isn't affected much. If you do frequent writes, deletes, and rewrites, it may be a bit more sluggish. I usually shoot for 100GB free. Then again, once I reach 80% full, I usually just add a new drive or upgrade to a fatter one.


  11. I recently created a pool from drives with existing data on them and successfully added that data to the pool.

    I set it to spread data evenly across all drives with the "Disk Space Equalizer" plugin.

    I have about 16TB of data on a 64TB pool of 5 drives (2x14TB + 3x12TB). The bulk of the data is on two drives. I'd rather the data be spread evenly so that, in the event of a drive failure, there isn't a nearly full 14TB drive to evacuate; spread across 5 drives, it would be a bit over 3TB per drive.

    Problem is I am looking at about 11-12 minutes per 0.1% of data moved; Task Manager shows only about 10-12 MB/s transfer speeds for the balancing. My math shows this will take over a week to finish. Is there any way to improve this?

    I'm tempted to just delete the data and restore from backups, because that will at least transfer at an average of 100 MB/s or so, which would "ONLY" take about two days. But I'm hesitant to do that because I don't want to delete a full set of data if I can help it.
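
    For reference, the back-of-envelope math behind those two estimates, using my observed numbers:

        @echo off
        rem Balancing: ~11.5 minutes per 0.1 percent means 1000 such steps.
        set /a BAL_MIN=1000*115/10
        set /a BAL_HOURS=BAL_MIN/60
        echo Balancing: %BAL_MIN% min = ~%BAL_HOURS% hours (about 8 days)
        rem Delete-and-restore: 16 TB is ~16,000,000 MB at ~100 MB/s.
        set /a RES_HOURS=16000000/100/3600
        echo Restore: ~%RES_HOURS% hours (about two days)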

    I have not set any duplication yet, and I don't want to enable file duplication until the data has been balanced either. But maybe I should just do that; I'm assuming duplication will take priority over balancing.


  12. Thanks for the feedback. By "just like any other large file", does it copy block-level changes or the full file? Because if that file changes regularly, it doesn't seem very efficient to copy a full 3TB every time. Which of course would be torture.

    Either way, my VHD is on its own 8TB drive right now, and I have a spare 8TB, so I think I will just mirror it. Probably the safest solution.
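
    The mirror itself would just be a nightly robocopy job, something like this (drive letters are examples):

        rem Nightly one-way mirror of the VHD drive (drive letters are examples).
        rem /MIR mirrors the tree including deletions, /FFT tolerates coarse
        rem timestamps, /R and /W keep retries short so a locked file can't stall it.
        robocopy D:\VHD E:\VHD_Mirror /MIR /FFT /R:2 /W:5 /LOG:vhd_mirror.log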

    Thanks!


  13. Has anyone used DrivePool duplication with very large files? How does DrivePool manage them? The VDI image would change relatively frequently, since it basically operates as an OS hard disk. Would the full file need to be duplicated every time, or does it transfer just blocks of data? I'm talking about a virtual drive image that can be 3-4TB in size (or maybe more eventually).


  14. On 2/22/2020 at 12:51 PM, aBe_FX35 said:

    Thank you all for your feedback. Yes, it's very weird that two of those drives failed on me; I'm upset about it. I spent lots of money on the WD Reds. I think the case I have them in is just not the best place for them. I need to build a standalone DrivePool/Plex media PC; any recommendations on a very functional and well-ventilated case for this purpose? Yes, I do monitor the temps, but not all the time. Here is a screenshot of my temps. Are these regular/normal temps?:

    I tried to attach a simple Snipping Tool PNG screenshot and the site is telling me I'm only allowed 10KB???!!! lol

    I've come to like the Fractal Design cases. I just upgraded from a six-bay Node 304 to an eight-bay Node 804 (plus room for 2 more drives mounted elsewhere in the case). Good ventilation and anti-vibration pads; just maybe add a second fan for improved airflow. Even under load all my drives are under 35C. If you want more drive space, they offer the Define 7, which accommodates 14 drives, and the Define 7 XL for 18.

    StableBit Scanner can alert you to high temps too.

    Use imgur.com if you want to post a screenshot.
