Covecube Inc.



  1. Hello. I believe there is no such option in Scanner, but it could be useful for some (at least for me). I would like to open a request to add an option to dump SMART attributes to log files (ideally one file per physical disk), with all parameters (even unrecognized ones), on a schedule set in the options, e.g. every 5 minutes. It would be great if it was a nice CSV file, but clean TXT would also do. The layout could be: columns for parameters, rows for dump/log times and values, so it would be easy to plot charts from it. I ask about it as I have a small 128GB SanDisk SSD, it its sh
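Until such an option exists in Scanner, a scheduled script could approximate this with smartmontools (`smartctl -A`). The sketch below is an assumption-laden stopgap, not a Scanner feature: it parses the standard `smartctl -A` attribute table and appends one timestamped CSV row per run (the device path and log file name are illustrative).

```python
import csv
import datetime
import subprocess

# Parse the attribute table printed by `smartctl -A <device>`
# (smartmontools). Assumed column layout:
# ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
def parse_smart_attributes(output):
    attrs = {}
    in_table = False
    for line in output.splitlines():
        if line.lstrip().startswith("ID#"):
            in_table = True
            continue
        parts = line.split()
        if in_table and len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = parts[9]  # attribute name -> raw value
    return attrs

# Append one row per run, e.g. from Task Scheduler or cron every 5 minutes.
def dump_row(device="/dev/sda", log_file="smart_sda.csv"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = parse_smart_attributes(out)
    with open(log_file, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:  # write the header only on the first run
            writer.writerow(["timestamp"] + sorted(attrs))
        writer.writerow([datetime.datetime.now().isoformat()]
                        + [attrs[k] for k in sorted(attrs)])
```

The resulting CSV has one column per attribute and one row per sample, which is exactly the layout that makes charting easy.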
  2. @gd2246 I asked a similar question some time back. No, there is no way to force an equal "%" used on every disk (for example 10% used on 1000GB and 500GB = 100GB and 50GB) in real time. I am also hoping for such a feature to be introduced.
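For reference, the arithmetic behind such an equal-% layout is simple: the pool-wide fill fraction is total data divided by total capacity, and each disk's target is that fraction of its own capacity. A minimal sketch of just the math (not a DrivePool feature):

```python
def equal_percent_targets(total_data_gb, capacities_gb):
    """Target GB used per disk so every disk sits at the same fill %."""
    fill = total_data_gb / sum(capacities_gb)  # pool-wide fill fraction
    return [round(fill * cap, 1) for cap in capacities_gb]
```

With 150GB of data on a 1000GB + 500GB pool, `equal_percent_targets(150, [1000, 500])` returns `[100.0, 50.0]`, i.e. 10% used on each disk, matching the example above.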
  3. Hello. I think that I might have encountered a bug in DrivePool's behavior when using torrents. Shown here on a 5x 1TB pool. When a new file is created on a disk (that is part of the pool) that reserves X amount of space in the MFT but does not preallocate it, DrivePool adds that X amount of space to the total space available on that disk. DrivePool is reporting the correct HDD capacity (931GB) but the wrong volume size (larger than possible on that particular HDD). To be clear, the file is not created "outside" of the pool and then moved onto it; it is created on the virtual disk (in my case E:\Torren
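For anyone trying to reproduce the "reserved but not preallocated" situation outside a torrent client, a hedged sketch: a file whose logical size exceeds the space actually consumed can be made by seeking past the end before writing. The path is illustrative, and on NTFS a torrent client would typically set the sparse-file attribute instead, but the reported-size-versus-allocated-space mismatch is the same.

```python
import os
import tempfile

# Create a file with a 1 GiB logical size without writing 1 GiB of
# data: seek past the end and write a single byte. On most filesystems
# the skipped range is not allocated on disk.
size = 1024 ** 3
path = os.path.join(tempfile.gettempdir(), "reserved_not_preallocated.bin")
with open(path, "wb") as f:
    f.seek(size - 1)
    f.write(b"\0")

print(os.path.getsize(path))  # logical size: 1073741824
os.remove(path)
```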
  4. SnapRAID does parity file-based, so you can defrag your HDDs or delete/add files and it will calculate parity only for those changes. You can also force it to recalculate parity for all files if you want. For me, on a 5x 1TB pool it runs at 300-500MB/s, which is not too fast, as each drive can do between 120-180MB/s, but my NTFS is compressed, so there is additional CPU load (plus I have a lot of small files, so the HDDs can't hit their max transfer speed). Some people hit 1500MB/s when calculating parity and 2500MB/s when checking hashes on pools with more HDDs.
  5. @Umfriend I have a random collection of hard drives, from 80 to 2000GB. I would like to put them all into one pool. One reason is that SnapRAID is faster with more drives, as it calculates parity from all physical HDDs at once. Second, if data is distributed across more disks and something goes wrong, less of it will be lost. Third, I just "feel better" (yes, that is totally personal preference) when all of the HDDs in the pool are used instead of spinning for nothing. The default file placement is great if all of your HDDs are the same size. It is great for my 5x 1TB pool, b
  6. But isn't there any way I could change from absolute values to %? The only option is to wait for the developer to add it, I guess.
  7. Thanks for your answer @Christopher (Drashna). Would it be possible for the developers to create a new file placement strategy that replicates the behavior of the Drive Space Equalizer, but in real time? So users could switch between "place on the HDD with the most space" and "use an equal % of space on every HDD".
  8. Maybe I am simply misunderstanding this plugin and it is not able to equalize disks in real time, only after being triggered (manually or automatically). If that is the case, I would be a bit disappointed. I hope @Christopher (Drashna) will be able to confirm or deny this ability of the Disk Space Equalizer plugin. If it cannot do it in real time, maybe it would be a nice feature to consider? Basically, I would like DrivePool to fill all my pooled disks to the same % of space used. If new disks are added, all new data copied into the pool will go onto these new HDDs unt
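The desired real-time behavior can be described as a greedy rule: each new file goes to the disk with the lowest fill percentage, which naturally drains new data onto freshly added disks until all percentages converge. A sketch of that rule as a model of the requested feature (this is not DrivePool code, and it ignores file splitting):

```python
def place_files(file_sizes_gb, disks):
    """disks: list of [capacity_gb, used_gb] pairs; mutated in place."""
    for size in file_sizes_gb:
        target = min(disks, key=lambda d: d[1] / d[0])  # lowest fill %
        target[1] += size
    return [round(d[1] / d[0], 2) for d in disks]  # resulting fill %
```

With a half-full 1000GB disk and a newly added empty 500GB disk, five 50GB files all land on the new disk until both sit at 50%: `place_files([50] * 5, [[1000, 500], [500, 0]])` returns `[0.5, 0.5]`.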
  9. Yeah, I think that's what the Disk Space Equalizer was made for: to equalize space between HDDs with different capacities. I suspect that this plugin might be a bit outdated and thus doesn't work as expected.
  10. @Jaga Tried that settings reset, but it didn't work. As you can see (copying from D: to the DrivePool), it will only put data on those five 1TB disks. The only difference is that the 500GB HDD is attached via a SAS Perc H310 flashed to IT mode, but that shouldn't make any difference. (Don't mind that HDS723020BLA642 MN6OA5R0, as it's the SnapRAID parity drive for the pool, so it is only used when syncing.) I will wait; maybe one of the developers/administrators will be able to provide more information on this error I encountered. I really like DrivePool and it would be perfect for my needs, b
  11. @Jaga No, it doesn't, and that is the problem. I can use manual balancing to equalize all disks (old and new), but if I start copying new data onto the pool, those new drives won't receive any of it. All data will be distributed only between the old disks. So, to make it clear: 1. I create a pool with 5 drives. 2. I copy some data to the pool; the Disk Space Equalizer plugin equalizes the placement of data between all 5 disks in real time. 3. I add a new disk to this old pool. 4. I start to copy new data again, but the DSEP is still placing it only on the 5 disks that were used to create the pool.
  12. @Jaga I do not have any file placement rules, as I don't mind on which HDD files are placed. When I disable all plugins except the Disk Space Equalizer, nothing changes. I made a small experiment with small partitions. I started another pool with 2 partitions (G & F), filled them with some data, and all was balanced +/- OK. Then I added another small partition ( i ) and it was not "noticed" by the equalizer plugin, either before or after manual balancing. Manual balancing works with the Disk Space Equalizer; however, it is not able to work correctly in real time with added disks/partition
  13. Hello! I will post here instead of creating a new topic, as I am experiencing a similar issue. I use the Disk Space Equalizer plugin with the "Equalize by percent used" setting. I had a 5x 1TB pool, and it was distributing new files equally onto those 5 HDDs on the fly. However, after adding one new 500GB HDD and manually balancing the pool, all new files are still placed onto those five 1TB HDDs and nothing goes onto the 500GB HDD, so I have to balance it manually. No settings were changed, and it stopped equalizing files properly after adding that new 500GB HDD. One workaround is to use automat