Covecube Inc.



abudfv2008's Achievements





  1. By the way, another bug: DrivePool stops the whole duplication/rebalancing pass when some file is in use. I replaced a faulty drive and started rebalancing/duplication in the evening, but by morning only 2% was done - it had stopped because I had mRemoteNG open and DrivePool couldn't duplicate its files. I think it could simply skip those files and duplicate everything else; later I could restart duplication and only the skipped files would still need duplication/rebalancing.
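The behaviour suggested in the post above - skip a locked file and move on instead of aborting the whole pass - could be sketched roughly like this. Everything here (the `duplicate_pass` helper, the fake copier, the file names) is illustrative, not part of DrivePool:

```python
# Sketch of "skip locked files, retry later" for a duplication pass.
# copy_one is whatever actually duplicates a single file; here it is
# a stand-in that raises PermissionError for files held open.

def duplicate_pass(files, copy_one):
    """Try to duplicate every file; collect locked ones instead of aborting."""
    skipped = []
    for path in files:
        try:
            copy_one(path)          # may raise if the source is locked
        except PermissionError:
            skipped.append(path)    # remember it and move on
    return skipped                  # a later pass retries only these

# Tiny demonstration with one "locked" file:
locked = {"config.xml"}

def fake_copy(path):
    if path in locked:
        raise PermissionError(path)

remaining = duplicate_pass(["a.jpg", "config.xml", "b.jpg"], fake_copy)
# remaining == ["config.xml"]
```

A second pass over `remaining` (after the locking application is closed) would then finish the job, which is exactly the restart-and-catch-up workflow the post describes.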
  2. I think it's a bug. Four disks, 2 + 3 + 3 + 4 TB in size. All files are set to be duplicated (except a very small portion in one folder), and only 2x duplication is used. There are no huge files like 1 or 2 TB - mostly photos and videos. But somehow 1.59 TB is reported as unusable for duplication. WTF - that's almost the size of the smallest HDD. It happened after replacing one faulty disk; before that, everything was fine.
  3. I think it's some kind of bug.
  4. Does that mean I can now do the following? I have 5 HDDs in a pool with different duplication levels for different folders (no duplication, 2x and 3x). Can I make an upper-level pool consisting of one SSD (or one RAM drive) plus my existing pool, so that only that SSD (RAM) drive is used as a write cache?
  5. I don't think that's a good option. Since CrashPlan doesn't work instantly and is usually very slow, you could end up with a weird mess of files, e.g. if you order DrivePool to rebalance by changing the plugins' order or their activity.
  6. Hello. I've used DrivePool for several months already. For ordinary operations the performance is fine, but with heavy software such as Adobe Lightroom the performance impact is noticeable. The slowdown comes from duplicating files across different drives, some of which are much slower than others, and as far as I have tested I get the performance of the slowest drive. There should be some striping or similar, but I haven't noticed any. I even tried putting the "xmp" files on a separate drive, so that no write operations would interrupt reading - no effect. So I'd like to speed up Lightroom by placing the duplicated files on SSD+HDD (photos+videos). A slow initial write is OK, but 100% of the time these files see read operations only (Lightroom). So is it possible to read only from the SSD duplicate, thus gaining maximum speed?
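The read policy requested above - always serve reads from the fastest copy of a duplicated file - amounts to a simple selection rule. The paths and speed ratings below are made-up illustration values, not anything DrivePool actually exposes:

```python
# Sketch: given the locations of a file's duplicates and a rough
# sequential-read rating for each underlying device, always open the
# copy that lives on the fastest one (the SSD, in this example).

def pick_read_source(duplicates):
    """duplicates: list of (path, approx_read_mb_per_s). Return fastest path."""
    return max(duplicates, key=lambda d: d[1])[0]

copies = [
    (r"D:\PoolPart.1\photo.raw", 160),   # HDD duplicate
    (r"S:\PoolPart.2\photo.raw", 520),   # SSD duplicate
]
fastest = pick_read_source(copies)
# fastest == r"S:\PoolPart.2\photo.raw"
```

With such a rule the slow HDD duplicate only matters for the (acceptable) initial write, while every Lightroom read hits the SSD.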
  7. Use File Placement in Balancing. You can directly tell DrivePool to use this or that drive at the folder level.
  8. Hello. I opened a ticket 4 days ago. Any answer?
  9. Hello. I have a problem with DrivePool and Scanner. When I switch to another user, DrivePoolUI and ScannerUI each occupy one processor core at 100%. In the attachment you can see this 25-30% CPU usage (of 4 cores) by each user interface. I haven't noticed such behaviour with CloudDrive yet. Windows 10 Pro, DrivePool Beta.
  10. Well, maybe I haven't explained it clearly. Sorry, English isn't my native language. File Vault is potentially a good product, but it solves a somewhat different task: it is more like keeping hash (SHA/MD5) files alongside the protected ones. That's good for distributing software, but not for ordinary use. Other software knows nothing about these md5 files, so when I move a data file to another place in Lightroom, the md5 stays where it was and becomes useless. Checksum/hash comparison of files against a backup or synced copy is not a problem - there are tools for that. The problem is that the DrivePool duplicates of my files are spread across 6 disks, so it is impossible to compare the PoolPart.xxxx folders with FreeFileSync or any other tool (I would need to rebalance a folder onto only 2 drives to do that). Another problem is that with 3x duplication I would have to compare twice (and rebalance onto 3 drives). Yet another potential problem: when I hash-compare against my backup, corrupted data can be missed, because the good "stripe" may be read during the comparison while the bad "stripe" was read at backup time. So IMHO it is much better to implement hash comparison of duplicates in DrivePool itself.
  11. Hello. I used to compare my data with FreeFileSync (I make backups by simply mirroring the needed data to a NAS). I was forced to do so because I sometimes got corrupted photos (RAW). It is rare, and the drives are fine (of course they have some degradation, but they will live at least another 5 years), but as the number of files constantly grows, the chance of corruption grows too. The problem was that I learned about a corrupted file only when I tried to export the photo from Lightroom. I would look for the photo in the backup... oops, it's corrupted too, because the bad copy replaced the good one during the last sync. So now I run a bitwise comparison before syncing. The problem is that the duplicated data is spread among several disks, so I can't compare the duplicates with FreeFileSync or any other tool. It would be very good to implement some kind of bitwise comparison (at the folder level) to find such corrupted duplicates. Regards. P.S. I think I posted in the wrong thread - this should be in the DrivePool discussion.
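The folder-level comparison asked for above can be sketched with standard tools: hash every file under two directory trees and report files whose contents differ, so a corrupted copy is caught before it overwrites the good one. This is a minimal illustration (the helper names and the demo files are made up), not DrivePool's own checking:

```python
import hashlib
import tempfile
from pathlib import Path

def file_sha256(path):
    """Hash a file's contents incrementally, so large RAWs fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_trees(a, b):
    """Return relative paths present in both trees whose contents differ."""
    mismatched = []
    for fa in Path(a).rglob("*"):
        if fa.is_file():
            rel = fa.relative_to(a)
            fb = Path(b) / rel
            if fb.is_file() and file_sha256(fa) != file_sha256(fb):
                mismatched.append(rel)
    return mismatched

# Tiny demonstration with two temporary trees, one file silently corrupted:
with tempfile.TemporaryDirectory() as d:
    src, dst = Path(d, "src"), Path(d, "dst")
    src.mkdir(), dst.mkdir()
    (src / "ok.raw").write_bytes(b"good data")
    (dst / "ok.raw").write_bytes(b"good data")
    (src / "bad.raw").write_bytes(b"good data")
    (dst / "bad.raw").write_bytes(b"corrupted")
    diffs = compare_trees(src, dst)
```

Run against two PoolPart folders (or a folder and its NAS mirror) this flags `bad.raw` as the mismatched file; deciding which of the two copies is the good one still needs a third reference or a stored hash.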