
Umfriend

Members · Posts: 1001 · Days Won: 54

Everything posted by Umfriend

  1. For "Backup": Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> check/uncheck Duplicated/Unduplicated. If you uncheck Unduplicated for the slower HDDs, then the unduplicated files will end up on the faster HDDs. However, a duplicated file may still end up on two faster HDDs. There is no way to direct one copy to a fast HDD and the other to a slow one. You cannot do this because DP does not actually make "backups" of duplicated files. It makes "duplicates". The difference is that all copies of a file have the same status; there is no "original" vs "backup" distinction. You could work with hierarchical Pools though; that would make it work at the expense of being a bit less flexible with space usage. You might be able to use File Placement Rules, but I don't use them and have no clue how they would work for you. File Placement Rules, as I understand them, would however be very helpful with respect to keeping folders/data on specific HDDs (your 3rd Q).
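A tiny Python sketch of the point about duplicates having no fast/slow direction. The function and disk names are hypothetical, not DrivePool's actual balancer logic; it only shows that both copies are equal and simply land on the two most eligible disks, whichever those happen to be:

```python
def place_duplicate(file_size, disks):
    """Pick two *distinct* disks with enough free space for a duplicated file.

    'disks' maps disk name -> free bytes. Both copies have the same status:
    there is no "original goes to the fast disk, backup goes to the slow
    disk" rule -- the two copies just go to any two eligible disks.
    """
    eligible = [d for d, free in disks.items() if free >= file_size]
    if len(eligible) < 2:
        raise ValueError("x2 duplication needs two disks with enough free space")
    # Here: simply the two disks with the most free space, fast or slow.
    eligible.sort(key=lambda d: disks[d], reverse=True)
    return eligible[0], eligible[1]
```

With two fast disks emptier than the slow one, both copies land on fast disks, which is exactly the behaviour described above.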
  2. Yes, that is what I tried to say: no discernible performance impact.
  3. Simple. Assume you have 13 pooled HDDs. Each contains a hidden PoolPart.* folder. You direct Windows Explorer to P:\ (which I assume is the drive letter you assigned to the Pool). DP will read the PoolPart.* folders on the 13 HDDs and merge the results. Then you select the folder Movies. DP will read the PoolPart.*\Movies folders and merge the results. Etcetera, ad infinitum. There is no reason I can think of to have the merged results stand by for the entire folder structure. That would be slow. And even then, with large Pools, if they remeasure/rebalance, a complete list will be necessary for DP to check duplication consistency and construct a file movement strategy, so it would have to read the entire structure (not the actual files), and that is done rather quickly as well (and transparently to the user; you won't even notice it is working on it). This may sound slow, but the 13 HDDs will be read simultaneously. There are many users here and I have yet to come across one who complains about DP being slow, whether reading or writing (except perhaps for read striping not giving a benefit to some, such as me).
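The merge-on-demand idea above can be sketched in a few lines of Python. This is only an illustration of the concept (the folder layout follows DrivePool's PoolPart.* convention, but the function and paths are made up, not DrivePool's API): each directory listing is answered by reading the same relative folder inside every disk's hidden PoolPart.* folder and taking the union, with nothing pre-computed for the whole tree:

```python
from pathlib import Path

def merged_listing(pool_disks, relative):
    """Union the contents of <disk>/PoolPart.*/<relative> across pool disks.

    Mimics (very roughly) how a pooled directory listing is produced:
    one level at a time, on demand -- no database, just plain folders.
    """
    names = set()
    for disk in pool_disks:
        for poolpart in Path(disk).glob("PoolPart.*"):
            folder = poolpart / relative
            if folder.is_dir():
                names.update(p.name for p in folder.iterdir())
    return sorted(names)
```

In real use all disks would be read in parallel, which is why this scales well even with 13 HDDs.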
  4. I get the impression you think there is some sort of database involved. There is not. It's all just plain NTFS. If you had multiple files named "Test.fil" but stored in different folders, then each file would be stored on one of the pool disks (or more than one if duplication is on) in a folder with that same folder name. Just like what you have currently with 001.jpg...002.jpg in 2016/2017/2018 folders.
  5. Maximum Space

    But duplication is irrelevant here either way, as a space limitation could be solved by adding more 1GB volumes. Still, a 3GB file can't ever be written to a Pool that only has volumes smaller than 3GB.
  6. Yes, that is because you have partitioned the drives and added 1GB volumes to the Pool. DP does not split individual files over HDDs/partitions/volumes. So if you have a Pool that consists of 1GB volumes, the largest file you could ever write to that Pool is 1GB, as that one 1GB file needs to end up on _one_ volume. In theory, you could have a million 1GB volumes added to a Pool and your storage capacity would be huge, but only for files that do not exceed 1GB each. @Spider99: I think you misread his post. A Pool that consists of 1GB volumes can never hold individual files larger than 1GB.
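A minimal sketch of that constraint, assuming a hypothetical volume-picking step (not DrivePool's real placement code): because one file must land on one volume, a file larger than every volume's free space cannot be written at all, no matter how large the Pool's total capacity is:

```python
def pick_volume(file_size, volumes):
    """Choose a pool volume for a new file, or None if it fits nowhere.

    'volumes' maps volume name -> free bytes. A file is never split, so
    a 3GB file in a pool of 1GB volumes simply has no valid target, even
    if the pool's combined free space is enormous.
    """
    candidates = [v for v, free in volumes.items() if free >= file_size]
    if not candidates:
        return None
    # Among the volumes it fits on, take the one with the most free space.
    return max(candidates, key=lambda v: volumes[v])
```

Three 1GB volumes give 3GB of total capacity, yet a single 3GB file is rejected while a 1GB file is placed fine.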
  7. With DrivePool, that is very easy. You have the 1x4TB + 1x8TB Pool. Then you add the other 8TB HDD. Then you select the 4TB HDD and press Remove. DP will move the files off the 4TB HDD and presto. It will take a while as it is actually copying TBs of data. There may be an issue with a few open files but that will work itself out.
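Conceptually, "Remove" is just moving ordinary NTFS files out of one disk's PoolPart folder into another's, preserving the relative folder structure. A rough Python sketch of that idea (the function is hypothetical; real removal also re-checks duplication, balances across all remaining disks, and deals with open files):

```python
import shutil
from pathlib import Path

def evacuate(removed_poolpart, target_poolpart):
    """Move all files from a removed disk's PoolPart folder into another's,
    keeping the relative folder structure intact."""
    removed = Path(removed_poolpart)
    target = Path(target_poolpart)
    # Snapshot the file list first, since we modify the tree while moving.
    files = [p for p in removed.rglob("*") if p.is_file()]
    for src in files:
        dst = target / src.relative_to(removed)
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dst))
```

This is also why removal takes a while: it really is copying TBs of plain files, not updating a database.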
  8. And it never hurts to have a look at Event Viewer just to see if there are IDE, ATAPI or DISK errors.
  9. The only reason I would consider a (Windows) server platform is the (client & server) backup solutions, both file-level and BMR.
  10. Wow. The notion of DrivePool not ever balancing except by selecting the appropriate disk for new files is, given my circumstances, ffing brilliant! How do you ensure DP will never rebalance (unless by user intervention)?
  11. AFAIK, the SSD Optimizer requires a number of SSDs equal to the level of duplication. I.e., one SSD will not work for a Pool that has x2 duplication. Having said that, you could create two x2 duplication Pools and put one Pool on top of those, unduplicated and with the SSD as cache. That would work, I think. Actually, it may not work fully unless the backups made are instant. The reason is that you may have run a backup at T=0, DP may rebalance at T=1, and the drives of one Pool may fail at T=2. At T=1, some files may have been moved to the failing drives; you would not know it, and those files would not be available in the backup made at T=0. This may seem similar to the (normal) loss of data created between backups, but it is in fact a bit worse. I may be wrong though. Now, two HDDs failing at the same time is rather rare, but you could consider using CloudDrive as well. You would have either x2 or x3 duplication, depending on how OCD you are, and then backups to e.g. Backblaze (which I assume allows for versioning and history). Should you consider CloudDrive, then I advise asking here what the best providers are, as some have all kinds of limits/issues (I do not know, I am not a CloudDrive user).
  12. In WHS at least, you can ignore the warning. Sometimes it remains but greyed out, although a reboot tends to help. Mostly, they disappear. I have never seen ignoring a notification suppress _new_ warnings (but then, how would I, as it is hard to notice something that does not occur). For instance, if I insert a new HDD I get a notification that I can use it either as storage or as a backup disk. When I ignore the warning it disappears, but a new one will pop up when I insert another HDD.
  13. Then do as Christopher says but I think you may have lost some...
  14. Not familiar with TestDisk. Best to follow Christopher's instructions. Is F:\ the drive this is about? Explore it through Windows Explorer and see if there is a hidden PoolPart.* folder. You could also attach it to another PC to see if you can read it from there. Did you have duplication active?
  15. First thing I would do is:
    1. Assign a drive letter to the HDD (if it is not there already).
    2. Explore the HDD with Explorer. Make sure Explorer is set to show hidden files. If it shows a PoolPart.* folder, open it, and if it shows data you have likely _NOT_ lost data.
    3. If that is the case, then I think you need to give more details about OS / DP version / HDDs / MB / controller card etc. and better describe what is happening exactly. It sounds to me like you experienced behaviour that should have been addressed way earlier and, as it hasn't, must be now. I don't see you writing about deleting/creating/formatting partitions, that's good at least.
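That first check can be scripted. A small Python sketch (the function name is made up; only the PoolPart.* naming convention comes from the posts above) that looks for the hidden PoolPart folders on a drive you suspect still holds pooled data:

```python
from pathlib import Path

def find_poolparts(drive):
    """Return the PoolPart.* folders on a drive (e.g. "F:\\").

    If one exists and has data inside, the pooled files are most likely
    still intact -- they are plain NTFS files, just in a hidden folder.
    """
    return sorted(p for p in Path(drive).glob("PoolPart.*") if p.is_dir())
```

Note that pathlib lists hidden folders too, so no Explorer settings are needed for this check.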
  16. OK. So would it be possible to set that parameter per controller or per disk? Or is safe I/O tried first, with the deadly violent I/O tried only upon failure?
  17. What kind of issues could it cause and is it possible to set the parameter by controller?
  18. So I just got my SAS controller (Dell H310 flashed to IT mode). It works, but it does not pass SMART data to Scanner. I have looked but could not find it. Isn't there a setting I should change in Scanner? Edit: Never mind, found it. Scanner -> Settings -> Advanced Settings & Troubleshooting -> Configuration Properties -> DirectIO -> check Unsafe (which does not sound scary at all!)
  19. 1x128GB SSD for OS, 1x8TB, 2x4TB, 2x2TB, 1x900GB. The 8TB and 1x4TB+1x2TB are in a hierarchical duplicated Pool, all with 2TB partitions so that WHS2011 Server Backup works. The other 4TB+2TB are spares in case some HDD fails. The 900GB is for trash of a further-unnamed downloading client. So actually, a pretty small server given what many users here have.
  20. That is quite a few disks! Glad it helped and you got to work it out.
  21. It may have to do with other balancers. If you have the Volume Equalization and Duplication Space Optimizer balancers active, they may need to be de-activated, _or_ you need to increase the priority of the Disk Space Equalizer plug-in so that it ranks higher than the other two (but if you have StableBit Scanner, that one should always be #1 IMHO). I have not actually used that plug-in myself though. Edit: Did you activate a re-measure though?
  22. The Disk Space Equalizer plug-in comes to mind. https://stablebit.com/DrivePool/Plugins
  23. Just to confirm: The change in performance is between turning read-striping on and off on your new Server that you are copying files to, correct?
  24. That's a nice collection of hardware! The 8TBs, Archive or regular HDDs? Connected to the main board ports? I am not sure how DP determines the drive to read from; I think it is the "bus speed", which may not actually be the HDD speed. Chris would know.
  25. Oh, I fully agree with your solution to the OP's requirement, I just questioned the requirement. The requirement also slows write performance, as a single file needs to be written twice to the same HDD, and perhaps this even increases the likelihood of a drive failure (not sure, but I would guess that writes do add to wear & tear). Three duplicates is rather secure already; on top of that I would sooner look into actual backup solutions (incl. offsite/rotation). Edit: I never paid much attention to read striping, but when I did, I never thought it worked well. @OP: You've got at least 11 HDDs/SSDs attached, how / on what MB? I would not be surprised if the 8x SSDs were attached through some sort of card/device that affected read striping.