Umfriend

Members
  • Posts: 1001
  • Days Won: 54

Everything posted by Umfriend

  1. That is actually not correct. Sure, the DC computer components will only use what they need, but the PSU that transforms AC to DC will _always_ draw more watts than it delivers and, AFAIK, significantly more when the DC load is far below its optimal output. Perhaps some are so good that the loss is small but, really, will you ever use 40% of that PSU in such a machine? Anyway, who am I to nag, I run Rosetta@Home on my server so I'm wasting energy too (although it's for science & medicine, and at least my PSU is used in its efficient zone).
  2. FCOL, a 1200W PSU? Get some splitters! Them's to be used with like dual X295 AMD cards having a TDP of 500W and a 150W CPU or something. I wonder what kind of efficiency they reach when you're actually drawing like 120W (10%)? On the other hand, what's another 170 bucks...
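     To put that 10% point in rough numbers, a back-of-the-envelope sketch in Python; the efficiency figures are assumptions for illustration, not measurements of any particular PSU:

     # Wall draw = DC load / efficiency; everything above the DC load is lost as heat.
     def wall_draw(dc_load_w, efficiency):
         """Watts pulled from the outlet to deliver dc_load_w to the components."""
         return dc_load_w / efficiency

     dc_load = 120  # W actually used by the components
     for eff in (0.70, 0.80, 0.90):  # assumed efficiencies at low load
         drawn = wall_draw(dc_load, eff)
         print(f"{eff:.0%} efficient: {drawn:.0f} W from the wall, {drawn - dc_load:.0f} W wasted")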
  3. Nope, that seems weird to me. I doubt it is due to DrivePool but I have no clue what it could be. If it does not occur anymore without you deleting the old backups then I'd leave it as is. I expect Drashna to chime in to verify it is not DP related. Otherwise, you might want to go to http://forum.wegotserved.com and ask there. Looking at the date info though, could it be that the first backup on the 8th actually started on the 7th, close to midnight, and ended on the 8th?
  4. No, it'll just be the first day that you'll see this. Hi Jackietools
  5. My first WHS 2011 build was with a Celeron G630 (or G530, not sure anymore). It did client backups, server backups, access to shares, ran Rosetta@Home and streamed using Serviio. Never a hiccup. Not sure how it would have handled two video streams, and I'm not sure Serviio ever had to do transcoding (which can be CPU intensive, I am told). I just upgraded to an i7-3770 for fun, really.
  6. Funny, I had issues with the ST2000DM001 where two crapped out, one shortly after the other. Now have the ST2000VM001, not for more than a year yet, but I have _faith_...
  7. I understand. Can't help you there but I'm sure Drashna will come back to you soon. I would not be surprised if they fix this shortly. 970GB files are, I guess, just not that common, so this may be an unforeseen situation.
  8. Drashna will say, but I am going out on a limb here with a suspicion: VOL02 has not reached the threshold (limit of 90% full or less than 100GB free) yet and so DP tries to write it there. It seems as if the threshold parameters are evaluated prior to, or excluding, the actual file to write. A dirty workaround might be to set the "Or this much free space" slider to 600 GB, copy the file, and set it back again. Of course, if meanwhile other files are written to the Pool as well, they'll end up at VOL3 as well until the slider is back at the original level. I can understand your English perfectly well.
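     A rough sketch of the behaviour I suspect; this is purely my guess at the placement logic, not Covecube's actual code, and the thresholds are just the defaults mentioned above:

     # Suspected behaviour: eligibility is judged on current free space only,
     # so the size of the file about to be written is never subtracted.
     def threshold_reached(free_bytes, total_bytes):
         ninety_pct_full = (total_bytes - free_bytes) / total_bytes >= 0.90
         under_100_gb_free = free_bytes < 100 * 1024**3
         return ninety_pct_full or under_100_gb_free

     def pick_volume(volumes, file_size_bytes):
         # volumes: dict of name -> (free_bytes, total_bytes)
         for name, (free, total) in volumes.items():
             if not threshold_reached(free, total):
                 return name  # file_size_bytes is ignored -- hence my suspicion
         return None

     With logic like that, a volume that is still below the threshold gets picked for the 970 GB file even if it cannot hold it, which would explain the failed copy.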
  9. Drashna, that is good to know! I can confirm I can restore just the database (or a complete machine).
  10. About the backup. I would think that one would never want an OS to back up a database, that is something the DBMS should do itself. However, I run SQL Server on a client and the databases are in fact backed up by WHS2011. So much so that SQL Server is actually aware of the backups WHS2011 takes. No clue what MS did there but I like it. Just to test, I restored one and indeed could attach it and it was the full DB. Of course, I have no clue how that works when actual transactions occur on the DB during backup. My point is, maybe Server Backup, which AFAIK uses VSS as well, can do the same.
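     If you want to see for yourself that SQL Server registered those externally taken (VSS) backups, they show up in msdb. A minimal sketch, assuming pyodbc and a local default instance; the connection string is a placeholder:

     # Lists the backups SQL Server knows about; VSS/snapshot backups have is_snapshot = 1.
     import pyodbc

     conn = pyodbc.connect(
         "DRIVER={ODBC Driver 17 for SQL Server};"
         "SERVER=localhost;DATABASE=msdb;Trusted_Connection=yes;"
     )
     rows = conn.execute(
         "SELECT database_name, backup_start_date, type, is_snapshot "
         "FROM dbo.backupset ORDER BY backup_start_date DESC"
     ).fetchall()
     for db, started, backup_type, snapshot in rows:
         print(db, started, backup_type, "snapshot" if snapshot else "native")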
  11. I wonder about this too. I know Dropbox can't handle SQL Server databases. I always assumed DP would not do well given that SQL Server constantly locks files, but on the other hand, all I/O operations go through NTFS so who knows?
  12. No service needs to be disabled to run DP, it'll run clean out of the box. I am pretty sure WHS will wipe the drive it is installed on entirely, not just the first partition if there are more; it claims one HDD in full. Serviio runs like a charm for me, even with a Dashboard-integrated interface that, although no longer maintained, still works with the most recent version of Serviio.
  13. Actually, that is what I do: 2 x 2TB Pool, full duplication, and I back up one of the two. Should I need more storage then I'll add another 2x2TB in a separate Pool and back up _one_ drive of each Pool.
  14. No, there is no catalogue. In fact, now that files are split amongst two drives, the probability of you losing *some* files has increased. Do you have/make backups? What you did is what I'd have done had I wanted the Pool to be E:\. Again, I think this is ill advised unless really necessary. I would have expected (but failed to mention) that it would balance (unless sizes were very different), at least overnight. Maybe a DIR /s/n/b > list.txt or something might help, but I don't know how it deals with hidden folders etc. And of course, it'd be a snapshot only.
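     A rough scripted take on that DIR idea; os.walk does not skip hidden folders, so files inside the hidden PoolPart folders would show up too. The drive letter is a placeholder:

     # Writes one file path per line for everything on the drive, hidden folders included.
     import os

     drive_to_list = "D:\\"  # placeholder: the Pool drive or one of the underlying drives
     with open("list.txt", "w", encoding="utf-8") as out:
         for root, dirs, files in os.walk(drive_to_list):
             for name in files:
                 out.write(os.path.join(root, name) + "\n")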
  15. This can be done through Computer Management -> Storage -> Disk Management. I believe though that this is not best practice and that Covecube advises to use a high-in-the-alphabet letter for the Pool. Not sure what OS you are running, but if it is a server OS and the links are to shared server folders then I would consider to: 1. Create folders on E: with names from which you know what shared folders they represent. 2. Move the contents from the shared folders to those "copied" folders. 3. Create the pool and give the Pool a drive letter like P:. 4. Use the server OS's functions to move the shared folders; they will now be in the hidden Poolpart.xxxxxx folders. 5. Move the contents from the copied folders back to the shared folders on E:, now within the Poolpart.xxxxxx folders. It might be that you can skip steps 2 and 5, but then I am not sure whether it'll take a lot of time; it probably will, if only because DP will move the data to the two physical drives. But it really also depends on why you need the paths to remain the same. I would also wait for Drashna to chime in ;-)
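     For steps 2 and 5 (the plain content moves), a minimal sketch of what I mean; both paths are placeholders (including the PoolPart folder name), and the destination folder is assumed to exist already:

     # Moves everything from a staging ("copied") folder into the shared folder
     # that now lives inside the hidden PoolPart folder.
     import os
     import shutil

     src = r"E:\CopiedPhotos"                          # placeholder staging folder
     dst = r"E:\PoolPart.xxxxxx\ServerFolders\Photos"  # placeholder: use your real PoolPart name

     for entry in os.listdir(src):
         shutil.move(os.path.join(src, entry), os.path.join(dst, entry))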
  16. I think there is at least one caveat there though. Suppose the Pool has X drives, each with their own type of files. Suppose one of the drives is full. The next file of the type intended for that drive will spill over to one of the other drives. If that one fails, you've lost a file and you'd not know it. Of course, it would be sensible to monitor the use of individual drives then and ensure that scenario doesn't happen, but still... (one of the reasons I prefer duplication & backup and dislike the idea of file placement rules).
  17. I can answer the first: if one drive fails, you lose the data on that one drive. The other is still reachable. In fact, there is a hidden poolpart* folder within which all that data is simply accessible through explorer, as DP stores everything in NTFS. It does not split files between drives (like RAID 0 does). Dunno what a spanned drive and mirroring are, unless you mean one of the RAID levels.
  18. That is great news. When will it be done? ;-)
  19. Yes, that's exactly what I use it for. Add the 2nd drive to the Pool, then set duplication to x2 (Pool Options -> File Protection -> Pool file duplication; I am assuming you are using DP 2.x, not 1.x). It'll take a while but DP will ensure copies of each file are present on both drives. I assume, of course, that the relevant files on drive 1 are already in the Pool, not sitting on the first drive alongside the hidden poolpart folder. If you can see the files through explorer on the Pool drive then it should be fine.
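     To double-check afterwards that every file really sits on both drives, a small sketch; the two PoolPart paths are placeholders for the hidden folders on your two drives:

     # Compares the relative file sets inside the two hidden PoolPart folders;
     # with x2 duplication on a two-drive pool they should end up identical.
     import os

     def relative_files(poolpart_root):
         found = set()
         for root, dirs, files in os.walk(poolpart_root):
             for name in files:
                 found.add(os.path.relpath(os.path.join(root, name), poolpart_root))
         return found

     a = relative_files(r"D:\PoolPart.xxxxxx")  # placeholder, drive 1
     b = relative_files(r"E:\PoolPart.yyyyyy")  # placeholder, drive 2
     print("only on drive 1:", sorted(a - b))
     print("only on drive 2:", sorted(b - a))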
  20. Nope, I must be suffering dainbrammage. I believe I once read there would be an issue with a BMR restore of the OS drive, perhaps in that it would first create a 60GB partition and then not be able to recover the drive as it had more than 60GB of data. But I could be very, very and totally wrong here. If anyone has successfully done a BMR of a >60GB OS drive on WHS2011 then I'll go to a 128GB SSD (and find out how to install on a <160GB drive, to be prepared for a clean install in case of a really messed-up situation).
  21. I would suggest to "move" the server folders from D: to E: using the wizard from the Dashboard. AFAIK you can extend C:, but I *think* that is not best practice/advised. Can't remember why that is, unfortunately.
  22. I use WHS2011. I have my client backups in the Pool. No issues. I have successfully moved server/shared folders through the Dashboard from and to the Pool, in both cases with contents. On moving, mileage may vary, as I believe some think moving folders with data can be troublesome.
  23. That's what I meant indeed. Please prod Alex daily on this, the uncertainty is excruciating! ;-)
  24. I'd really like such an app/add-in. It may be the absolute best worthless app (worthless in that it may never ever find anything) but the comfort would be so great. OT: In the early days I was a console operator on a Unisys mainframe (or, as the IBM operators called it, a mini ;-). Late 80s. Everything was doubled and could survive one failure: 2 CPUs, 2x5 hard drives (huge machines), 2x2 tape drives, 2 consoles of course, 2 printers, 2 communication controllers etc. The database was duplexed as well. Every Wednesday evening we would run the Compare. It'd check the contents of both databases. It had been done since the start in 1985 and went on until the system was replaced in 1998 or so. Wednesday was overtime day (evening) because the compare ran for 2 to 3 hours and exceeded the last shift. It paid well and was very reliable. It never ever found a thing. Everyone was happy.