
Umfriend
Members | Posts: 1001 | Days Won: 54

Everything posted by Umfriend

  1. I never did get into the BBS world at the time. I simply did not have the money to make it work or pay the phone bills (i.e., my parents did not). My first HDD was actually (1988?) a 20MB ST-506 in an A590 HDD and memory expansion kit, which was connected to my (well, my brother's, but he did not do anything with it) A-1000. It also had a SCSI controller, but those 42MB Quantum HDDs were simply way out of budget. I seem to remember that at the time there were both MFM and RLL HDDs, but it's all a bit gray in my memory (sort of like the Samsung EVO drives I guess). Good times, but had I had some funds at the time, it would have been me who did Google, Facebook and YouTube.
  2. OK, OT, sorry, but do you use a digital joystick with those? I would not know where to get one and how to connect it. And it is part of the experience IMHO.
  3. Now I am really jealous. Do you actually run them through an emulator?
  4. That may all be true for the HGST drives, but it does not apply to the Seagates. The Seagates actually come with firmware that optimises write behaviour; it's "the other approach" that the article refers to. There are no compatibility issues with the Seagates. And yes, due to SMR you may suffer write penalties, but realistically you would have to have quite some heavy I/O to actually suffer/notice this. Opening a Word doc, changing it and saving it? No issue. Movie editing? Not sure, perhaps. OLTP databases? You typically simply do not want to risk degraded performance, so no. But I really believe the use cases where write performance would be an issue are very limited. OK, but who writes 100TB a year (and wants to keep only 8TB of that)? A good review was done here: http://www.storagereview.com/seagate_archive_hdd_review_8tb In the use case of the OP, writing in batches of 20 to 30GB at a time, mostly to retain, these are a great deal IMHO. Oh, and read speeds are crazy.
  5. Actually, I do use 8TB Seagate Archive HDDs, but for my Server Backups, and the data there does change. Never a problem, and I did track average backup performance for a while; rarely did it do worse than my 4TB WD Red Server Backup HDD. I guess OLTP databases may not be a good idea though. For home use, I can hardly imagine the 20GB buffer would not take 99.9% of the write I/O. Having said that, I have had one of the two Archive HDDs fail. I still need to look into this, but it got reallocation errors (both the pending and the permanent type) and Scanner in fact cannot scan the whole HDD but locks up the Server completely. In its defense though, I had a sudden power-out while a backup was being written to it, and that may have been a bit out of spec for the HDD.
  6. So is CloudDrive close to being done? I'm not interested in that at all actually, but I would really like Alex to find the time to implement FileSafe and DupliGroup (couldn't think of a better name yet). Edit: I have no idea how this thread ended up in Scanner, sorry.
  7. "What is going on?" Apparently, EaseUS can do this too. I have never done that, but it has helped me shrink partitions where Windows wouldn't, and it appears solid to me: http://www.easeus.com/partition-manager-software/change-dynamic-disk-to-basic-disk.html
  8. Well, if it is an educational exercise only, then I would opt for the WS10 Tech release. I have that installed on an old machine for that purpose, as preparation for the migration from WHS2011 sometime in the early 2020s.
  9. Well, Christopher will say, but the idea is indeed that you would:
     - Deactivate the license on WHS 2011 (and note the activation key before that, I would think)
     - Turn off WHS 2011 and take the pooled HDDs out
     - Install and activate DP on the W8 box; turn the W8 box off afterwards
     - Install the pooled HDDs in the W8 box
     - Turn on the W8 box and voila, the Pool should be there.
  10. Yeah, I may be nitpicking. Thing is, with DP, if you lose X HDDs, where X is the duplication factor, then you may have lost data as well. So I am not so sure that DP is really better than RAID 5 in that respect. But yeah, even with the increased overhead of DP with duplication over RAID 5 or 6, I favor DP over RAID (0, 1, 5, 6 or 10).
  11. No. With duplication and the SSD Optimizer plugin, you need two HDDs/SSDs for the landing zone (or three with x3 duplication, etc.). I assume the 4 drive bays are 3.5". Could you not use something like http://www.icydock.com/goods.php?id=140? Also, AFAIK, the recommendation is to have real-time duplication _enabled_.
  12. Which would not help you against a number of things. I would recommend offsite backups for important data (all data actually to the extent possible).
  13. First, relax ;-). Second, what OS and version of DP do you run? Third, 2.08 of what: KB, MB, GB, TB? Have you ever had duplication on this system?
  14. That Raptor is like what, $200? You could consider http://www.newegg.com/Product/Product.aspx?Item=N82E16820148696 (a 960GB SSD from Crucial). No clue whether it is a good/reliable SSD really, it would need some research, but at less than $100 extra I would definitely consider going there. If budget is a thing, perhaps switch to an i3-4370; that saves about $180 for the larger SSD and a quiet cooling solution. I'm not sure, but I am guessing that the gaming experience will not differ much, if at all, between the i3 and the i7 for most games (but in some cases it does, e.g. http://techbuyersguru.com/haswellgaming.php). An i5 might be a decent compromise. Personally, I would worry about cooling and, especially, the noise that it will cause. Anyway, 500 GB will store quite a few games and you could always add another later.
  15. Well, what do you know... Running this resulted in Total LBAs Written increasing by 131.2 GB (base 2), slightly more than the 128 GB I tried to write, but of course other things were working as well, so I'll accept that. Based on this I can only conclude that: 1. Copying the same file over and over does something weird, perhaps the SSD firmware indeed does something to save on writes (which I can hardly imagine, and oh how I hate not knowing); and 2. I see no reason anymore to doubt the Bitflock rules & interpretation for this drive. Many thanks for that. So, with 13.8 TB written, I should still be good for at least 10 times the activity so far. Amazing and very nice. Chris, Alex, many thanks.
  16. OK, reverted to VBA, much easier for me and not that slow either. I plan to: 1. Run Bitflock; 2. Run the VBA; 3. Run Bitflock again. The VBA should write 128 x 1GB files which are randomly filled to such an extent that I can't see any SSD optimisation saving on actual writes. Sure, there is repetition here, but it would require the SSD firmware to recognise randomly placed fixed 4KB patterns and then do some sort of deduplication within files. I don't think so. 1TB seems overly ambitious, it would take about 13.5 hrs, mostly because random number generation is slow (PS seems no better on that, BTW). I'll report shortly after running it.
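    The VBA itself is not shown here; purely as an illustration of the same idea, a rough sketch in PowerShell (folder and file names are made up) that writes 128 x 1 GiB of cryptographically random data in chunks, so the firmware has nothing to compress or deduplicate:

        # Sketch only: write 128 x 1 GiB files of random data so the SSD firmware
        # cannot save on the actual writes. The target folder is a placeholder.
        $target = 'D:\WriteTest'
        New-Item -ItemType Directory -Path $target -Force | Out-Null

        $rng    = [System.Security.Cryptography.RandomNumberGenerator]::Create()
        $buffer = New-Object byte[] (4MB)   # fill and write in 4 MiB chunks

        for ($i = 1; $i -le 128; $i++) {
            $file   = Join-Path $target ("random_{0:d3}.bin" -f $i)
            $stream = [System.IO.File]::OpenWrite($file)
            for ($chunk = 0; $chunk -lt 256; $chunk++) {   # 256 x 4 MiB = 1 GiB
                $rng.GetBytes($buffer)
                $stream.Write($buffer, 0, $buffer.Length)
            }
            $stream.Close()
        }
        $rng.Dispose()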
  17. Yeah, I've been meaning to look into PS for years now. That piece of code, it writes a 2,048 B file? Can the first parameter of WriteAllBytes be a string containing a filename? I could do with a quick tutorial on syntax, assignment, declaration, conditional statements, loops and I/O. Any you can recommend? And of course, PS is not more powerful than VBA. It is just more suited for the task at hand. Edit: OK, that does not seem too hard and is a good way to get started with PS, thanks for that idea. But boy, is PS slow. It takes ages to get 2 GB of random bytes... Going to see if it gets quicker using random bigints or something.
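    For what it's worth, WriteAllBytes does take the file name (a path string) as its first parameter and the byte array as its second. A minimal sketch of that kind of snippet, with a made-up path and the 2,048-byte size from the post:

        # Hypothetical example: write 2,048 random bytes to a file in one call.
        # The path string is the first parameter of WriteAllBytes.
        $bytes = New-Object byte[] 2048
        $rng   = [System.Security.Cryptography.RandomNumberGenerator]::Create()
        $rng.GetBytes($bytes)
        [System.IO.File]::WriteAllBytes('C:\Temp\random2048.bin', $bytes)
        $rng.Dispose()

    Filling a pre-allocated buffer with a crypto RNG like this is also much quicker than generating bytes one at a time with Get-Random, which may be where the slowness came from.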
  18. Well, I did consider that, but honestly, aside from potentially compressing (which these SSDs do not do), I do not see how the SSD firmware would be able to understand that it is the same data over and over again. I assume that to the SSD, each file-copy instruction is seen as independent. Also, the source was a SQL Server compressed backup file, so I'm not sure there is a whole lot left to be gained by further compression. But sure, I can create a random file. I just need to think of how to cleanly write 2GB of random data through VBA. Let me think on how to do that in a simple, efficient and elegant way in VBA. For now, I do not think it will change anything.
  19. I can only speculate. I *think* Drashna has stated that SQL Server databases can be stored on duplicated Pools, and I _assume_ that that means that I/O on parts of the database files is done concurrently, not one copy first and then a full copy to the other, but I am not exactly sure. I *think* DP "simply" causes the required I/O operation to occur on both copies, so just the additional I/O necessary. Again, not sure.
  20. Not sure what to make of this. I ran Bitflock first and it stated Total LBAs Written as 14,287,846,244,352 B (13.0 TB). Then I ran a small VBA program: the source file is 1,990,475,776 B and was copied 501 times to the SSD (of which 500 times from the SSD itself). Assuming 2^10 (1024) as the base, that should add, I think, 0.91 TB. I ran Bitflock afterwards and it came back with 15,005,709,762,560 B (13.6 TB) written. Amazingly, Total LBAs Read has hardly changed (about +3.5GB). With 2^10 as base, the increase of 717,863,518,208 B equates to 0.65 TB according to Bitflock. What I had expected was a multiple of 2 or 4 (if e.g. the interpretation of the SSD's reported values depended on SSD size), but I did not find that either. One other value changed between the two runs of Bitflock: Wear Leveling Count increased from 86,013 to 89,869... Any ideas what's up with this or what I could do to test/investigate further?
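    For reference, a quick check of the arithmetic behind those two figures (taking 1 TB as 2^40 B throughout, which is also what PowerShell's 1TB constant uses):

        # Expected versus observed increase in Total LBAs Written.
        $fileSize = 1990475776                          # bytes per copy
        $copies   = 501
        $expected = $fileSize * $copies                 # 997,228,363,776 B
        $observed = 15005709762560 - 14287846244352     # 717,863,518,208 B
        "{0:n2} TB expected, {1:n2} TB observed" -f ($expected / 1TB), ($observed / 1TB)
        # -> roughly 0.91 TB expected, 0.65 TB observed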
  21. If FileSafe is what I think it is and I can also get the grouping feature (for duplication), then I would be more than happy to pay another $10 to $15 to add it to my DP/Scanner setup. That isn't charity, I am a cheap miser.
  22. Thanks for this, nice try but not sure about the cigar! ;-) It now reports 12.9 TB written after 192 days of Power-On time. I develop SQL Server DBs on this machine and basically rebuild them over and over to test T-SQL scripts. That's GBs at a time, being inserted and updated multiple times in one script. Put another way, my Server OS SSD has 3.33TB written after about 120 days and it runs nothing except Rosetta@Home. What I could do to see if I am wrong is write a little script that would copy and delete a 1GB file 1,024 times and see if the figure rises to 13.9TB. It should, right?
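    A minimal sketch of that kind of test script, assuming a roughly 1 GB source file already exists (both paths are placeholders), in PowerShell rather than VBA:

        # Copy a ~1 GB file to the SSD and delete it, 1,024 times, then compare
        # Total LBAs Written in Bitflock before and after. Paths are made up.
        $source = 'D:\Temp\1GB-source.bin'
        $dest   = 'C:\Temp\copytest.bin'
        for ($i = 0; $i -lt 1024; $i++) {
            Copy-Item -Path $source -Destination $dest -Force
            Remove-Item -Path $dest
        }

    As the follow-up posts above show, repeatedly copying the same file did not add the full expected amount, which is why the test eventually moved to randomly filled files.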
  23. Install one or a few 8TB Seagate Archive HDDs? Cheap per TB. If you'd consider that, then I advise googling a bit (storagereview.com, and Christopher and I have written a bit on our experience on this forum). You would then remove the large HDDs, the 4TB ones, first, so that you can write 6TB to those 8TB drives before data would be allocated to the older 2TB drives. I *think* there is also a balancer plugin through which you can cap the amount of data written to individual HDDs. If you'd cap them at current usage, then data to be moved should go to the new HDDs (provided they are there and have space).
  24. Go 10GbE? I wonder if that would work over Cat-6 UTP cables provided length did not exceed, say, 30 meters (Cat-6 does 1Gb/s up to 100m, and 10GbE over shorter runs, I believe), but now I am hijacking.
  25. With default settings, DP allocates to the HDD with the most free space, and it seems as if you have added a 1TB drive to a pool with rather larger ones that are not even filled halfway. It will be quite some time/data before that drive is used.
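    As a made-up illustration of that rule: if the pool has two 4TB drives with 2.5TB free each and the new 1TB drive is empty, new files keep landing on the 4TB drives until their free space drops below 1TB, so roughly another 3TB would have to be written before the 1TB drive sees any data.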