VhyVenom

Members
  • Posts: 10
  • Joined
  • Last visited

VhyVenom's Achievements
  • Member (2/3)
  • Reputation: 0

  1. Is there basic functionality to (re)check whether a new update exists? I can't seem to find a button to trigger a recheck after I clicked "No". Btw, if you manage a remote pool, updating the DrivePool software locally won't update the remote pool (which makes sense). I just couldn't get the remote pool server to see that an update was available, and had to download and install the update manually. -v
  2. Thank you for your reply, Christopher! It is helpful to be able to bounce thoughts and questions off knowledgeable folks. I'm hearing through the grapevine that EFS is dead (though yes, some users may still have it, since in NTFS/Windows it is being deprecated but not yet fully removed). However, when we talk about potential users... would you believe me if I said there were a total of 7 (as in seven! not hundreds, not thousands, 7 total) actively updating users of a certain infrastructure? Sometimes it doesn't make sense to support some stuff.

     I'd very much like to see that VerifyOnCopy flag/checkbox integrated into the DrivePool options. I'm not sure of the performance impact, but it seems like a rather important option for day-to-day operation (one that could/should be disabled for the initial seeding, balancing, and duplication of data into a DrivePool, but enabled after that). Christopher, when you say "As for DrivePool, if it detects that the files are different,...": in what scenario does DrivePool do this detection? I'm not sure if you mean after you enable VerifyOnCopy, or during another scenario such as read striping (there's a small verify-after-copy sketch after this list).

     ReFS's "killer" features are quite dependent on Storage Spaces; running ReFS on its own appears to have limited benefit today. If you run Storage Spaces (nobody here, right?) then the eventual maturing of ReFS will be welcome, but both have a long way to go. Storage Spaces is an interesting platform, and there are some SIGNIFICANT advantages to using it for certain I/O needs, but taking advantage of them will likely require a setup beyond most people's reach (think legitimate $100K+ SAN replacements built on top of Storage Spaces).

     Regarding regular data-integrity checking (or the semblance of it), I absolutely think it's a feature that "I wanna have". It falls into the same category of value-add that Scanner brings to the table. I will open a separate thread specifically for questions about data integrity and verification solutions; perhaps some folks will have input. Thank you, ~v
  3. Ahh, the MANUAL. For a kid who said he'd never read a manual... it's time to read the manual. Just FYI, maybe make it really stand out on the website? I had no idea the manual existed; I was actually searching through old blog posts (which are extremely helpful) and forum threads to glean details about the features, but a manual is a great way for me to familiarize myself with the underlying concepts and execution.

     I see read striping is a more complicated matter than initially meets the eye. I did a test where I basically grabbed random files, some ginormous, some small, some medium, and initiated a transfer to a local SSD. That is when I understood just why you would want several different algorithmic approaches to handle different requests during read striping, especially when dealing with platter drives. I am pleased to report DrivePool successfully maintained just about a full pipe throughout the transfer thanks to the algorithmic approach in its read striping implementation.

     For just one person reading a single file from the DrivePool, it seemed odd why DrivePool wouldn't just blindly do an even split between the duplicated copies. Throw in some more users, or a workflow that handles different kinds of data (say batches of pictures which are small, plus a bunch of home video which is medium, plus ripped Blu-rays which are LARGE), and you see that the approach is rather cleverly thought through for multiple use cases (see the read-striping sketch after this list). All in all I am pleased with what I am seeing with read striping; at the end of the day my transfer pipe is regularly FILLED to capacity thanks to it, and I think that is probably the main metric that matters here. Now the only thing I need is a bigger pipe (10GbE?) to fill. Thank you, v
  4. Awesome. I appreciate the responses. One last question on this line of thinking: is delayed duplication ONLY for new files? I couldn't imagine modifications being tossed onto a feeder disk and then sent out to the other disks at a later point if you enabled delayed duplication. Hope that makes sense. Thanks, ~v
  5. Hi, what are the limits of read striping? For example, if I transfer a file from a folder that is 3x duplicated, I expect it would pull from all 3 disks holding that file, correct? How come it starts off with 3 disks and then falls to 1 after a period of time? I would expect it to use all 3 disks for read striping for the whole copy. All the pool's disks are on the same controller. For testing I am copying from the pool volume to the OS SSD (which is on a different controller). When I transfer from the DrivePool volume over the network it engages 3 disks, then falls back to one after about 15 seconds. Also, what do the light blue and dark blue bars indicate? How should I interpret the read striping percentage? Will go do some searches. ~v
  6. Hi, so read striping is wicked for reads. It looks like it will pull from multiple disks when transferring. Sounds great to me! But now the question: what happens when a remote volume or file is modified and saved? For example, a VHD or TrueCrypt volume mounted remotely, modified, then unmounted/committed. The saving/committing doesn't write the whole volume back; it only writes the modified blocks. How does DrivePool know where to store the modifications? Does it write the modified blocks back to all copies at the same time (roughly the idea in the mirrored-write sketch after this list)? I feel this is basic file system functionality 101 and of course DrivePool does this; it's late and it's possible I'm not thinking straight, but a second confirmation would be helpful. Thanks, ~v
  7. It seems that Scanner scans take a MASSIVE amount of time for 4-6TB disks. A 6TB disk appears to take 15 HOURS, and it will only scan 1 disk at a time (since they are on the same controller). Any way to speed it up, or to enable more than one disk scan per controller (say 2 disks)? Otherwise it means the first week of the month is spent surface scanning. (A rough throughput estimate is sketched after this list.) Also, stats: can it show how long the scan took? Should be an easy add. ~v
  8. Thank you for your response, Christopher. I also thought it was Windows EFS based, but that doesn't sound right as EFS is dead. I would like to know how that applies to Bitlocker. I watched several of the Home Server Show episodes to better understand.

     From my data gathering, StableBit DrivePool's current response to "bit rot" is a rather roundabout detect-and-notify approach that would appear to not fully protect against the issue. To further explain: in the videocast the term "bit rot" was loosely defined by Alex as what happens when data is not accessed regularly and the underlying medium "rots"/decays silently, for example bad sectors start appearing or a part of the disk can't be read. I would refer to this as "underlying-medium rotting". Traditionally a user would not see the rotting occurring until the fateful day they attempt to re-access that data (say months or years after it was initially written) and discover they cannot read the full file. To alleviate some (but not most) aspects of this, StableBit Scanner has a regularly scheduled (every 30 days by default) surface scan of the disk to check that all sectors are READABLE. If a sector is having trouble being read, it reports it (and possibly offers some remediation choices). How does this help? My interpretation is that it helps ensure that non-regularly-accessed data is not living on bad chunks of a storage medium for long periods without notification. It doesn't so much care about what the DATA is, but whether the underlying storage platform reports the sectors as readable. "Underlying-medium rotting", as I refer to it, seems like a common and arguably more traditionally experienced issue than what I would call next-gen "DATA bit rot".

     Next-gen "DATA bit rot"/data corruption I would loosely define as the scenario where the underlying storage medium is 100% readable: a user accessing the data today or 3 years from now would get a completely "readable" file. However, the data that is stored has been tarnished, whether it's a flipped bit causing a JPEG to look off (as described in Ars Technica's next-gen file systems article) or downright garbage data. The file is completely readable but the data stored is no longer "pure". How can we have tarnished, non-"pure" data when the underlying storage medium is 100% readable? One example: a file gets mangled during transmission to our DrivePool. There is nothing the endpoint (in this case our DrivePool) can do about this; it receives garbage in, so it saves to disk the garbage it received. To combat in-transit mangling, a user can use verification after copy (for example, TeraCopy has an option to verify after transfer and ensure simple checksums of the two files match up).

     Christopher, you mention that during duplication passes data is compared and the user is prompted on a mismatch. How about other DrivePool procedures such as balancing (are there any other procedures?)? Also, is that mismatch prompt in the DrivePool GUI or a notification (I would imagine a notification would be handy if you don't regularly "monitor" the DrivePool application)? You also mentioned the "DrivePool_VerifyAfterCopy" option in the advanced config file. First off, can we get this as a check box (I don't care if it's "hidden" in the GUI, I'd just prefer not to somehow screw up a config file)? Secondly, in what scenario is it used (i.e. during balancing? during user-initiated copies between a non-pool disk and the DrivePool?)? I don't understand at what level, or when, that verification would happen were it enabled.

     It seems that we need a form of checksumming of data, periodic rechecking of those checksums, and a report when there is an illogical mismatch. An illogical mismatch of checksums I would loosely define as a case where it is not logical for a file to have a different checksum: for example, when a file, previously checksummed, reports a different checksum at a later scan but the file metadata reports the same last-modified time. Or, vice versa, the checksum is the same but the last-modified time is different. As an example, "familyreunion.mp4" shouldn't ever report a different checksum with the same modification time; if it does, we should check it out or be notified. (A rough sketch of this kind of check appears after this list.)

     Where does StableBit come into play in regard to next-gen "DATA bit rot"/data corruption? Well, the StableBit Scanner application today seems to provide mainly one function, and that's to be a prettified scheduled surface scan (off the top of my head, chkdsk /r does something similar?) - please correct me if I am wrong. Initially my impression was that there was also a level of checksumming going on, but that does not appear to be the case. The reason I would like StableBit Scanner to do this versus a third-party utility is that Scanner would ideally have an understanding of the duplicated data, and upon detecting a checksum issue it could check the storage pool for the duplicated/triplicated counterpart and offer to restore a copy from the duplicated data, or at least report all this back to the user. If this gets implemented it will be the stopgap needed between today's file systems and the so-called next-gen file systems.

     BTW, hopefully this all comes across in good terms. I spend far too much time looking into the offerings available in the NAS space, and my career gives me access to a wide array of knowledge on the developing technologies in this space. With that being said, StableBit DrivePool has impressed me so far, furthest yet when combined with technologies like Bitlocker. I have spent several sleepless nights redoing my whole test infrastructure to blindly test StableBit's offerings. I have had several pleasant surprises along the way (Read striping? FUCK YEA. Disk performance monitor? Extremely useful feature! Remote control GUI? Extremely useful!!!). There have been a few unexpected quirks, but I intend to open a few threads to highlight them. ~v
  9. Hi, in the Scanner features it outlines file recovery:

     "File Recovery: Once a damaged file is identified by the file system aware scan, you can attempt recovery of that file.
      * File recovery supports uncompressed and unencrypted NTFS files. Partial file recovery works by reassembling what's left of the file to a known good location on another disk. An optional, full file recovery step is attempted by reading each unreadable sector multiple times, while sending the drive head through a pre-programmed set of head motion profiles. This has the effect of varying the direction and the velocity of the drive's head right before reading the unreadable sector, increasing the chance of one last successful read.
      * File recovery is not guaranteed by any means, but stands a good chance of at least partially recovering a damaged file."

     My questions: 1. My DrivePool disks are encrypted with Bitlocker and then added to the pool. Is the implied behavior that file recovery does not support repairs on Bitlocker volumes? 2. Is there any form of checksumming done by Scanner or the DrivePool software, basically a "poor man's" next-gen file system feature? 3. (How) do DrivePool and Scanner protect against so-called "bit rot" or a flipped bit, especially when duplication is enabled (in theory that would allow restoring from a known good copy, based on, say, a checksum)? Thank you, ~v
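
On the verify-after-copy idea raised in post 2: the sketch below shows, in rough terms, what such verification amounts to. It is not DrivePool's implementation (how and when DrivePool verifies is exactly the open question in the thread); the function names are made up for illustration.

```python
# Conceptual verify-after-copy (not DrivePool's code): re-read and hash both
# files after the copy and only accept the copy if the hashes match.
import hashlib
import shutil

def file_hash(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_with_verify(src: str, dst: str) -> None:
    shutil.copy2(src, dst)
    if file_hash(src) != file_hash(dst):
        raise IOError(f"verification failed: {src} and {dst} differ after copy")
```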
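
On the read-striping behaviour discussed in post 3: the toy sketch below illustrates why choosing a read source per request, based on how busy each duplicate currently is, can keep the pipe fuller than a blind 50/50 split when the workload mixes small, medium and huge files. It is not DrivePool's actual algorithm; every name in it is invented for the illustration.

```python
# Toy per-request read striping: each read of a duplicated file goes to
# whichever copy currently has the smallest backlog, instead of alternating
# blindly between copies.
from dataclasses import dataclass

@dataclass
class DiskCopy:
    name: str
    queued_bytes: int = 0          # outstanding read work on this disk

    def issue_read(self, nbytes: int) -> None:
        self.queued_bytes += nbytes

def pick_copy(copies: list[DiskCopy]) -> DiskCopy:
    """Choose the duplicate with the least outstanding work for the next read."""
    return min(copies, key=lambda c: c.queued_bytes)

if __name__ == "__main__":
    copies = [DiskCopy("disk1"), DiskCopy("disk2")]
    # Mixed workload: small photos, a medium home video, one huge Blu-ray rip.
    for size in [2_000_000, 2_000_000, 500_000_000, 2_000_000, 40_000_000_000]:
        target = pick_copy(copies)
        target.issue_read(size)
        print(f"read of {size:>14,} bytes -> {target.name}")
```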
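
On the modified-blocks question in post 6: conceptually, keeping duplicates in sync during a partial update just means applying the same offset/length write to every copy of the file. The sketch below shows that idea at the file level; it is not DrivePool internals (the real work happens inside DrivePool's file system driver, not as user-mode file writes like this), and the paths are hypothetical.

```python
# Conceptual mirrored block write: a modified block is written at the same
# offset to every duplicate, so committing a few changed blocks of a mounted
# container keeps all copies identical without rewriting the whole file.
import os

def write_block_to_all_copies(copy_paths: list[str], offset: int, data: bytes) -> None:
    for path in copy_paths:
        with open(path, "r+b") as f:   # open existing file for in-place update
            f.seek(offset)
            f.write(data)

if __name__ == "__main__":
    copies = ["poolpart1/container.vhd", "poolpart2/container.vhd"]  # hypothetical paths
    # Create small demo files so the example is runnable on its own.
    for p in copies:
        os.makedirs(os.path.dirname(p), exist_ok=True)
        with open(p, "wb") as f:
            f.write(b"\x00" * 4096)
    write_block_to_all_copies(copies, offset=1024, data=b"modified block")
```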
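
On the scan duration in post 7: a back-of-the-envelope check, assuming an average sustained read rate of roughly 110 MB/s over the whole platter (an assumed figure, not a measurement of any particular drive), puts a full surface read of a 6 TB disk at about 15 hours, so the reported time is roughly what a single sequential pass would take.

```python
# Back-of-the-envelope surface-scan time: capacity / average sustained read rate.
# 110 MB/s is an assumed whole-disk average for a large platter drive.
capacity_bytes = 6e12            # 6 TB
avg_read_rate = 110e6            # ~110 MB/s averaged over outer and inner tracks
seconds = capacity_bytes / avg_read_rate
print(f"{seconds / 3600:.1f} hours")   # ~15.2 hours
```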
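
On the "illogical mismatch" idea in post 8: the sketch below is one way such a check could look as a standalone script. It is not a StableBit feature; the baseline file name and all function names are made up. The first pass records a SHA-256 hash and the last-modified time per file; later passes flag files whose hash changed while the modification time did not.

```python
# Minimal checksum baseline + recheck sketch (not a StableBit feature).
# Pass 1 records sha256 + mtime per file; later passes flag an "illogical
# mismatch": the content hash changed but the modification time did not.
import hashlib
import json
import os
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str) -> dict:
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            state[p] = {"sha256": sha256_of(p), "mtime": os.path.getmtime(p)}
    return state

def compare(old: dict, new: dict) -> list[str]:
    suspects = []
    for path, rec in new.items():
        prev = old.get(path)
        if prev and rec["sha256"] != prev["sha256"] and rec["mtime"] == prev["mtime"]:
            suspects.append(path)   # content changed, metadata says it didn't
    return suspects

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."   # e.g. the pool drive letter
    baseline = "checksum_baseline.json"
    current = scan(root)
    if os.path.exists(baseline):
        with open(baseline) as f:
            previous = json.load(f)
        for path in compare(previous, current):
            print(f"ILLOGICAL MISMATCH (possible silent corruption): {path}")
    with open(baseline, "w") as f:
        json.dump(current, f)
```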