
Few questions about DrivePool


defcon

Question

Sorry for the generic topic; some of these are probably answered in other topics here or in the wiki.

 

  1. Does DrivePool have any policies for spin-down, or is it controlled by Windows' sleep settings for HDDs?
  2. Is there a way to get file listings without accessing the drive? E.g. in unRAID I read there's a cache_dirs plugin. It would be very useful when you are searching for files on the pool. I'm guessing something like this could be done very efficiently on Windows by hooking into the MFT, the way tools like Everything (from Voidtools) seem to do.
  3. In WHS v1 there used to be a plugin to map drive locations in your PC/drive cages in a graphical way, so that when disks fail it would be easy to find them. Does such a feature exist now? (This may be unrelated to DP, though.)
  4. If we reduce the Scanner scanning interval, can we keep the disks spun down for longer periods?
  5. And I'm sure this has been asked before - are there any plans to officially integrate SnapRAID? It would be a great addition, and I'd be willing to pay double the cost for a premium version.

 


4 answers to this question


  1. Spindown is controlled by the OS (Windows). 

    Stuff accessing the pool constantly can keep the drives spinning. 

    Also, the beta build includes a default setting that can keep the drives awake: specifically, the "BitLocker" detection setting. It can be disabled using the advanced config file. 

     

  2. Do you mean for how DrivePool gets the file list from the underlying disks? 

    If so, we actually do this already. Specifically, we rely on the NTFS caching mechanisms to do this. So once accessed, it "should" keep the directory listing in memory, reducing load time and overhead. 

    If you mean something more advanced than that, then this gets into dealing with the USN Journal and other advanced features of NTFS. This would have an additional performance overhead.

     

     

  3. No.  It's something that's been requested and is on our "to do list". 

    However, we do have a command line utility to do this: 

    http://community.covecube.com/index.php?/topic/1587-check-pool-fileparts/

    This does require build 2.2.0.659 or higher, though. 

     

    Though, what this utility is doing is hooking into the driver, which could also be done by 3rd party utilities. Alex (the developer) would need to publish an API for reference, but then anyone could do so. And publishing the API basically means that Alex will do this himself sooner or later. 

     

  4. The scanning interval isn't as much of an issue for spin down.  You will want to throttle the SMART queries. These are considered disk activity by Windows and may keep the disk awake.  Throttling it to 60 minutes or more may allow the disks to idle better, but means that there will be significant lag time for updates to SMART data. 

     

  5. Not at this time. There are a number of features we plan on adding to StableBit DrivePool before investigating integration with SnapRAID (however, there are some pending feature requests that would make integration much easier). 
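For reference, the "BitLocker" setting mentioned in answer 1 lives in the service's advanced settings file. The sketch below is from memory of the advanced-settings wiki: the key name `BitLocker_PoolPartUnlockDetect`, the file name, and the exact XML layout are assumptions that may differ between builds, so verify against the wiki before editing:

```xml
<!-- DrivePool.Service.exe.config, created by copying
     DrivePool.Service.exe.default.config in the service folder.
     Key name and layout are assumptions; check the wiki. -->
<Settings>
  <!-- Stop polling pool parts for BitLocker status; that polling
       counts as disk activity and can prevent spin-down. -->
  <Setting Name="BitLocker_PoolPartUnlockDetect" Value="False" />
</Settings>
```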

For #2, what I meant is some way to access the file listings using a 2-step mechanism that would maintain a permanent cache, not the in-memory caching done by the file system. AFAIK the NTFS MFT does exactly this. So, e.g., each time the CoveFS driver writes to disk it would also update this cache, and then listing the pool's contents could be done at any time without accessing any disk, even from a cold boot.

 

Again, it'd be similar to maintaining an index of folder contents.
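The 2-step mechanism described above can be sketched in a few lines. This is purely a hypothetical illustration of the idea (the `build_index`/`search_index` functions and the JSON cache file are my own inventions, not anything DrivePool or cache_dirs actually ships): a walker persists the full listing once, and searches are then answered from the cache without touching the disks:

```python
import json
import os
from pathlib import Path

def build_index(roots, index_path):
    """Walk every root once and persist the full file listing.

    This is the 'write' half of the 2-step mechanism: the cache is
    refreshed when content changes, so later searches never need to
    spin up the drives.
    """
    index = {}
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            # Listings from multiple roots merge, like pool parts do.
            index.setdefault(rel, []).extend(filenames)
    Path(index_path).write_text(json.dumps(index))
    return index

def search_index(index_path, needle):
    """Answer "where is this file?" from the cached listing alone;
    no access to the underlying disks is required."""
    index = json.loads(Path(index_path).read_text())
    return sorted(
        os.path.join(folder, name)
        for folder, names in index.items()
        for name in names
        if needle.lower() in name.lower()
    )
```

A real implementation would update the cache on every write (as the CoveFS driver could) rather than re-walking, but the search path is the point: it reads only the index file.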


The big part is "the MFT does exactly this", which is why we shouldn't reinvent the wheel, so to speak. 

 

However, I've flagged this as an "optimization request" for Alex, as well as an inquiry into exactly how this works currently. Since I'm not 100% certain, I'd rather ask him directly (as this is a super complicated part of our software). 

 

(That said, StableBit CloudDrive actually does "pin" the MFT data to the local cache, EXPLICITLY for this reason.) 

 

 

 

However, if we are maintaining this list of files, then we need to monitor ALL access to the disks, including outside of the pool. Since you can add content to the pool "indirectly" like this, the cached listing would not stay accurate if we didn't. Additionally, caching it like this may be a significant overhead, especially as the pool grows larger. 

 

 

https://stablebit.com/Admin/IssueAnalysis/27154


Sorry for the long delay. 

 

Let me quote Alex here, about how we generate the file list: 

 

 

Getting a list of files on the pool is a complicated process that involves querying all of the pool parts and then buffering the results in real-time. The entire directory listing is typically not buffered, but pieces are buffered as needed (unless you want it sorted, or in some other special cases).

 
Another place in DrivePool where locating files is important is during the creation of new files on the pool and opening of existing files. That's handled by querying the underlying pool parts in parallel for each Create. The underlying file system takes care of caching those requests using the system cache.
 
DrivePool doesn't keep any databases or in-memory file listings of any sort, aside from the limited temporary lists that accommodate directory enumeration. The underlying file system is essentially a highly efficient database, and it's already there.
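A rough, purely illustrative model of what that quote describes: query each pool part's copy of a directory in parallel, then merge the results so duplicated file parts collapse into one logical entry. The `list_pool_dir` function and the thread-pool approach are my own sketch, not CoveFS's actual kernel-mode implementation:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def list_pool_dir(pool_parts, rel_dir=""):
    """Enumerate one pool directory by querying every pool part
    in parallel and merging the results.

    pool_parts: paths to the hidden PoolPart folders on each disk
    rel_dir:    directory inside the pool to enumerate
    """
    def list_one(part):
        try:
            return os.listdir(os.path.join(part, rel_dir))
        except FileNotFoundError:
            return []  # this part holds no copy of the directory

    # One query per pool part, issued concurrently, as the quote says.
    with ThreadPoolExecutor(max_workers=len(pool_parts)) as pool:
        listings = list(pool.map(list_one, pool_parts))

    # The same name on several parts (duplication) is one pool file.
    merged = set()
    for names in listings:
        merged.update(names)
    return sorted(merged)
```

In the real driver the underlying file system's own cache serves repeated queries, so no separate database is needed, which is exactly the quote's point.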
 