
chiamarc

Members
  • Posts: 34
  • Joined
  • Last visited
  • Days Won: 3

Reputation Activity

  1. Like
    chiamarc got a reaction from vapedrib in Help me understand DrivePool and CloudDrive interaction   
    Hi Folks (especially Chris),
     
    I'm especially frustrated right now because of a dumb mistake on my part and a high likelihood of a misunderstanding of the intricacies of how, when, and why DP is balancing and duplicating to a cloud drive.  My setup is a local pool balanced across 5 hard drives with several folders duplicated x2 comprising ~4.1TB.  The local drivepool is part of a master pool that also contains a cloud drive.  The cloud drive is only allowed to contain duplicated data and currently it is storing about 1TB of duplicates from the local pool.  I only have ~5Mbps upload bandwidth and I just spent the month of October duplicating to this cloud drive.
     
    Yesterday I wanted to remove my local pool from the master pool because I was experiencing slow access to photos for some reason and I was also going to explore a different strategy of just backing up to the cloud drive instead (which allows for versioning).  Well, I accidentally removed my cloud drive from the pool.  At the time, CD still had about 125G to upload, so I assume that was in the write cache because DP was showing optimal balance and duplication.  When the drive was removed, of course, those writes were no longer necessary and were removed from CloudDrive's queue.
     
    OK, I didn't panic, but I wanted to make sure that the time I just spent using my last courtesy month of bandwidth over 1TB was not wasted.  So I added the cloud drive back into the master pool, expecting DP to do a scan and reissue the necessary write requests to duplicate the as yet unduplicated 125G.  But lo and behold, after balancing/duplication was complete in DP, I look at the CD queue and I see 536G left "to upload"!  All I can say at this point is WTF?  There was very little intervening time between when I removed the cloud drive and re-added it and almost nothing changed in the duplicated directories.
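    For a sense of scale, a back-of-the-envelope estimate of what that remaining queue means at a ~5 Mbps uplink, using only the numbers quoted above (illustrative arithmetic, not a figure reported by CloudDrive):
     
        # Rough estimate only: time to push 536 GB over a ~5 Mbps uplink.
        upload_mbps = 5
        to_upload_gb = 536
        mb_per_s = upload_mbps / 8                  # ~0.625 MB/s
        seconds = to_upload_gb * 1024 / mb_per_s    # GB -> MB
        print(f"~{seconds / 86400:.1f} days of continuous uploading")   # roughly 10 days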
     
    Can someone please explain or at least theorize?  I own DrivePool but I've been testing CloudDrive for a while now for this very reason.  I needed to assess its performance and functionality and so far it's been a very mixed bag, partly because it's relatively inscrutable.
     
    Thanks,
    Marc
     
  2. Like
    chiamarc got a reaction from Burken in How the StableBit CloudDrive Cache works   
    I really don't understand why the "to upload" state becomes indeterminate for the entire write cache.  Shouldn't it only have to re-upload chunks that it didn't record as being completed?  Why is a chunk not treated akin to a block in a journaling filesystem?  Of course I understand that if chunks are 100MB in size, it could still take some time to write them, but no way should the entire cache be invalidated upon a crash.
     
    This is especially important for me right now because my system has been locking up fairly often, forcing a hard reset (plus the occasional BSOD).  A 200G cache on a 10TB drive (100/5 Mbps down/up) always takes 45+ minutes to recover.
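    A minimal sketch of the per-chunk journaling idea suggested above (purely illustrative, with a hypothetical journal file; this is not how CloudDrive is actually implemented): record each chunk's upload state durably, so that after a crash only chunks never marked complete need to be re-uploaded.
     
        # Illustrative sketch only -- not StableBit CloudDrive's actual design.
        # Persist per-chunk upload state so a crash only invalidates chunks
        # that were still PENDING, not the entire write cache.
        import json, os, tempfile
     
        JOURNAL = "upload_journal.json"    # hypothetical journal file
     
        def load_journal():
            if os.path.exists(JOURNAL):
                with open(JOURNAL) as f:
                    return json.load(f)
            return {}                       # chunk_id -> "PENDING" | "UPLOADED"
     
        def save_journal(journal):
            # Write-then-rename so the journal itself survives a crash.
            fd, tmp = tempfile.mkstemp(dir=".")
            with os.fdopen(fd, "w") as f:
                json.dump(journal, f)
            os.replace(tmp, JOURNAL)
     
        def upload_pending(chunks, upload):
            journal = load_journal()
            for chunk_id, data in chunks.items():
                if journal.get(chunk_id) == "UPLOADED":
                    continue                # already durable in the cloud; skip after a crash
                journal[chunk_id] = "PENDING"
                save_journal(journal)
                upload(chunk_id, data)      # provider upload call (assumed atomic per chunk)
                journal[chunk_id] = "UPLOADED"
                save_journal(journal)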
  3. Like
    chiamarc reacted to marquis6461 in Files deleting themselves..   
    darkly
    Are you using Sonarr with PLEX?
    If you are, go into Sonarr and assign a Recycling Bin in Settings (with Advanced Settings shown), at the bottom.
    I had that problem also.  I was confused and thought the same thing.  Sometimes Sonarr was bad about that, replacing a good file with a bad one.
    If not: I have used CloudDrive with DrivePool, missed a setting, and had my files uploaded to the cloud.  When I disconnected, I panicked when Sonarr and Plex found holes showing the files as unavailable.  It was because DrivePool was set up wrong.
    You have to ensure in the Balancing area that the CloudDrive will only store duplicated files.  You must (MUST) uncheck the "Unduplicated" option for the cloud drive if you want Sonarr and Plex to behave....
  4. Like
    chiamarc got a reaction from vapedrib in Limit usage on an individual physical drive?   
    Say I have several disks in my pool and I want to reserve extra space for "other" data on one or more individual disks.  That's to say, I don't want Disk x to use more than a certain percentage or byte threshold to store pool data.  Is there a way to do this short of splitting the drive into multiple partitions?
  5. Like
    chiamarc got a reaction from vapedrib in True Bandwidth   
    Hi Guys,
     
    Thanks for an absolutely wonderful product.  I was just wondering about the following.  Comcast limits my data usage to 1 TiB per month (at $10 per 50 GiB beyond that).  Since many cloud providers do not allow incremental chunk updates (including the one I'm using, Box), CD has to download chunks, change them, and re-upload them, depending on the write workload.  So while the "To Upload" size measurement is accurate, the tooltip that estimates how long it will take to drain the upload queue is probably off by quite a bit, especially if one is changing files frequently.  Further, the total bandwidth used over a given period of time is not really reflected anywhere.
     
    There are tools that allow me to measure (out-of-band) all CD traffic, but it would be nice to know how much data was actually read/written in order to empty the upload queue, or for that matter, to perform any set of operations.  This would help with my bandwidth management (knowing whether I need to limit upload/download speed in CD at a given point in the month, or ideally, doing it automatically when I reach a certain threshold).  I guess this is a request for enhancement, but I'm not sure how many other people have a similar need.
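    A rough illustration of the read-modify-write amplification described above, using assumed numbers (100 MB chunks and a provider that only accepts whole-chunk uploads); this is not a measurement of CloudDrive's actual traffic accounting:
     
        # Rough illustration with assumed numbers -- not CloudDrive's internals.
        CHUNK_SIZE_MB = 100                    # assumed chunk size
        changed_files_mb = [1, 5, 0.5, 20]     # hypothetical small edits, each in a different chunk
     
        logical_mb = sum(changed_files_mb)
        # With no incremental updates, every touched chunk is downloaded and re-uploaded whole.
        download_mb = CHUNK_SIZE_MB * len(changed_files_mb)
        upload_mb   = CHUNK_SIZE_MB * len(changed_files_mb)
     
        print(f"logical change: {logical_mb} MB")
        print(f"actual transfer: {download_mb} MB down + {upload_mb} MB up "
              f"({(download_mb + upload_mb) / logical_mb:.0f}x amplification)")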
     
    Thanks,
    Marc
     
  6. Like
    chiamarc got a reaction from vapedrib in Unable to assign drive letter   
    Using CD 1.0.2.936 Beta, I'm unable to assign a drive letter from within the GUI.
     
    Clicking on Manage Drive -> Drive Letter... -> Assign (as shown here):
     

     
    Results in a null assignment error:
     

     
    Please advise.
     
    Also, where can I check if a similar bug has already been reported?
  7. Like
    chiamarc reacted to Christopher (Drashna) in True Bandwidth   
    This is stuff we plan on adding in the future.  It didn't make it into the current build, and implementing it isn't going to be easy (which drive has priority, etc.). 
     
    That said, no matter how this is implemented, it won't line up with Comcast's measurements.  They're known to lie about this (I've seen reports of 400GB of usage when EVERYTHING was disconnected from the modem, so ....) 
  8. Like
    chiamarc reacted to Christopher (Drashna) in I have a couple of problems...   
    Wow, you're not making this easy for us, are you? 

    And to be blunt, you're coming off as VERY hostile, especially the further I get into the post.  Most of this information is actually documented in our manual, can be found rather easily on our forums, or can simply be asked about (as here).
     
    http://stablebit.com/Support/DrivePool/2.X/Manual
     
    http://community.covecube.com/index.php?/forum/20-nuts-bolts/
     
    http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings
     
    Regardless, let me address each and every one of these questions/concerns/feedback. 
     
    Well, here we go.
     
    Yes, correct.  There are a number of reasons for this, such as disk load.  However, if you're not happy with that, then please grab the "Ordered File Placement" balancer plugin.  This will fill up one (or more, if using duplication) disks at a time, filling them sequentially (in an order that you can specify). 
    https://stablebit.com/DrivePool/Plugins

    This balancer will also help prevent folder splitting, but it isn't a guarantee.  But this is a topic that has come up a LOT, and we've discussed it a lot internally.  I have been pushing Alex (the Developer) for this functionality, as it IS highly desirable for specific setups. 
      For the file locations, ... yeah.  NTFS keeps track of this and generally does a great job. Except when it doesn't.   If duplication is enabled, then this isn't even an issue, as you just "fix" the drive (replace it, format it, etc), and you're set.   But otherwise, yeah, it can be a PITA.  That said, we do have plans on addressing this issue in the future. 

    Part of the reason for this is that maintaining a database is "expensive" ... and redundant (NTFS *is* a database).  And there are other issues/complications with this. 
      The empty folders may not actually be empty.  And no, the software doesn't remove "empty folders", for a number of reasons, such as duplication information being "tagged" on the folder (in the form of alternate data streams).  Additionally, from a programmatic standpoint, there is no reason to delete a folder here: the amount of saved space is so small that it's not worth the hassle it can cause. 
      See above (#1).   That said, doing so would require a complete rewrite from the ground up.   And introduce a lot more "knobs and dials" that may not be necessary for 99.9999% of people.   But implementing something to keep folders together is a lot simpler, and would not necessarily require a complete rewrite of the code. 
      If you do want to use the pool and keep it "orderly", you can use the File Placement rules and/or the "Ordered File Placement"/"Drive Usage Limiter" balancers to group up the files and make this less "nightmarish". 
      I'm not entirely sure what you're getting at here. 
    As for the "unusable for duplication" space, this is a complicated calculation, based on the number of disks, the size of the disks, the amount of used and free space on the disks, etc.  The "Duplication Space Optimizer" balancer will attempt to rebalance the data in such a way to optimize this and reduce the "unusable" space.  

    However, the balancing is done at a low IO priority, as to prevent performance issues with the pool.  You can temporarily boost the priority by clicking the ">>" button by the condition bar at the bottom, or there is an advanced config file that can be used to permanently boost the priority. 
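    A simplified illustration of why some free space can end up "unusable for duplication" with x2 duplication, as mentioned above (a toy model only; the real calculation factors in more than this): each duplicated pair must land on two different disks, so any free space on the largest disk beyond the combined free space of the other disks cannot hold a second copy.
     
        # Toy model only -- not DrivePool's actual calculation.
        def unusable_for_x2_duplication(free_space):
            """free_space: list of free capacity per pooled disk (any consistent unit)."""
            total = sum(free_space)
            largest = max(free_space)
            # Each duplicated pair needs two different disks, so the largest disk
            # can only be paired against everything else combined.
            excess = largest - (total - largest)
            return max(0, excess)
     
        # Example: 4 TB free on one disk, 1 TB on each of two others -> 2 TB unusable.
        print(unusable_for_x2_duplication([4000, 1000, 1000]))   # prints 2000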

    As for multiple drives failing at the same time, yes, you can end up in the same situation.  As for "drive quality these days" ....  We are firmly "post flood manufacturing".  So failure rates have dropped significantly.  Many sources can confirm this.  But if you have proof otherwise, please do link it here. 
    That said, the StableBit Scanner balancer can help with this by evacuating the contents of problem drives, hopefully before the disk fails.   So that should minimize the impact.  
      See above. 
      Here you go:
    http://community.covecube.com/index.php?/topic/52-faq-parity-and-duplication-and-drivepool/

    And at this point, there are absolutely no plans to introduce parity into the software.  Parity, by its nature, is expensive in terms of resource cost, especially if you're performing realtime parity protection.    

    And yes, duplication is more expensive.  But to be blunt, this author puts the issue succinctly: 
    http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

    Otherwise, you can absolutely do what you want with the StableBit DrivePool, the SSD Optimizer, and SnapRAID.  
      Just because it doesn't fit your needs isn't a reason to get nasty.   For a vast majority of people, the balancing works just fine for their needs.    

    That said, it is worth mentioning that the log files actually go over SPECIFICALLY why files are being moved, and what balancer triggered it.

    That said, we do plan on improving the feedback for StableBit DrivePool, to make things like this more transparent.

    Additionally, in most cases, balancing isn't going to happen unless you have a balancer installed.  The built-in balancers are generally "edge case" options that deal with very specific situations. 

    In fact, you may want/need to read this: 
    http://community.covecube.com/index.php?/topic/2954-understanding-file-placement/&do=findComment&comment=20223
      See previous.
      Not a bad idea.  
    https://stablebit.com/Admin/IssueAnalysis/27546
      Exists already.  Will pop up next to the "Pool condition bar" when the placement is "out of whack". 
    http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Pool%20Organization%20Bar

    http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing

      Can be controlled by the balancing settings, as above.

    As for "what drives to balance", .... That really doesn't make sense given the context of how the pool works, and how the balancing works.   
    And again, for the most part, most of the balancers are not going to be doing much.  They're going to sit there until the pool hits specific configurations.  
      It can.   The order in which the balancers are listed in the UI is the order of priority. 
    As for IO priority, you can.  When balancing, there is a ">>" button that you can click to temporarily boost the priority.  

    Additionally, the advanced config file allows you to permanently increase this, if you want. 

    As for queuing, again, set the order in the balancer's tab.   Otherwise, this becomes complicated and problematic, meaning ... we try to make sure that there is no situation where data is balanced back and forth continuously.  Because that's a great way to cause issues. 
      Terminology could be better.  But yes, the "SSD" drive is a write cache. 
    https://stablebit.com/Admin/IssueAnalysis/27547
      Because multiple balancers are checking for different things.   
    However, a "master free space limit" or some such may not be a bad idea, assuming it's not incredibly complex to implement. 

    However, certain balancers may not be able to use this, even if we do implement it, just because of how they work. 
    https://stablebit.com/Admin/IssueAnalysis/27548
      Just thinking about that .... would be incredibly complex.  However, I've bugged it as a feature request, just in case. 
    https://stablebit.com/Admin/IssueAnalysis/27549

    That said, you can use File Placement rules to do this.  Such as "\*.nfo", "\*.jpg", "\*.xml", etc to force placement of these files onto a specific drive.
    http://stablebit.com/Support/DrivePool/2.X/Manual?Section=File%20Placement
      The real irony is that most of the feedback we get is "simple and elegant".  
    But bugged.  
    https://stablebit.com/Admin/IssueAnalysis/27550
      See above.
    Bugged.

    That said, there are some additional elements, such as the balancing targets and the real-time placement limiters, that do show up on the bar. 
    http://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks%20List
      Overtly hostile.  
    That said, adding more and more files here can (does) cause a performance hit. I'm pretty sure that this is explicitly why there is a limit.  However, bugged. 
      No response here.  Design aesthetics are really a "per person" thing. 

    However, we do have a planned "overhaul" of the UI, so this may change. 
      The remote control not showing all devices on the network is a known issue, but not a software specific issue.  It's a network issue.  However, we have always supported manually adding peers to the remote control.  And IIRC, we do plan on significantly improving the remote control functionality, to make this easier.
    http://wiki.covecube.com/StableBit_DrivePool_2.x_Advanced_Settings#RemoteControl.xml

      We don't use Telerik in StableBit DrivePool.  At all.  Period.  StableBit Scanner does, and you can even see the "Telerik" dll in the application directory.   And we don't use it because ... Telerik sucks.  We've run into that with StableBit Scanner, but Alex hasn't wanted to redesign the Scanner UI from the ground up yet. 

    That said, the select/deselect option should be there. Bugged. 
    https://stablebit.com/Admin/IssueAnalysis/27552
      Folders are for easy creation. Rules are for more complex creations.  This is actually covered in the manual.
    http://stablebit.com/Support/DrivePool/2.X/Manual?Section=File%20Placement
      Already bugged (#23)
      Overtly hostile.  Skimmed, so if I missed something, you know why.

    That said, the measured data is stored and updated.  The UI element you're talking about is updated in realtime, so yeah, it checks.  I don't remember the reason for this (but there was one).  
      Overtly hostile opinion.  

    Also see above. 
      You mean the drive list in File Placement Rules? 
      StableBit DrivePool isn't meant to be an indexing program.  If it were, it would be more conducive to this, I'm sure.
    That said, is this just bitching, or was there a specific complaint here that I could address? 
      You covered this already, above.  
      Planned, actually. 
      ..... I don't even know where to start. You're accusing us of using Telerik on one hand, and then WPF on the other....

      Pretty sure it is.   .....
      Sure?   We should do exactly the same thing as FreeNAS and unRAID because they are.  
    I'm not really sure why there is a need for this.  Also, it's an additional point of failure here.   But it's already requested above. 
      Again, we're not using a single Telerik control in StableBit DrivePool.    
      Not sure how useful it would really be. 
      C:\ProgramData\StableBit DrivePool\Service\Logs\Service
      Probably an info update issue. 
    What version of StableBit DrivePool are you using? 
      This has not only been addressed, but it's the topmost pinned thread in this sub-forum.
    http://community.covecube.com/index.php?/topic/2620-is-drivepool-abandoned-software/

    We're a small team, and StableBit CloudDrive took a lot of our focus, and took a LOT longer than expected.   We do have plans on preventing this from happening in the future. 

    But unless you're holding out on a time machine, there's not much we can do about that now.  That said, we are actively going through issues and do plan on having a stable release in the near future.  
    Also, if you do have a time machine, awesome!  (go back and post some of the stuff 2 years ago, rather than letting these issues pile up and NOT contacting us for 2+ years). 
      It's a pending feature request. 
      We plan on overhauling the StableBit Scanner UI, IIRC.  And address a lot of the issues it has. 
      It does this per controller.  So a single USB enclosure would be recognized as that "single case".   
      We do already do the scanning "Per controller".  
      See #41.  It's a known issue, and one we plan on addressing. 
      Are you on the latest beta build? If not, a lot of that has been addressed/fixed. 

    If not, then .... see #41.  
       
     
     
    That said, I double checked.... you have not ONCE contacted us in the 2+ years that these issues have apparently stewed.  Not once.  
     
    Some of these are absolutely legitimate complaints, and we would have gladly addressed them, either in private (via https://stablebit.com/Contact) or here, publicly.   
     
    We want to help.  But we CANNOT if you don't let us.  We may not know of an issue if you never bother telling us.  Which really appears to be the case.  
     
     
    You came here to rant at us, without ... well, doing any research or contacting us.  Because this is pretty clearly a rant, not a legitimate grievance. 
     
     
    Even still, we want to help you out, and address many (most) of these issues.  We really do.  
     
     
     
    But if you're this upset with us, I'm not sure how well that can progress.  It's next to impossible to help an overtly hostile person.  
     
    And absolutely worst case, open a ticket and request a refund. We'll do it.  We'd rather have you happy, by issuing a refund, than responding angrily, or by being assholes.    Because that's not cool. At all. 
     
     
     
    Regards
    Christopher Courtney.
    Director of Customer Relations for Covecube Inc. 
  9. Like
    chiamarc got a reaction from Christopher (Drashna) in Files deleting themselves..   
    You could also try running Procmon for a while to capture which processes are deleting things.  The only events you need to capture are filesystem events; also make sure to check "Drop filtered events" under the Filter menu.  Then, after running for some time, stop capturing (or continue, it's up to you) and search for "Delete: True".  The first entry you find should be the result of a SetDispositionInformationFile operation.  Right-click on that cell in the Detail column and select "Include 'Delete: True'".  This will filter to every deletion event.  Search the Path column for a file you didn't expect to be deleted; the Process Name column will show which process set that file for deletion.
     
    If you have no idea how to use Process Monitor, there are plenty of quick tutorials on the web.  Good luck.
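    If the capture gets large, the same "Delete: True" search can also be scripted: save the Procmon trace as CSV (File -> Save..., CSV format) and filter it with a few lines of Python.  The column names below assume Procmon's default columns; adjust them if your copy shows different headers.
     
        # Lists which process flagged each file for deletion in a Procmon CSV export.
        # Assumes default Procmon columns: "Process Name", "Operation", "Path", "Detail".
        import csv
     
        with open("procmon.csv", newline="", encoding="utf-8-sig") as f:
            for row in csv.DictReader(f):
                if (row.get("Operation") == "SetDispositionInformationFile"
                        and "Delete: True" in row.get("Detail", "")):
                    print(row["Process Name"], "->", row["Path"])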
  10. Like
    chiamarc got a reaction from Christopher (Drashna) in Drivepool duplicating to only Clouddrive..   
    Brilliant!  Tested and satisfied!  This gets me *much* closer to my goal and I can now drop files into E: and be sure that they are duplicated locally and into the cloud!  This will suffice for the time being as I can always keep important stuff on E: and not so important stuff on D:.
     
    Thanks again for being so quick to respond (and change code)!