Christopher (Drashna)


Community Answers

  1. Christopher (Drashna)'s post in Using a fast usb drive as cache drive? was marked as the answer   
    The problem isn't just "yanking out the drive".
The main problem is that USB is inherently flaky, and the specification allows it to be.  The drive can (will) periodically, spontaneously disconnect from the system, and that is a serious issue. 
    Ever had a USB drive plugged in and heard the disconnect and then reconnect chimes in rapid succession?  That's what I'm talking about.  This is rather common and actually PERMITTED by the USB specification. 
The problem is that if this happens while writing to the cache, the only thing we can do (that won't basically guarantee corruption) is to BSOD.   And BSODing every time the USB drive disconnects? That's really not good for system stability, in general. 
There is a request to add this ability anyway, but it would be an advanced setting, at best. And even then, we're still not sure it's a good idea.
    That said, if you don't mind hacky solutions, you could create a VHD file, store it on the USB drive, mount it and then use that for the cache.  I'm not even sure that would work, though. 
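If you want to experiment with that workaround, the VHD part can be scripted. This is an untested sketch, in the same spirit of "I'm not sure this would work": the path, size, and use of diskpart here are my assumptions, not anything StableBit DrivePool provides.

```python
# Hedged sketch of the VHD workaround above -- untested, and the paths/sizes
# are examples only. diskpart is driven via a temp script file passed with
# /s; the attached VHD's volume could then be picked as the cache disk.
import os
import subprocess
import tempfile

def diskpart_script(vhd_path: str, size_mb: int) -> str:
    """Build a diskpart script that creates, attaches, and formats a VHD."""
    return "\n".join([
        f'create vdisk file="{vhd_path}" maximum={size_mb} type=expandable',
        f'select vdisk file="{vhd_path}"',
        "attach vdisk",
        "create partition primary",
        "format fs=ntfs quick",
        "assign",  # let Windows pick the next free drive letter
    ])

def create_and_mount(vhd_path: str, size_mb: int) -> None:
    """Run the script through diskpart (Windows only, elevated prompt)."""
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(diskpart_script(vhd_path, size_mb))
        script = f.name
    try:
        subprocess.run(["diskpart", "/s", script], check=True)
    finally:
        os.unlink(script)

# Example usage (Windows only): create_and_mount(r"E:\cache.vhd", 65536)
```

Of course, the VHD still lives on the USB drive, so a surprise disconnect would still yank the backing store out from under it; this only changes how the volume is presented.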
  2. Christopher (Drashna)'s post in Is DrivePool abandoned software? was marked as the answer   
    That is absolutely understandable.  I wouldn't want to pay for software that is being abandoned either.
We do apologize for the extreme delay in releases here.  Unfortunately, this is heavily due to StableBit CloudDrive being significantly more complex than we anticipated (the initial public beta was build 240; we're now on build 834).   Additionally, we are a small company, and Alex is the only developer at this point.  
    So for the most part, his effort has been focused on StableBit CloudDrive.  This means that StableBit DrivePool and StableBit Scanner have suffered, in that they haven't gotten much attention (bug fixes). 
    We know that this isn't good for us or our products, and it leaves things in ... well, a mess.  
    Once StableBit CloudDrive has a stable release, things will "get better".  After this happens, Alex plans on going through all of the pending issues for both products, and then pushing a public beta and then stable release for those products. 
After that, we plan on streamlining the development and testing process so that we can have periodic "scheduled" public releases that are not dependent on Alex's workflow.  So that this never happens again. 
    And trust me, we are not happy about how things have progressed ourselves... but drastically changing this right now isn't good either, as it could significantly delay a release for StableBit CloudDrive, and it's already been in beta "too long" for us. 
Additionally, the internal beta builds are very stable, and should be safe for production use (I and many others use them that way). 
    Further, I have written a "known issues" post, that you can check to see if you want/need to upgrade to a beta build:
    Yeah, I apologize for that.  Between being swamped by the latest CloudDrive (Google Drive specific) issue, and personal issues (medical and mental stuff), I haven't checked the forums as much as I would like, or should. 
    @Spider99, not ignoring you, but I've already referenced/answered what you've said above. 
    This is something that Alex and I have talked about at length, and repeatedly.   
So it's something that is DEFINITELY on our minds.  As it should be. 
    There are a number of "solutions" here that we can implement to ensure that we remain afloat.  
  • Time/version limited licenses (as you've mentioned).
  • Releasing new products periodically to keep revenue up (we have several additional products/services planned already).
  • Releasing a subscription-based product (service), for reliable income.
  • Switching to paid support solutions.
Each one of these options has its pros and cons, and none are mutually exclusive (we could do all of the above).   These are all options that we've discussed internally, and at length.  
To be honest, neither Alex nor I are fond of the "time/version limited licenses" option, and we prefer to stick with a lifetime license. It's a much better experience for users, as it produces less confusion.  And basically, it's too late to switch our licensing scheme for existing products (or we'd have to grandfather everyone in).   
    And as I said, we have several products planned for release already. StableBit CloudDrive is technically included in that, for now. But we have several other products/services planned. Specifically, StableBit FileVault, StableBit Cloud, StableBit PowerGrid and StableBit.me.   These are all additional products/services that will fit in neatly with our existing products, and should provide additional revenue for us.  You can read a bit about these here:
    (personally, I've been pushing hard for StableBit FileVault for a while now, as it may really address the "bitrot protection" feature that many people want)
As for an ongoing revenue stream, services are, well, the best option for that. A monthly or yearly payment means a continuous revenue stream, and it also offsets the cost of running such a service.  And that is likely what StableBit Cloud would be.  It would not (may not) be self-hosted, but we hope that what it does offer would be more than worth it for those interested. 
And as for paid support, this has been the most heavily debated topic.  To put it bluntly: even though this is what would directly affect me (paid support means more money for me directly), I am very much opposed to it.  Other products (competitors) do implement this, and in some cases I can absolutely understand why.  But I do not like (rather, I hate) the idea of a paywall between customers and good service.   Good tech support is something that should be part of the intrinsic price of the product, not a hidden cost. 
    However, there are circumstances that I do agree would warrant that paywall.  Such as immediate support (within the hour), remote support to help set up things, etc. 
    So, as you can see, this is something that is definitely "on our minds". And it does come up often.  
    And this is by far not a complete list of potential actions we can take. It's just the primary ones that we've discussed. 
    @anotherforumname:  I hope this assuages your fears about our software becoming abandonware. Both from an update standpoint, and from a financial one, as well. 
    If you have any more questions, don't hesitate to ask. 
  3. Christopher (Drashna)'s post in Not all disks in the pool being used? was marked as the answer   
    sorry for the delay.
    For the SSD Optimizer, it should work fine on newer versions (I'm on the "bleeding edge" with my system, and it works fine). 
Specifically, make sure it's enabled.  Click on "Pool Options" -> Balancing.   Open the "Balancers" tab, make sure that "SSD Optimizer" is enabled, and uncheck all of the others.
Also, select this specific balancer and make sure that your SSD drives are marked as SSDs (this isn't set automatically).
  4. Christopher (Drashna)'s post in Pool moved to Second PC with a pool what happens? was marked as the answer   
    Yes and no.
Specifically, each pool has a unique ID generated for it.  If the pool IDs don't match, the disks are part of separate pools.
    So if you connect a second set of pooled disks to the system, it will create a second pool drive, automatically. 
  5. Christopher (Drashna)'s post in Remote Management not working was marked as the answer   
    Yay network discovery....
    This can happen, and it tends to be a router/switch/network config issue.
That said, you can manually specify "peers" on each device to force them to show up:
  6. Christopher (Drashna)'s post in Duplicating in use files was marked as the answer   
    Like a champ.
    That said, it sounds like you want more technical details. 
    If real time duplication is enabled, the file is written to both disks, and is locked on both disks.  This specifically bypasses any issues with file locks.  
    Now, regardless of real time duplication, modifications are done to all copies of the files, also bypassing this issue. 
Alex and I have run VMs off of the pool without issues.  I store my Outlook PSTs on my pool, as well.  So, this is a well tested feature. 
  7. Christopher (Drashna)'s post in Unable to Mount Drive was marked as the answer   
    This happens if there are too many errors.   This is specifically to prevent these errors from locking up the system (which absolutely can happen). 
    That would definitely cause the issue.
  8. Christopher (Drashna)'s post in Product Name to differentiate from Amazon Cloud Drive was marked as the answer   
    Nope, not really.  
    That and the full product name is "StableBit CloudDrive". "StableBit" isn't actually the company name. That's Covecube. 
    Also, it appears that Amazon is trying to rebrand to "Amazon Drive" rather than Amazon Cloud Drive.  So rebranding our product wouldn't really help in the long run, either.
  9. Christopher (Drashna)'s post in transfer trial license? was marked as the answer   
    Yes, absolutely.
    The only difference between the trial and retail license is that the trial ends. 
    When the trial license expires, the upload speeds are severely throttled, but you can still download at full speed, so you can move the files off of the drive. 
    So, really all you need to do here, is detach the drive, and then attach it on the new system. No issues, no problem, no hassle.
  10. Christopher (Drashna)'s post in Non-Realtime Duplication Mechanics was marked as the answer   
    If real time duplication is disabled, then it depends on the file. And what you're doing.
  • If the file is already duplicated, then any modification to the file is done in parallel on all copies.  This includes writing to the file or moving it around.
  • Newly created files are not duplicated until a duplication pass occurs (IIRC, 1 AM, daily).
  • If you're reading the file, then this is handled "normally". If read striping is enabled, then it depends on, well, more factors.
    Additionally, when duplicating the file, IIRC, we do set the modify time to be the same.
    Specifically, if there are multiple files, during a duplication pass or when accessing the file, we check the modified time. If that doesn't match, we may check the CRC of the file.  If that doesn't match, then we flag the user for a duplication mismatch... otherwise, we update the info on both files to match the newest file.
    (IIRC, I'm not 100% sure about that)
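To illustrate, here's a rough sketch of that reconciliation logic in Python. This is only my reading of the description above, not DrivePool's actual code; the CRC32 choice and the return values are illustrative.

```python
# Illustrative sketch of the reconciliation logic described above -- a
# reading of the description, NOT DrivePool's actual code.
import os
import zlib

def crc32_of(path: str) -> int:
    """Compute a running CRC32 of the file's contents."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc

def reconcile(copy_a: str, copy_b: str) -> str:
    a_mtime = os.path.getmtime(copy_a)
    b_mtime = os.path.getmtime(copy_b)
    if a_mtime == b_mtime:
        return "in sync"                    # timestamps match: nothing to do
    if crc32_of(copy_a) == crc32_of(copy_b):
        newest = max(a_mtime, b_mtime)      # same content, stamps drifted:
        os.utime(copy_a, (newest, newest))  # sync both to the newest stamp
        os.utime(copy_b, (newest, newest))
        return "timestamps synced"
    return "duplication mismatch"           # surfaced to the user
```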
    As for the Alternate data stream, I'm not sure.  However, I do believe that yes, it would get duplicated to both disks (as ADS are just a special file type, essentially). 
  11. Christopher (Drashna)'s post in Dedupe and folders in drivepool or underlying drives was marked as the answer   
    No.  It will copy the entire file to the other disk.  However, that file may get dedup-ed in the same way.
The issue here is how deduplication and our software work.  And this is part of why the beta version is "required": otherwise, you need to change an advanced option manually.
Normally, our software bypasses all file system filters when accessing the underlying data.  This is to boost performance (filters can cause serious slowdowns), and for compatibility (some filters freak out when the same file is being accessed repeatedly, and when one request isn't finished yet.... I'm looking at you, Avast). 
    This is fine, usually.  However, the deduplication feature splits the contents. It creates a special reparse point out of the original file. It leaves non-redundant data attached to this file/reparse point hybrid object, and it puts all of the duplicate data into the "System Volume Information" folder (that same one used by VSS).   
    Then when accessing files, it uses a file system filter to splice this data back together, in the right order. 
    Now, as to why this is an issue:
    Deduplication can't access the blocks of data on the Pool drive. So it just doesn't work.  StableBit DrivePool bypasses the file system filters on pooled disks, so you would only get partial (or no) data when accessing deduplicated data.   This is why you MUST disable the "bypass" option.  This way, the dedup filter can splice the data back together, properly.   
    The beta version looks for the "dedup" filter, and automatically disables this "bypass file system filter" option on the pool, to prevent this from being an issue.
    Additionally, the latest internal betas include some special handling when measuring the drive. 
    Also, when balancing or duplicating the data, it will grab the spliced together data, as well.
  12. Christopher (Drashna)'s post in Installing a new OS on same PC, transfer "scanner history" was marked as the answer   
    Well, you can mark them as scanned, manually.
    But yeah, that would be something nice to include, and is something that we are definitely thinking about. 
  13. Christopher (Drashna)'s post in Drivepool + Clouddrive integration - limiting cloud backups for files w/ 3x duplication was marked as the answer   
Unfortunately, there isn't a good way to do this right now.  

    You can set the "Drive Usage Limiter" to only have duplicated data on the CloudDrive disk.  Since it needs 2 valid disks, it WILL use this drive first, and then it will find a second drive for this as well.
    For x3 duplication, that will store one copy on the CloudDrive disk, and 2 on the local disks. 
    However, this degrades the pool condition, because the balancer settings have been "violated" (duplicated data on drives not marked for duplication). 
That said, we do plan on adding "duplication grouping" to handle this seamlessly. But there is no ETA on that.
  14. Christopher (Drashna)'s post in Moving CloudDrive to a new pc was marked as the answer   
    Detach the drives, deactivate the license, install on new system, activate the license or trial and attach the drives. 
    That's it. 
    And yes, when you go to add a drive, it will list existing drives, and even acknowledge the drive's state (eg, if attached on other systems). 
  15. Christopher (Drashna)'s post in Disk Performance Graphics was marked as the answer   
    Sorry, yes, it's just faded out, since it's already complete.
    Or to quote: 
  16. Christopher (Drashna)'s post in Encryption Question was marked as the answer   
It entirely depends on the provider.  We use a slightly different format for each provider. 
    Specifically, for certain providers (such as Google Drive, Amazon Cloud Drive, etc) we do obfuscate the data. Meaning we do some simple encryption on the data, if you don't enable full drive encryption. 
And by "simple encryption", I mean we use encryption that is very simple to "break", and each file has a NULL byte at the start. This is to prevent the provider from parsing the file and recategorizing it based on the file's header.   
For Amazon Cloud Drive, for instance, this should completely block its ability to identify data from StableBit CloudDrive. 
Otherwise, the data is raw disk data, and won't show up as files on the provider, but as chunks of raw disk data.
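As a toy illustration of the idea (not CloudDrive's actual scheme; the XOR key is an arbitrary example), a NUL marker plus trivially reversible scrambling might look like:

```python
# Toy illustration only -- NOT CloudDrive's real obfuscation. The leading
# NUL byte hides the file header from the provider's content sniffing; the
# fixed XOR key (an arbitrary example) makes the rest trivially reversible,
# which matches the "very simple to break" description.
KEY = 0x5A  # example key, chosen arbitrarily

def obfuscate(chunk: bytes) -> bytes:
    return b"\x00" + bytes(b ^ KEY for b in chunk)

def deobfuscate(blob: bytes) -> bytes:
    assert blob[:1] == b"\x00", "missing NUL marker"
    return bytes(b ^ KEY for b in blob[1:])
```

After obfuscation, a ZIP chunk no longer starts with the "PK" magic bytes, so header-based recategorization by the provider fails.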
  17. Christopher (Drashna)'s post in Advanced Settings XML question - override only single values? was marked as the answer   
    There shouldn't be any issue with removing the other settings, and only keeping what you need.  The other settings will stay at the default values. 
  18. Christopher (Drashna)'s post in Data Reads slow with DrivePool & CloudDrive was marked as the answer   
    In theory, the Read Striping feature should handle this on duplicated data. 
    Additionally, you may want to make sure that directory structures are being pinned on the drive in question (Under "Disk Options" -> "Performance" -> "Pinning"), though these should be enabled by default. 
    Additionally, adding better handling is something that has been requested, and we've discussed.  And it's definitely something that may help here. 
Though, as for the versions, I believe that Alex has changed the handling for ACD, so if you wanted to try the build, that may help out.
  19. Christopher (Drashna)'s post in Changing Cache Drive was marked as the answer   
    Yup, you need to detach the drive, and then reattach it.  While reattaching it, you can specify the cache drive.
  20. Christopher (Drashna)'s post in Disk performance not showing data was marked as the answer   
This happens occasionally; it's not a DrivePool issue, but a Windows one.
    However, the fix is rather simple: 
  21. Christopher (Drashna)'s post in Using DrivePool Drives without Software was marked as the answer   
    The files reside in a hidden "PoolPart.xxxxx" folder on each disk. 
The structure underneath this folder mirrors what the pool normally looks like.  It may not be complete, depending on which files are on that particular drive. 
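For example, a small script like this can locate the PoolPart folders for manual recovery. The default drive letters are placeholders, not anything DrivePool defines.

```python
# Sketch: locate the hidden PoolPart.* folders so files can be recovered
# without DrivePool installed. The default drive letters are placeholders.
from pathlib import Path

def find_poolparts(drives=("D:\\", "E:\\", "F:\\")):
    """Return the PoolPart.* directories found on the given drive roots."""
    parts = []
    for root in drives:
        if Path(root).exists():
            parts += [p for p in Path(root).glob("PoolPart.*") if p.is_dir()]
    return parts

# Each PoolPart folder mirrors the pool's directory tree, so copying their
# contents into one place reassembles the pool (minus any missing drives).
```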
  22. Christopher (Drashna)'s post in SSD as Cache was marked as the answer   
    Nope. But it works best if they are.
    Also, you need a number of SSDs equal to the level of duplication you're using, ideally. 
    Otherwise, data may be written to "Archive" drives instead.
  23. Christopher (Drashna)'s post in Balancing warning/error was marked as the answer   
Well, I've flagged it either way, so Alex can clean this up (e.g., handle this case better).
  24. Christopher (Drashna)'s post in Pushover returning 400 Bad Request was marked as the answer   
    It's fixed in the latest build, actually.
    Specifically, they changed their API in a way that was not backwards compatible, IIRC.
  25. Christopher (Drashna)'s post in Fixing Volume Bitmap is incorrect error? was marked as the answer   
    Do NOT run it with "/scan".
While this option is pretty awesome, it skips a number of checks, including the Volume Bitmap check.  You need to run chkdsk offline, meaning without the "/scan" flag. 
This is an issue that I ran into myself, and it tricked me (as well as Alex, initially).