
Alex

Administrators

  • Posts: 253
  • Days Won: 49

Reputation Activity

  1. Like
    Alex got a reaction from Talyrius in Amazon Cloud Drive - Why is it not supported?   
    Me too, that's the way I read it initially. But the difference in allowed bandwidth would be huge if I'm wrong, so I really want to avoid any potential misunderstandings.
     
    As for larger chunks, it's possible to utilize them, but keep in mind that the chunk size controls the latency of read I/O. If you try to read anything on the cloud drive that's not stored locally, that's an automatic download of at least the chunk size before the read can complete. And we have to keep read I/O responsive in order for the drive's performance to be acceptable.
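    To put rough numbers on this, the minimum time to service an uncached read is roughly the chunk size divided by your downstream bandwidth. Here's a quick illustrative calculation in Python (the bandwidth figures are made up for the example, not from our testing):

        def min_read_latency(chunk_mb: float, downstream_mbit: float) -> float:
            # Seconds to fetch one chunk: size in bits / bandwidth in bits per second.
            return (chunk_mb * 8) / downstream_mbit

        print(min_read_latency(1, 20))   # 0.4 -> a 1 MB chunk on a 20 Mbit/s line
        print(min_read_latency(10, 20))  # 4.0 -> a 10 MB chunk makes every uncached read ~4 s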
     
    In my testing, a 1 MB limit seemed like a good starting point in order to support a wide variety of bandwidths.
     
    But yeah, if you're utilizing something like Amazon S3 (which I assume has no such limits) and you have a Gigabit connection, you should be able to use much larger chunks in theory. I'll explore this option in future releases.
  2. Like
    Alex got a reaction from Antoineki in Amazon Cloud Drive - Why is it not supported?   
    Some people have mentioned that we're not transparent enough with regards to what's going on with Amazon Cloud Drive support. That was definitely not the intention, and if that's how you guys feel, then I'm sorry about that and I want to change that. We really have nothing to gain by keeping any of this secret.
     
    Here, you will find a timeline of exactly what's happening with our ongoing communications with the Amazon Cloud Drive team, and I will keep this thread up to date as things develop.
     
    Timeline:
    • May 28th: The first public BETA of StableBit CloudDrive was released without Amazon Cloud Drive production access enabled. At the time, I thought that the semi-automated whitelisting process that we went through was "Production Access". While this is similar to other providers, like Dropbox, it became apparent that for Amazon Cloud Drive, it's not. Upon closer reading of their documentation, it appears that the whitelisting process actually imposed "Developer Access" on us. To get upgraded to "Production Access", Amazon Cloud Drive requires direct email communication with the Amazon Cloud Drive team.
    • Aug. 15th: We originally submitted our application for production access approval over email.
    • Aug. 24th: I wrote in requesting a status update, because no one had replied to me, so I had no idea whether the email had even been read.
    • Aug. 27th: I finally got an email back letting me know that our application was approved for production use. I was pleasantly surprised.
    • Sept. 1st: After some testing, I wrote another email to Amazon letting them know that we were having issues with Amazon not respecting the Content-Type of uploads, and also with the new permission scopes that had changed since we initially submitted our application for approval. No one answered this particular email until...
    • Sept. 7th: I received a panicked email from Amazon addressed to me (with CCs to other Amazon addresses) letting me know that Amazon was seeing unusual call patterns from one of our users.
    • Sept. 11th: I replied explaining that we do not keep track of what our customers are doing, and that our software can scale very well as long as the server permits it and the network bandwidth is sufficient. Our software does respect 429 throttling responses from the server, and it does perform exponential backoff, as is standard practice in such cases. Nevertheless, I offered to limit the number of threads that we use, or to apply any other limits that Amazon deems necessary on the client side. I highlighted on this page the key question that really needs to be answered. Also, please note that Amazon obviously knows who this user is, since they have to be an Amazon customer in order to be logged into Amazon Cloud Drive. And note that instead of banning or throttling that particular customer, Amazon chose to block the entire user base of our application.
    • Sept. 16th: I received a response from Amazon.
    • Sept. 21st: I hadn't heard anything back yet, so I sent them this.
    • Oct. 29th: I had waited until this point and no one answered, so I informed them that we're going ahead with the next public BETA regardless.
    • Early November: An Amazon employee on the Amazon forums started claiming that we're not answering our emails, amidst their users asking why Amazon Cloud Drive is not supported with StableBit CloudDrive. So I posted a reply to that, listing a similar timeline.
    • Nov. 11th: I sent another email to them.
    • Nov. 13th: Amazon finally replied. I redacted the limits; I don't know if they want those public.
    • Nov. 19th: I sent them another email asking to clarify what exactly the limits mean.
    • Nov. 25th: Amazon replied with some questions and concerns regarding the number of API calls per second.
    • Dec. 2nd: I replied with the answers and a new build that implemented large chunk support. This was a fairly complicated change to our code, designed to minimize the number of calls per second for large file uploads. You can read my Nuts & Bolts post on large chunk support here: http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/
    • Dec. 10th, 2015: Amazon replied. I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme: some number of bytes go in, and the same exact number of bytes come out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (about 0.006 % overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it up to capacity, and then checking how much data is used on the server.
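    That overhead figure is just arithmetic, and it's easy to sanity-check yourself:

        # HMAC-SHA512 adds a 64-byte authentication tag per 1 MB unit of data.
        tag_bytes = 64
        unit_bytes = 1024 * 1024

        print(f"{tag_bytes / unit_bytes * 100:.4f} %")  # 0.0061 %, i.e. the ~0.006 % mentioned above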

    Here's the data for a 5 GB encrypted drive:

    (Screenshot: the provider's usage report for the drive.)

    Yes, it's 5 GBs.
     
    To clarify the error issue here (I'm not 100% certain about this): Amazon doesn't provide a good way to ID these files. We have to try to upload them again, and then grab the error ID to get the actual file ID so we can update the file. This is inefficient, and would be solved by a more robust API that included search functionality, or a better file list call. So, this is basically "by design" and currently required. -Christopher

    Unfortunately, we haven't pursued this with Amazon recently, due to a number of big bugs that we have been following up on. However, those bugs have led to a lot of performance, stability and reliability fixes, and a lot of users have reported that these fixes have significantly improved the Amazon Cloud Drive provider. That is great to hear, as it may help get the provider to a stable/reliable state.

    That said, once we get to a more stable state (after the next public beta build (after 1.0.0.463) or a stable/RC release), we do plan on pursuing this again. But in the meantime, we have held off, as we want to focus on the entire product rather than a single, problematic provider. -Christopher

    Amazon has "snuck in" additional guidelines that don't bode well for us:
    https://developer.amazon.com/public/apis/experience/cloud-drive/content/developer-guide
    "Don't build apps that encrypt customer data"
    What does this mean for us? We have no idea right now.  Hopefully, this is a guideline and not a hard rule (other apps allow encryption, so that's hopeful, at least). 
     
    But if we don't get re-approved, we'll deal with that when the time comes (though, we will push hard to get approval).
     
    - Christopher (Jan. 15, 2017)
     
    If you haven't seen already, we've released a "gold" version of StableBit CloudDrive, meaning that we have an official release!
    Unfortunately, because of increasing issues with Amazon Cloud Drive that appear to be ENTIRELY server side (drive issues due to "429" errors, odd outages, etc.), and because we are STILL not approved for production status (despite sending off additional emails a month ago requesting approval, or at least an update), we have dropped support for Amazon Cloud Drive.

    This does not impact existing users, as you will still be able to mount and use your existing drives. However, we have blocked the ability to create new drives for Amazon Cloud Drive.   
     
    This was not a decision that we made lightly, and while we don't regret this decision, we are saddened by it. We would have loved to come to some sort of outcome that included keeping full support for Amazon Cloud Drive. 
    -Christopher (May 17, 2017)
  3. Like
    Alex got a reaction from Christopher (Drashna) in +1 for GoogleDrive for Work support   
    Yeah, it's basically just a flip of a switch if this works. All providers use a common infrastructure for chunking.
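    To illustrate what I mean by a common infrastructure, here's a hypothetical sketch (not our actual code; the names are made up) of the kind of chunk-level interface every provider plugs into. Everything above it (chunking, encryption, caching) is shared, which is why enabling a new provider is mostly a matter of registering one more implementation:

        from abc import ABC, abstractmethod

        class ChunkProvider(ABC):
            # Hypothetical common interface that each storage provider implements.
            @abstractmethod
            def put_chunk(self, chunk_id: str, data: bytes) -> None: ...

            @abstractmethod
            def get_chunk(self, chunk_id: str) -> bytes: ...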
  4. Like
    Alex got a reaction from Christopher (Drashna) in Mount Existing Files   
    The reason I chose this architecture for StableBit CloudDrive is that I think the #1 feature of StableBit CloudDrive is full drive encryption. You set up your own encryption key; no one else knows it. We don't know it. Your cloud providers don't know it. This gives you absolute control over your data's security.
     
    Note that even if your cloud provider says that they're encrypting your drive, control over that encryption is not in your hands, so essentially you really don't know what's going on with it. Perhaps some rogue employee has access to those keys, etc...
     
    Given that type of approach, it made sense to store your data in opaque chunks.
     
    Letting you access your existing files in the cloud through a drive letter would require a totally different architecture for StableBit CloudDrive (and frankly, other applications have already done this). I think we are relatively unique in our approach (at least, I don't know of any other products in our price range that do what we do).
     
    I guess the way that I see StableBit CloudDrive is really TrueCrypt for the cloud (although, admittedly, we're not open source). The idea is that you create an opaque container, store it in the cloud, and have confidence that it's secure. Or you can do the same locally, but the architecture is optimized for the cloud.
  5. Like
    Alex got a reaction from Christopher (Drashna) in Dropbox permanent re-authorize   
    In other news, this was the last major bug that I have on my radar for the StableBit CloudDrive BETA (i.e. in our bug tracking system).
     
    So I'm going to be performing any final testing and prepping for a 1.0 release final.
     
    If anyone does find anything major beyond this point, please let us know (https://stablebit.com/Contact).
     
    Thanks,
  6. Like
    Alex got a reaction from Christopher (Drashna) in Amazon Cloud Drive, near future.?   
    Thank you for the sentiment.
     
    What I was thinking at the time, and I mentioned this to Christopher... At the heart of the problem is that our product scales... it scales really well.
  7. Like
    Alex got a reaction from Shane in check-pool-fileparts   
    If you're not familiar with dpcmd.exe, it's the command line interface to StableBit DrivePool's low level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line.
     
    If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift from the Start menu), and type in dpcmd to get some usage information.
     
    Previously, I didn't recommend that people mess with this command because it wasn't really meant for public consumption. But the latest internal build of StableBit DrivePool, 2.2.0.659, includes a completely rewritten dpcmd.exe which now has some more useful functions for more advanced users of StableBit DrivePool, and I'd like to talk about some of these here.
     
    Let's start with the new check-pool-fileparts command.
     
    This command can be used to:
    • Check the duplication consistency of every file on the pool and show you any inconsistencies.
    • Report any inconsistencies found to StableBit DrivePool for corrective action.
    • Generate detailed audit logs, including the exact locations where each file part of each file is stored on the pool.
    Now let's see how this all works. The new dpcmd.exe includes detailed usage notes and examples for some of the more complicated commands like this one.
     
    To get help on this command type: dpcmd check-pool-fileparts
     
    Here's what you will get:
     
    dpcmd - StableBit DrivePool command line interface

    Version 2.2.0.659

    The command 'check-pool-fileparts' requires at least 1 parameters.

    Usage:

      dpcmd check-pool-fileparts [parameter1 [parameter2 ...]]

    Command:

      check-pool-fileparts - Checks the file parts stored on the pool for consistency.

    Parameters:

      poolPath    - A path to a directory or a file on the pool.
      detailLevel - Detail level to output (0 to 4). (optional)
      isRecursive - Is this a recursive listing? (TRUE / false) (optional)

    Detail levels:

      0 - Summary
      1 - Also show directory duplication status
      2 - Also show inconsistent file duplication details, if any (default)
      3 - Also show all file duplication details
      4 - Also show all file part details

    Examples:

      - Perform a duplication check over the entire pool, show any inconsistencies, and inform StableBit DrivePool:
        > dpcmd check-pool-fileparts P:\

      - Perform a full duplication check and output all file details to a log file:
        > dpcmd check-pool-fileparts P:\ 3 > Check-Pool-FileParts.log

      - Perform a full duplication check and just show a summary:
        > dpcmd check-pool-fileparts P:\ 0

      - Perform a check on a specific directory and its sub-directories:
        > dpcmd check-pool-fileparts P:\MyFolder

      - Perform a check on a specific directory and NOT its sub-directories:
        > dpcmd check-pool-fileparts "P:\MyFolder\Specific Folder To Check" 2 false

      - Perform a check on one specific file:
        > dpcmd check-pool-fileparts "P:\MyFolder\File To Check.exe"

    The above help text includes some concrete examples of how to use this command in various scenarios. To perform a basic check of an entire pool and get a summary back, you would simply type:
    dpcmd check-pool-fileparts P:\
     
    This will scan your entire pool and make sure that the correct number of file parts exist for each file. At the end of the scan you will get a summary:
    Scanning...

    ! Error: Can't get duplication information for '\\?\p:\System Volume Information\storageconfiguration.xml'. Access is denied.

    Summary:

      Directories: 3,758
      Files: 47,507   3.71 TB (4,077,933,565,417 B)
      File parts: 48,240   3.83 TB (4,214,331,221,046 B)

      * Inconsistent directories: 0
      * Inconsistent files: 0
      * Missing file parts: 0   0 B (0 B)

      ! Error reading directories: 0
      ! Error reading files: 1

    Any inconsistent files will be reported here, and any scan errors will be as well. For example, in this case I can't scan the System Volume Information folder because, as an Administrator, I don't have the proper access to do that (LOCAL SYSTEM does).
     
    Another great use for this command is actually something that has been requested often, and that is the ability to generate audit logs. People want to be absolutely sure that each file on their pool is properly duplicated, and they want to know exactly where it's stored. This is where the maximum detail level of this command comes in handy:
    dpcmd check-pool-fileparts P:\ 4
     
    This will show you how many copies are stored of each file on your pool, and where they're stored.
     
    The output looks something like this:
    Detail level: File Parts

    Listing types:

      + Directory
      - File
      -> File part
      * Inconsistent duplication
      ! Error

    Listing format:

      [{0}/{1} IM] {2}

      {0} - The number of file parts that were found for this file / directory.
      {1} - The expected duplication count for this file / directory.
      I   - This directory is inheriting its duplication count from its parent.
      M   - At least one sub-directory may have a different duplication count.
      {2} - The name and size of this file / directory.

    ...
    + [3x/2x] p:\Media
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media [Device 0]
      -> \Device\HarddiskVolume3\PoolPart.6a76681a-3600-4af1-b877-a31815b868c8\Media [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media [Device 2]
    - [2x/2x] p:\Media\commandN Episode 123.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 123.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 123.mov [Device 2]
    - [2x/2x] p:\Media\commandN Episode 124.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 124.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 124.mov [Device 2]
    ...

    The listing format and listing types are explained at the top, and then a record like the one above is generated for each folder and file on the pool.
     
    Of course like any command output, it could always be piped into a log file like so:
    dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log
     
    I'm sure that with a bit of scripting, people will be able to generate daily audit logs of their pool.
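    For example, a small script along these lines would do it (a sketch only; the log folder and pool drive letter are assumptions you'd adjust, and dpcmd needs to be on the PATH):

        import datetime
        import subprocess

        # Write a dated, full-detail audit log of the pool.
        log = rf"C:\PoolAudit\check-pool-fileparts-{datetime.date.today():%Y-%m-%d}.log"
        with open(log, "w") as f:
            subprocess.run(["dpcmd", "check-pool-fileparts", "P:\\", "4"],
                           stdout=f, stderr=subprocess.STDOUT, check=False)

    Run daily from Windows Task Scheduler, that would give you a rolling record of exactly where every file part lives.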
     
    Now this is essentially the first version of this command, so if you have an idea on how to improve it, please let us know.
     
    Also, check out set-duplication-recursive. It lets you set the duplication count on multiple folders at once using a file pattern rule (or a regular expression). It's pretty cool.
     
    That's all for now.
  8. Like
    Alex got a reaction from steffenmand in Amazon Cloud Drive - Why is it not supported?   
    (This is the same post quoted in item 1 above.)
  9. Like
    Alex reacted to steffenmand in Amazon Cloud Drive - Why is it not supported?   
    Thanks for being so open :-)
     
    Regarding their limits, I read their term "a client instance" as a user connection, not as a representation of your application!
    Should their limits restrict the number of threads, could it not be possible to allow a larger chunk size on Amazon? Like 5 MB chunks, or maybe even 10 MB.
    Sitting on a 1 Gbit line, I would love to be able to utilize the fastest speed possible. I'd rather have large chunks and more threads than smaller chunks and fewer threads.

    But hopefully they will get you on track for the final release so we can get some purchases going for StableBit :-)
  10. Like
    Alex reacted to Christopher (Drashna) in Amazon Cloud Drive - Error: HTTP Status BadRequest   
    I know, right?!
     
    But apparently, it's a completely different team that doesn't interact with the S3 team (at least, that's my guess, and what it appears like).
     
    In fact, I'm not sure why they didn't just use the S3 infrastructure. ....
  11. Like
    Alex reacted to Christopher (Drashna) in Amazon Cloud Drive, near future.?   
    Doesn't matter what they are. If they won't communicate with us and let us know what the limits *should* be, then we have to guess and HOPE they won't demote us to development status again.  Getting official word is better. But they'd still have to re-promote us to production status again..... 
     
     
    But to Alex's point: we're more than willing to work with them. But they need to respond to their emails.  Communication only works if it's going in both directions. 
     
    That .... or we EECB them (and that's .... well, not a good option). 
  12. Like
    Alex got a reaction from danfer in Full Drive Encryption   
    Some years ago, when I first got the idea for StableBit CloudDrive, as cool as the concept sounded technically, I started thinking about its use cases. What would be the reason to use StableBit CloudDrive over any of the various "drive" applications that you can get directly from a cloud storage provider? Of course there are some obvious reasons, for example, StableBit CloudDrive is a real drive, it has an intelligent adaptive cache that stores the most frequently accessed data locally, but the main reason which really drove the overall design of StableBit CloudDrive is full drive encryption.
     
    If you want to ensure the privacy and the security of your data in the cloud (and who doesn't, in today's security-conscious climate), one option that you have is to let the cloud storage provider encrypt your data for you. That's exactly what many cloud storage providers are already doing, and it all sounds great in theory, but there's one fundamental flaw in this model: the service or company that is storing your data is also the entity responsible for ensuring its privacy and security. In some cases, or perhaps in most, they also provide a secondary way of accessing your data (a back door / key recovery mechanism), which can be misused. If there's a bad actor, or if the company is compelled to do so, they can willingly or unwillingly expose your data, without your knowledge, to some 3rd party or even publicly.
     
    I came to the conclusion, as I'm sure many other people have in the past, that trusting the storage provider with the task of ensuring the security and privacy of your data is fundamentally flawed.
     
    Now there are other solutions to this problem, like manually encrypting your files with a standalone encryption tool before uploading them or using a 3rd party backup application that supports encryption, but StableBit CloudDrive aims to solve this problem in the best technical way possible. By integrating full drive encryption for the cloud directly into the Operating System, you have the best of both worlds. Your encrypted cloud drive acts like a real disk, making it compatible with most existing Microsoft Windows applications, and it's fully encrypted. Moreover, you are in complete control of the encryption, not the cloud provider.
     
    Because full drive encryption is essential to securing your data in the cloud, let's get a little technical and let me show you how StableBit CloudDrive handles it:
     

    This is just a rough diagram to give you a top level overview and is not technically complete.
     
    Here are the basic steps involved when an application reads or writes data onto an encrypted cloud drive.
    1. A user mode application (or a kernel mode driver) issues a read or a write request to some file system.
    2. The file system issues a read or a write request to the underlying disk driver, if the request can't be satisfied by the RAM cache.
    3. For an encrypted cloud drive, any data that is passed into our disk driver is encrypted before anything else is done with it, and any data that leaves our disk driver is decrypted as the very last step before it's returned to the file system.
    This means that any data on an encrypted cloud drive will always be encrypted, both in the local cache and in the cloud. In fact, none of the code involved in handling the I/O request even needs to know whether the data is encrypted or not. Decrypted data is never committed to your hard drive, unless the application itself saves the decrypted data to disk. For example, if you open a text file with Notepad on an encrypted cloud drive and then save that file to your desktop (which is not encrypted), then you've just committed a decrypted file to disk.
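    To make the ordering concrete, here's a toy sketch of that boundary rule (illustrative Python only; the real implementation is a kernel-mode driver using CNG, and all of the names here are made up):

        class EncryptedDrive:
            def __init__(self, cipher, cache):
                self.cipher = cipher  # provides encrypt()/decrypt(), e.g. AES-256 CBC
                self.cache = cache    # local chunk cache; only ever sees ciphertext

            def write(self, offset, plaintext):
                # Encrypt first: the cache, the uploader and the cloud provider
                # all handle ciphertext and never see plaintext.
                self.cache.store(offset, self.cipher.encrypt(offset, plaintext))

            def read(self, offset, length):
                # Decrypt as the very last step before data goes back
                # to the file system.
                return self.cipher.decrypt(offset, self.cache.fetch(offset, length))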
     
    The encryption algorithm that StableBit CloudDrive uses is AES-256 bit CBC, an industry standard encryption algorithm. The encryption API is Microsoft's CNG (https://msdn.microsoft.com/en-us/library/windows/desktop/aa376210(v=vs.85).aspx), which is FIPS 140-2 compliant. We definitely didn't want to reinvent the wheel when it comes to choosing how to handle the encryption. CNG is a core component of Microsoft Windows and has both a user mode component and a kernel mode component (upon which the user mode component is built).
     
    When you first create your cloud drive, you can choose between two methods of unlocking your encrypted drive: either you can generate a random key, or you can enter a passphrase of varying length and have the system derive a key from it.
     
    If you choose to generate a key, we do so using a secure random number generator (CNG from user mode) and combine that data (using XOR) with data obtained from your mouse movements (gathered from your use of the application). This adds an extra layer of security, just in case the secure random number generator is ever discovered to be compromised.
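    Conceptually, the combination step looks like this (a toy sketch; the mouse-derived entropy below is a stand-in, not how the application actually collects it):

        import hashlib
        import os

        def combine_entropy(a: bytes, b: bytes) -> bytes:
            # XOR of two independent sources is at least as unpredictable
            # as the stronger of the two.
            return bytes(x ^ y for x, y in zip(a, b))

        # Stand-in for entropy gathered from mouse movements in the UI.
        mouse_entropy = hashlib.sha256(b"(x, y) deltas sampled over time").digest()

        key = combine_entropy(os.urandom(32), mouse_entropy)  # 256-bit drive key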
     
    The one and only key that is used to decrypt your data can optionally be derived from a passphrase that you enter. This key derivation is performed using a salted PBKDF2-HMAC-SHA512 (200,000 iterations) using the unmanaged CNG API from user mode.
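    In Python terms, the equivalent standard-library call would look like this (a sketch of the same derivation; we actually use the unmanaged CNG API, and the salt handling below is illustrative):

        import hashlib
        import os

        salt = os.urandom(16)  # stored with the drive's metadata (illustrative)
        passphrase = "correct horse battery staple"

        # PBKDF2-HMAC-SHA512, 200,000 iterations; dklen=32 yields a 256-bit key.
        key = hashlib.pbkdf2_hmac("sha512", passphrase.encode("utf-8"),
                                  salt, 200_000, dklen=32)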
     
    You can also choose to save the key locally for added convenience. If you choose to do this, we use the standard Microsoft Windows credential store to save a locally encrypted version of your key. Obviously if anyone ever gets physical access to your system, or obtains remote access, your key can be compromised. You should save your key locally only if you're not concerned with absolute security and want the added convenience of not having to enter your key on every reboot.
     
    Whatever unlock method you choose, you should always save your key to some permanent medium. You should print it out on a piece of paper or save it to an external thumb drive and store it in a secure place. Since only you have the key, it is absolutely critical that you don't lose it. Without your key, you will lose your data.
     
    Hopefully this post gave everyone a little insight as to why I think encryption is important in StableBit CloudDrive and how it's actually implemented.
  13. Like
    Alex reacted to Philmatic in Drivepool good with torrent activities?   
    Yes, totally. I have a 150 Mbps downstream connection running 50 or so torrents at a time, and DrivePool has absolutely no issues with it.
     
    I do two things though, to be safe:
    1. I disable duplication on the folder I download to; it's just not needed.
    2. I increase the cache on uTorrent and tell it to use up to 1 GB of my system RAM for write caches. I have never seen "Disk Overloaded".
  14. Like
    Alex reacted to Christopher (Drashna) in What if a KB breaks something   
    Honestly? Not really.
    The Microsoft Connect website used to be pretty awesome for that... but it's since been deprecated....
     
    If you know a Windows Desktop or Windows Experience Microsoft MVP, that would be the best way to "pass along the word".
    There is also the EECB method.....
     
    Aside from those, posting at the technet forums:
    http://social.technet.microsoft.com/Forums/en-US/home
  15. Like
    Alex reacted to Christopher (Drashna) in How many drives are supported?   
    Well, I'm glad to hear that it's able to see all the drives without any issues (other than not reading SMART, from the sounds of your other thread).
     
    64 drives is no small amount!
  16. Like
    Alex reacted to vorpel in How many drives are supported?   
    I know it has been a while since I started this thread, but I've finally gotten my home system mostly set up. I have 64 drives (2 local SATA, 24 on an internal Areca ARC-1680ix card, and 38 from SAS expanders through a second ARC-1680ix card). StableBit Scanner sees all drives!
     
    Thanks.
  17. Like
    Alex reacted to Christopher (Drashna) in Drive emptying options - what it does?   
    You are very welcome!
     
    If you have any other questions, don't hesitate to ask.
  18. Like
    Alex reacted to gringott in SSD Optimizer Balancing Plugin   
    I have been using it. Seems to be just what I needed. Thanks for this plugin.
  19. Like
    Alex got a reaction from sean10780 in StableBit DrivePool's File Placement and "Product 3"   
    I've just put up a new blog post that introduces some of the functionality of the next StableBit product, dubbed "Product 3".
     
    Blog post: http://blog.covecube.com/2014/05/stablebit-drivepool-2-1-0-553-rc-file-placement-and-product-3/
     
    I'll quote the text that is the most relevant to Product 3:
     
     
    As we approach the first public BETA I'll be posting some more technical detail here about how Product 3 will work and what unique features it will offer.
  20. Like
    Alex reacted to Breeze in Just..."Wow!" ;)   
    I recently moved to WHS 2011 and knew about DrivePool, found out about Scanner, and bought the one machine license for both. I've been spending more time with DrivePool getting things organized.
     
    I put a new disk in WHS yesterday and discovered that Scanner can read the SMART data through one of the SATA chipsets on the motherboard that for years I'd given up on; none of the diagnostic tools I used in the past could see it. Then I discovered the shortcuts at the bottom of the Scanner window disclosing everything we'd like to know (or not!) about the drives...
     
    I just want to say that you've done an amazing job with Scanner; it's a far, far deeper program than the opening GUI initially suggests. Very comprehensive and I can definitely see a place for it on conventional workstations. Kudos!
  21. Like
    Alex reacted to Christopher (Drashna) in Defragging: Individual Drives or DrivePool?   
    Defragging is block level, IIRC. So it won't work on the pool. Ever.
     
    However, you can definitely defragment the disks in the pool, and there should be no issue with doing that.
  22. Like
    Alex got a reaction from Christopher (Drashna) in StableBit DrivePool's File Placement and "Product 3"   
    (This is the same post quoted in item 19 above.)
  23. Like
    Alex reacted to Christopher (Drashna) in Scanner does not check file system health   
    I'm glad to hear it!
  24. Like
    Alex reacted to Wonko in Scanner does not check file system health   
    Thank you very much for the idea. After installing the 2.5.0.3000 Beta build, the file system scan immediately occurred for the drive that was not being scanned in that manner. Problem solved!
  25. Like
    Alex got a reaction from Minaleque in SSD Optimizer Balancing Plugin   
    I've just finished coding a new balancing plugin for StableBit DrivePool; it's called the SSD Optimizer. This was actually a feature request, so here you go.
     
    I know that a lot of people use the Archive Optimizer plugin with SSD drives and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario.
     
    The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus your SSDs would serve as a kind of super fast write buffer for the pool.
     
    The new functionality of the SSD Optimizer is that now it's able to fill your Archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "Feeder" disks.
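    Conceptually, the placement decision works something like this (a simplified sketch, not the plugin's actual code):

        from collections import namedtuple

        Disk = namedtuple("Disk", ["name", "free"])

        def pick_target(file_size, feeders, archives):
            # New files land on an SSD "feeder" whenever one has room...
            for ssd in feeders:
                if ssd.free >= file_size:
                    return ssd
            # ...otherwise fall back to the Archive disks, filled one at a
            # time (ordered placement) rather than spread across all of them.
            for disk in archives:
                if disk.free >= file_size:
                    return disk
            raise OSError("pool is full")

        # e.g. pick_target(10**9, [Disk("SSD", 5 * 10**9)], [Disk("D1", 10**12)])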
     
    Check out the attached screenshot of what it looks like.
     
    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt
    Download: http://stablebit.com/DrivePool/Plugins
     
    Edit: Now up on stablebit.com, link updated.
