
Christopher (Drashna)

Administrators
  • Posts: 11741
  • Joined
  • Last visited
  • Days Won: 390

Reputation Activity

  1. Like
    Christopher (Drashna) reacted to Kraevin in +1 for GoogleDrive for Work support   
    That's awesome to hear, keep up the great work! Can't wait to try it out!

    You know you can get Google Drive for Work unlimited, right? I pay 10 dollars a month for it, and it's way better than Amazon in my opinion.
  2. Like
    Christopher (Drashna) reacted to WBadminUser in Amazon Cloud Drive - Why is it not supported?   
    Best news I have heard in a while.
  3. Like
    Christopher (Drashna) got a reaction from Alex in Amazon Cloud Drive - Why is it not supported?   
    They did respond, but it's ... well, more "developer" stuff. However, the outlook is good (basically, it looks like the calls per second are within acceptable values).
     
    It looks like Alex hasn't had a chance to respond (they replied on Wednesday... so really bad timing, basically). But hopefully, the ball is actually rolling!
  4. Like
    Christopher (Drashna) got a reaction from steffenmand in Amazon Cloud Drive - Why is it not supported?   
    They did respond, but it's ... well, more "developer" stuff. However, the outlook is good (basically, it looks like the calls per second are within acceptable values).
     
    It looks like Alex hasn't had a chance to respond (they replied on Wednesday... so really bad timing, basically). But hopefully, the ball is actually rolling!
  5. Like
    Christopher (Drashna) got a reaction from Alex in Amazon Glacier for Archival purposes?   
    Nope. 
     
    The problem is that there is a 4+ hour wait period to be able to access the data that we've uploaded. That basically makes it unusable for us, both because file systems like to recheck data, and because it would make upload verification impossible.
     
    We have already looked into it, and discovered this. Sorry.
  6. Like
    Christopher (Drashna) reacted to dslabbekoorn in New Setup, Windows Server 2012 R2   
    Thanks Christopher, that's just what I was hoping to hear. I'll definitely take your advice on the duplication.
     
    Happy Thanksgiving.
  7. Like
    Christopher (Drashna) got a reaction from Alex in RSS feed for CloudDrive forum   
    Fixed.
     
     
    You can also "follow" the forum (button at top) to get email notifications of topics.
  8. Like
    Christopher (Drashna) got a reaction from Alex in NetDrive now having same issue as CloudDrive?   
    It's not quite the same as what happened to us.

    However, it does confirm that Amazon is absolutely having infrastructure issues with Amazon Cloud Drive.
    Basically, too much traffic is being generated, and their network and database infrastructure is struggling to deal with all the demand.
  9. Like
    Christopher (Drashna) got a reaction from Alex in Amazon Cloud Drive - Why is it not supported?   
    The problem with a faster internet connection is that it isn't that simple. Just because you may have gigabit internet doesn't mean that the server you're connecting to can support that speed.
     
    But to summarize here, it's a complicated issue, and we do plan on re-evaluating in the "near future".
     
    But this won't affect the performance of the drive. Specifically, each chunk is accessed by a single thread. That means the more threads you use, the more of these chunks are accessed in parallel. So, with enough threads, you could ABSOLUTELY saturate your bandwidth.
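
    To make that concrete, here's a minimal sketch (illustrative Python, not CloudDrive's actual code; fetch_chunk is a hypothetical stand-in for a provider request): each worker thread moves one chunk at a time, so total throughput grows with the thread count.

        # Minimal sketch of per-chunk parallelism (hypothetical, not CloudDrive's code).
        from concurrent.futures import ThreadPoolExecutor

        CHUNK_SIZE = 1 * 1024 * 1024  # 1 MB chunks

        def fetch_chunk(chunk_id: int) -> bytes:
            # Hypothetical provider call; a real one would issue an HTTP request here.
            return b"\0" * CHUNK_SIZE

        def fetch_range(first_chunk: int, count: int, threads: int = 45) -> list[bytes]:
            # More threads -> more chunks in flight -> more total bandwidth used.
            with ThreadPoolExecutor(max_workers=threads) as pool:
                return list(pool.map(fetch_chunk, range(first_chunk, first_chunk + count)))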
     
    Actually, this SPECIFICALLY was the issue with Amazon Cloud Drive... and what Alex means by "scales well".
     
     
    To clarify, we had at least one user who was uploading at 45 MB/s (360 Mbps). That's with ~45 x 1 MB chunks in flight at the same time.
     
    So you may be able to see why the 1 MB size really isn't a bad limit here. It ensures responsiveness of the file system, even under heavy load.
    See above.  Larger chunk sizes aren't strictly necessary.  And we do plan on re-evaluating this issue in the future.  
    But currently, there isn't a way to get the checksum without downloading the entire chunk. Anything over 1 MB will be using partial chunk retrieval. And changing this would require rewriting the providers.
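
    As a rough sketch of why (hypothetical Python, with MD5 standing in for whatever checksum a provider exposes): a partial read only needs the requested byte range, but a whole-chunk checksum covers every byte, so verifying it means having the entire chunk.

        # Sketch: whole-chunk checksums force whole-chunk downloads (illustrative only).
        import hashlib

        def read_partial(chunk: bytes, offset: int, length: int) -> bytes:
            # Partial chunk retrieval: only the requested byte range is transferred.
            return chunk[offset:offset + length]

        def verify_chunk(chunk: bytes, expected_md5: str) -> bool:
            # The hash covers every byte of the chunk, so it can't be checked
            # without downloading the chunk in full.
            return hashlib.md5(chunk).hexdigest() == expected_md5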
    And there is still the upload verification option.
  10. Like
    Christopher (Drashna) reacted to Reptile in Cloud Providers   
    It's done.
     
    Google Drive costs $40/month for unlimited business storage. Rock solid provider with reasonable upload limits.
  11. Like
    Christopher (Drashna) reacted to dslabbekoorn in New Setup, Windows Server 2012 R2   
    I finally bit the bullet and got Windows Server 2012 R2. I plan on using my drivepool for the client backup folder and a separate 2TB USB drive for server backups. Are there still any problems with having the client backup on the pool? I remember there being issues that prevented it from working. The new system is set up and running and I just got my pool re-attached to the server; PLEX seems to be working again as well. Now setting up the client backups is the only thing remaining. Server 2012 R2 does seem to be "snappier" than WSE, and runs much more quickly on my server hardware. WSE just started to have too many connector issues with the clients after they were updated to Windows 10; 2 upgrades failed with the black screen of death and required full drive restores from server backups to save them. Kind of why I want to get backup working again ASAP.
     
    My question to the audience is: will my above scenario work? Client backups to the pool and server backups to an "offline" USB drive.
     
    By the way, StableBit DrivePool has been the most bulletproof software I have ever used; 4 different server OSes and the pool marches on with no loss of data. Now if only Microsoft could do so well.
     
    Dave
  12. Like
    Christopher (Drashna) reacted to WBadminUser in Amazon Cloud Drive - Why is it not supported?   
    I bought a copy just to support the efforts here.

    I noticed with the other providers you upped the Storage Chunk Size; however, it was not changed for ACD. Was this intentional while you are waiting on ACD to answer their darn emails?
  13. Like
    Christopher (Drashna) reacted to ottoman in Questions regarding CloudDrive, Encryption and DrivePool   
    Thank you for your very detailed answers. Regarding #3, you are right. The cache file reports a size which is not equal to the "size on disk". I came to the conclusion that CloudDrive is not really the tool I need. But I may try CloudDrive again when Amazon Cloud Drive is available in my country and the provider is stable. Right now I am looking into BitLocker + DrivePool, and it seems promising.
  14. Like
    Christopher (Drashna) got a reaction from propergol in Change allocation unit size of all drives in DrivePool: Best way?   
    It's better not to use the larger allocation unit size for SSDs.

    Specifically, because SSDs don't benefit from them. The main reason to use the larger size is performance and preventing fragmentation. Since neither affects an SSD, there is no benefit.
    Additionally, since SSDs are smaller in capacity, you're more likely to lose a good chunk of space by doing so.
     
     
    However, there is absolutely no harm in using the different sizes on the SSDs vs the spinning hard drives. 
     
     
    So 64k allocation unit size for "spinners" and default for SSDs is good.
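
    As a back-of-the-envelope illustration (made-up file sizes, not measured numbers): every file occupies a whole number of clusters, so on average you waste about half a cluster per file, which means 64K clusters waste roughly 16x more than the 4K default.

        # Rough slack-space estimate for different allocation unit sizes.
        def slack_bytes(file_sizes: list[int], cluster: int) -> int:
            # Each file is rounded up to a whole number of clusters.
            return sum(-size % cluster for size in file_sizes)

        files = [3_000, 10_000, 150_000] * 10_000  # 30,000 smallish files (made up)
        print(slack_bytes(files, 4 * 1024))   # default 4K clusters
        print(slack_bytes(files, 64 * 1024))  # 64K clusters: far more wasted space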
  15. Like
    Christopher (Drashna) reacted to Alex in +1 for GoogleDrive for Work support   
    Yeah, it's basically just a flip of a switch if this works. All providers use a common infrastructure for chunking.
  16. Like
    Christopher (Drashna) reacted to steffenmand in +1 for GoogleDrive for Work support   
    Exactly what I wanted :-D Let my speed be unleashed!

    Hopefully this will make it to Amazon as well when you get the answers from them :-)
  17. Like
    Christopher (Drashna) reacted to Alex in Local cache data   
    I called this "Full Round Trip Encryption", for lack of a better term, and it was very much a core part of the original design.
     
    Specifically, what this means is that any byte that gets written to an encrypted cloud drive is never stored in its unencrypted form on any permanent storage medium (either locally or in the cloud).
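
    A minimal sketch of what that write path implies (illustrative Python using the cryptography package; this is not CloudDrive's actual implementation): data is encrypted in memory first, so only ciphertext is ever handed to the local cache file or the provider.

        # Sketch of "encrypt before anything touches permanent storage" (illustrative).
        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)  # only the user holds this

        def write_block(cache_path: str, plaintext: bytes) -> None:
            nonce = os.urandom(12)
            ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
            # Only nonce + ciphertext are written; plaintext never hits disk.
            with open(cache_path, "ab") as cache:
                cache.write(nonce + ciphertext)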
  18. Like
    Christopher (Drashna) reacted to Hakuren in SSD Pool   
    Thanks. Implemented. Will see tomorrow how it performs. 1 drive finished, which is a success in itself with default settings. I really don't want to move past 4 HDDs now. I'm probably terminally addicted to flash and just can't stand spinning media anymore. LOL. Quite annoyed that HAMR technology, which has been announced like 50 times already, is still in the doldrums. With 2 drives of like 200TB each I wouldn't even need a pool.
     
    Word (or more) of clarification. I don't use any network storage. DP is not running on a server but on my primary workstation (CaseLabs TH10A; it holds a loooot of stuff). Never used NAS, never will. Everything external is DAS cold storage, which, as the name suggests, is used only when I acquire a sufficient amount of new data to move something more substantial. BTW: DP works perfectly well with my 8-bay Fantec USB3 QB-XU8S3-6G (clone of a clone of MediaSonic Probox and many others) enclosures. Was thinking about Thunderbolt, but with the whole circus around certification and often truly ridiculous pricing, I decided against it.
     
    As for SSDs, I'm really arriving at the conclusion that it's better to bunch like 16x 250 GB drives for RAID60 than toy with a pool (these ToughArmor 8x 2.5" enclosures are really cute, tiny, and hold bags of drives - already obtained one and will get quite a few of those + another expander if cash flow permits). With regard to TRIM: well, it's not that catastrophic. I think its relevance is vastly overhyped (except for critical applications). I did experiment with SSDs connected in an old X58 system, first via the motherboard, then a 6805, and then a 71605. Normal volumes and RAIDed. The same drives were moved and I never took care of the TRIM thingy. After ~4 years of 12-15/h use I dismantled the arrays and took all drives in for a "service", and discovered only 1 out of 4 showed a 3% drop in life expectancy (that 1 was used as a system boot drive for about 2 years), with the rest in tip-top shape. Because of SSDs' superior transfers, it's not the end of the world if there's a bit of delay with writes. I have 2 SSD arrays and all of them perform quite well. Read is blazing fast (talking R10), with writes taking like a 30% penalty. My fingers are itching for more SSDs in my case, but I'll try to resist and see what X-Point vel Optane (what a stupid name) will bring to the table. The 750s are worth every [put your currency in here]; can't wait for 2016.
     
    I've mentioned the 750 above, but my only concern was operating temperature. With smartools I'm now calm. I was worried a bit about it getting too hot, but after a few runs of monitoring it never passed 35C, so no problems there.
  19. Like
    Christopher (Drashna) got a reaction from Alex in Amazon Cloud Drive, near future.?   
    Specifically, it scales better than Amazon is able to accommodate (because they're not throttling properly on the server side).
  20. Like
    Christopher (Drashna) reacted to Alex in Mount Existing Files   
    The reason why I chose this architecture for StableBit CloudDrive is that I think the #1 feature of StableBit CloudDrive is the full drive encryption. You set up your own encryption key; no one else knows it. We don't know it. Your cloud providers don't know it. This gives you absolute control over your data's security.
     
    Note that even if your cloud provider says that they're encrypting your drive, the point of control over that encryption is not under your supervision. So essentially you really don't know what's going on with that encryption. Perhaps some rogue employee may have access to those keys, etc...
     
    Given that type of approach, it made sense to store your data in opaque chunks.
     
    Letting you access your existing files in the cloud through a drive letter would require a totally different architecture for StableBit CloudDrive (and frankly, other applications have already done this). I think we are relatively unique in our approach (at least I don't know of any other products in our price range that do what we do).
     
    I guess the way that I see StableBit CloudDrive is really TrueCrypt for the cloud (although, admittedly, we're not open source). The idea is that you create an opaque container, store it in the cloud, and have confidence that it's secure. Or you can do the same locally, but the architecture is optimized for the cloud.
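
    To make "opaque chunks" concrete, here's a hypothetical sketch (names and layout made up for illustration): the provider only ever sees uniformly named, fixed-size encrypted blobs, so nothing about file names or directory structure leaks through.

        # Hypothetical layout of an opaque chunk store (illustration only).
        CHUNK_SIZE = 1 * 1024 * 1024  # fixed-size blobs

        def chunk_name(index: int) -> str:
            # Uniform names reveal nothing: "chunk-0000000000", "chunk-0000000001", ...
            return f"chunk-{index:010d}"

        def split_image(encrypted_image: bytes):
            # Slices of already-encrypted data; the blobs alone say nothing about
            # the files, sizes, or structure inside the drive.
            for i in range(0, len(encrypted_image), CHUNK_SIZE):
                yield chunk_name(i // CHUNK_SIZE), encrypted_image[i:i + CHUNK_SIZE]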
  21. Like
    Christopher (Drashna) reacted to Alex in Dropbox permanent re-authorize   
    In other news, this was the last major bug that I have on my radar for the StableBit CloudDrive BETA (i.e. in our bug tracking system).
     
    So I'm going to be performing any final testing and prepping for a 1.0 final release.
     
    If anyone does find anything major beyond this point, please let us know (https://stablebit.com/Contact).
     
    Thanks,
  22. Like
    Christopher (Drashna) reacted to Alex in Amazon Cloud Drive, near future.?   
    Thank you for the sentiment.
     
    What I was thinking at the time, and I mentioned this to Christopher... At the heart of the problem is that our product scales... it scales really well.
  23. Like
    Christopher (Drashna) reacted to Alex in check-pool-fileparts   
    If you're not familiar with dpcmd.exe, it's the command line interface to StableBit DrivePool's low level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line.
     
    If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift from the Start menu), and type in dpcmd to get some usage information.
     
    Previously, I didn't recommend that people mess with this command because it wasn't really meant for public consumption. But the latest internal build of StableBit DrivePool, 2.2.0.659, includes a completely rewritten dpcmd.exe which now has some more useful functions for more advanced users of StableBit DrivePool, and I'd like to talk about some of these here.
     
    Let's start with the new check-pool-fileparts command.
     
    This command can be used to:
      • Check the duplication consistency of every file on the pool and show you any inconsistencies.
      • Report any inconsistencies found to StableBit DrivePool for corrective actions.
      • Generate detailed audit logs, including the exact locations where each file part of each file on the pool is stored.

    Now let's see how this all works. The new dpcmd.exe includes detailed usage notes and examples for some of the more complicated commands like this one.
     
    To get help on this command type: dpcmd check-pool-fileparts
     
    Here's what you will get:
     
    dpcmd - StableBit DrivePool command line interface

    Version 2.2.0.659

    The command 'check-pool-fileparts' requires at least 1 parameters.

    Usage:

      dpcmd check-pool-fileparts [parameter1 [parameter2 ...]]

    Command:

      check-pool-fileparts - Checks the file parts stored on the pool for consistency.

    Parameters:

      poolPath - A path to a directory or a file on the pool.
      detailLevel - Detail level to output (0 to 4). (optional)
      isRecursive - Is this a recursive listing? (TRUE / false) (optional)

    Detail levels:

      0 - Summary
      1 - Also show directory duplication status
      2 - Also show inconsistent file duplication details, if any (default)
      3 - Also show all file duplication details
      4 - Also show all file part details

    Examples:

      - Perform a duplication check over the entire pool, show any inconsistencies, and inform StableBit DrivePool
        >dpcmd check-pool-fileparts P:\

      - Perform a full duplication check and output all file details to a log file
        >dpcmd check-pool-fileparts P:\ 3 > Check-Pool-FileParts.log

      - Perform a full duplication check and just show a summary
        >dpcmd check-pool-fileparts P:\ 0

      - Perform a check on a specific directory and its sub-directories
        >dpcmd check-pool-fileparts P:\MyFolder

      - Perform a check on a specific directory and NOT its sub-directories
        >dpcmd check-pool-fileparts "P:\MyFolder\Specific Folder To Check" 2 false

      - Perform a check on one specific file
        >dpcmd check-pool-fileparts "P:\MyFolder\File To Check.exe"

    The above help text includes some concrete examples of how to use this command for various scenarios. To perform a basic check of an entire pool and get a summary back, you would simply type:
    dpcmd check-pool-fileparts P:\
     
    This will scan your entire pool and make sure that the correct number of file parts exist for each file. At the end of the scan you will get a summary:
    Scanning...

    ! Error: Can't get duplication information for '\\?\p:\System Volume Information\storageconfiguration.xml'. Access is denied

    Summary:

      Directories: 3,758
      Files: 47,507   3.71 TB (4,077,933,565,417 B)
      File parts: 48,240   3.83 TB (4,214,331,221,046 B)

      * Inconsistent directories: 0
      * Inconsistent files: 0
      * Missing file parts: 0   0 B (0 B)

      ! Error reading directories: 0
      ! Error reading files: 1

    Any inconsistent files will be reported here, and any scan errors will be as well. For example, in this case I can't scan the System Volume Information folder because, as an Administrator, I don't have the proper access to do that (LOCAL SYSTEM does).
     
    Another great use for this command is actually something that has been requested often, and that is the ability to generate audit logs. People want to be absolutely sure that each file on their pool is properly duplicated, and they want to know exactly where it's stored. This is where the maximum detail level of this command comes in handy:
    dpcmd check-pool-fileparts P:\ 4
     
    This will show you how many copies are stored of each file on your pool, and where they're stored.
     
    The output looks something like this:
    Detail level: File Parts

    Listing types:

      + Directory
      - File
      -> File part
      * Inconsistent duplication
      ! Error

    Listing format:

      [{0}/{1} IM] {2}

      {0} - The number of file parts that were found for this file / directory.
      {1} - The expected duplication count for this file / directory.
      I   - This directory is inheriting its duplication count from its parent.
      M   - At least one sub-directory may have a different duplication count.
      {2} - The name and size of this file / directory.

    ...

    + [3x/2x] p:\Media
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media [Device 0]
      -> \Device\HarddiskVolume3\PoolPart.6a76681a-3600-4af1-b877-a31815b868c8\Media [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media [Device 2]

    - [2x/2x] p:\Media\commandN Episode 123.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 123.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 123.mov [Device 2]

    - [2x/2x] p:\Media\commandN Episode 124.mov (80.3 MB - 84,178,119 B)
      -> \Device\HarddiskVolume2\PoolPart.5823dcd3-485d-47bf-8cfa-4bc09ffca40e\Media\commandN Episode 124.mov [Device 0]
      -> \Device\HarddiskVolume8\PoolPart.d1033a47-69ef-453a-9fb4-337ec00b1451\Media\commandN Episode 124.mov [Device 2]

    ...

    The listing format and listing types are explained at the top, and then for each folder and file on the pool, a record like the above is generated.
     
    Of course, like any command output, it could always be piped into a log file like so:
    dpcmd check-pool-fileparts P:\ 4 > check-pool-fileparts.log
     
    I'm sure that with a bit of scripting, people will be able to generate daily audit logs of their pool.
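
    For example, something like this could run nightly from Task Scheduler (a sketch assuming Python is available; the dpcmd invocation is the one shown above) to keep date-stamped audit logs:

        # Sketch: nightly audit log via dpcmd (run from Windows Task Scheduler).
        import datetime
        import subprocess

        log_name = f"check-pool-fileparts-{datetime.date.today():%Y-%m-%d}.log"
        with open(log_name, "w") as log:
            # Detail level 3: all file duplication details, as described above.
            subprocess.run(["dpcmd", "check-pool-fileparts", "P:\\", "3"],
                           stdout=log, stderr=subprocess.STDOUT, check=False)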
     
    Now this is essentially the first version of this command, so if you have an idea on how to improve it, please let us know.
     
    Also, check out set-duplication-recursive. It lets you set the duplication count on multiple folders at once using a file pattern rule (or a regular expression). It's pretty cool.
     
    That's all for now.
  24. Like
    Christopher (Drashna) reacted to cryodream in Physically moving or switching places of disks that are already in the pool   
    Christopher, thank you for answers.
     
    Very happy to hear, that I can safely re-shuffle the drives.
     
    Yep, I have my drives mounted to the folders on one of the SSDs (D:\Drives\...).
     
    And thanks a lot for reminding me to use the case and bay names in the disk settings in the Scanner. That'll make this way easier. Awesome.
     
    Thanks again.
  25. Like
    Christopher (Drashna) got a reaction from cryodream in worrying issue with pictures etc   
    Well, I got some news that will make some of you guys happy.
     
    "StableBit FileSafe".
    It's in the planning stage right now. What does that mean? No code for it yet. Especially as we will have to consider exactly how to integrate it into our other products. And what features it will have. Etc.
    But it's definitely something we want to release.