Christopher (Drashna)

Community Answers

  1. Christopher (Drashna)'s post in Moving CloudDrive to a new pc was marked as the answer   
    Detach the drives, deactivate the license, install on new system, activate the license or trial and attach the drives. 
     
    That's it. 
     
    And yes, when you go to add a drive, it will list existing drives, and even acknowledge the drive's state (e.g., if it's attached to another system). 
  2. Christopher (Drashna)'s post in Disk Performance Graphics was marked as the answer   
    Sorry, yes, it's just faded out, since it's already complete.
     
    Or to quote: 
  3. Christopher (Drashna)'s post in Encryption Question was marked as the answer   
    It entirely depends on the provider.  For each provider, we use a slightly different format, based on that provider. 
     
    Specifically, for certain providers (such as Google Drive, Amazon Cloud Drive, etc.) we do obfuscate the data, meaning we apply some simple encryption to it if you don't enable full drive encryption. 
    And by "simple encryption", I mean encryption that is very simple to "break", and each file has a NULL byte at the start. This is to prevent the provider from parsing the file and recategorizing it based on the file's header.   
     
    For Amazon Cloud Drive, for instance, this should completely block its ability to identify data from StableBit CloudDrive. 
     
    Otherwise, the data is raw disk data, and won't show up as files on the provider, but as chunks of raw disk data.
  4. Christopher (Drashna)'s post in Advanced Settings XML question - override only single values? was marked as the answer   
    There shouldn't be any issue with removing the other settings, and only keeping what you need.  The other settings will stay at the default values. 
  5. Christopher (Drashna)'s post in Data Reads slow with DrivePool & CloudDrive was marked as the answer   
    In theory, the Read Striping feature should handle this on duplicated data. 
     
    Additionally, you may want to make sure that directory structures are being pinned on the drive in question (Under "Disk Options" -> "Performance" -> "Pinning"), though these should be enabled by default. 
     
    Additionally, better handling for this is something that has been requested and that we've discussed, and it's definitely something that may help here. 
     
     
    Though, as for the versions, I believe that Alex has changed the handling for ACD, so if you wanted to try the 1.0.0.777 build, that may help out.
  6. Christopher (Drashna)'s post in Changing Cache Drive was marked as the answer   
    Yup, you need to detach the drive, and then reattach it.  While reattaching it, you can specify the cache drive.
  7. Christopher (Drashna)'s post in Disk performance not showing data was marked as the answer   
    This happens occasionally; it's not a DrivePool-related issue, but a Windows one.
     
    However, the fix is rather simple: 
    http://wiki.covecube.com/StableBit_DrivePool_Q2150495
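     
    For reference, if I recall correctly, the fix in that article comes down to re-enabling Windows' disk performance counters from an elevated command prompt, roughly like this (the article is the authoritative reference):
     
        :: Re-enable the Windows disk performance counters (run as Administrator)
        diskperf -Y
     
    A reboot may be needed afterwards for the graphs to start populating again.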
  8. Christopher (Drashna)'s post in Using DrivePool Drives without Software was marked as the answer   
    The files reside in a hidden "PoolPart.xxxxx" folder on each disk. 
     
    The structure underneath this folder mirrors what the pool looks like normally.  It may not be complete, depending on which files are on that particular drive. 
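     
    If you want to see those folders from a command prompt (they're hidden, so Explorer won't show them by default), a quick directory listing works; "D:" below is a placeholder for one of the pooled disks:
     
        :: List hidden items in the root of a pooled disk; the PoolPart.xxxxx folder should appear
        dir D:\ /a:h
     
    From there you can browse into the PoolPart folder like any other directory.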
  9. Christopher (Drashna)'s post in SSD as Cache was marked as the answer   
    Nope. But it works best if they are.
     
    Also, ideally you need a number of SSDs equal to the duplication level you're using (e.g., x2 duplication needs two SSDs). 
    Otherwise, data may be written to "Archive" drives instead.
  10. Christopher (Drashna)'s post in Balancing warning/error was marked as the answer   
    Well, I've flagged it either way, so Alex can clean this up (e.g., handle this case better).
  11. Christopher (Drashna)'s post in Pushover returning 400 Bad Request was marked as the answer   
    It's fixed in the latest build, actually.
     
    http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.2.3122_BETA.exe
     
    http://dl.covecube.com/ScannerWhs2/beta/download/StableBit.Scanner_2.5.2.3122_BETA.wssx
     
    Specifically, they changed their API in a way that was not backwards compatible, IIRC.
  12. Christopher (Drashna)'s post in Fixing Volume Bitmap is incorrect error? was marked as the answer   
    Do NOT run it with "/scan".
     
    While this option is pretty awesome, it skips a number of checks, including the Volume Bitmap.  You need to run this check offline, meaning without the "/scan" flag. 
     
    This is an issue that I ran into myself, and it tricked me (as well as Alex, initially).
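     
    In other words, run the traditional offline check against the volume, for example (with "X:" standing in for the affected volume):
     
        :: Full offline check and repair; chkdsk will dismount the volume, or offer to schedule the check at the next boot if the volume is in use
        chkdsk X: /f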
  13. Christopher (Drashna)'s post in Drives dropping out and will not stay connected was marked as the answer   
    Well, StableBit Scanner may help (right-click on the column header and select "by controller").
     
    Otherwise, you can use the Device Manager, select "View" and "Devices by connection".  You'll have to "find" the drives, but that should give you a good idea.
     
     
    Though I think this was resolved in a ticket, I'm not 100% sure.
  14. Christopher (Drashna)'s post in DrivePool in Dashboard - Windows Server 2012 R2 Essentials was marked as the answer   
    Click on Help (top, right corner) and select "Safe Mode Settings". 
     
    Make sure that the "DrivePool" "add-ins" are enabled, and then relaunch the dashboard. 
  15. Christopher (Drashna)'s post in How to pause uploads - version 1.0.0.631 BETA was marked as the answer   
    The ideal way is to detach the drive. But if you have a lot of data to upload, that isn't going to work, really. 
     
    We don't have an outright "pause" button to do this.  It's something that's been requested, and that we will probably add.
     
    In the meanwhile, open up the UI, and go to the ACD drive. Click on "Drive Options" -> Performance -> I/O Performance. 
     
    This will open up a window.  Set the upload threads to "0", and this will stop it from uploading.  You'll need to set this back once you're back up and running.  
     
     
    Also, data corruption won't occur regardless.  Data is not purged from the cache until we have verified that it's been uploaded. 
  16. Christopher (Drashna)'s post in Google Drive Rate Limits and Threading/Exp Backoff was marked as the answer   
    Alex has released a new build that should address (or at least help minimize) this issue. 
     
    Specifically, it looks like this is an app-wide limit on the number of API calls; it either needs to be "appealed" to Google to get increased, or a rewrite is needed to minimize the API calls. 
     
     
    Please download 1.0.0.749 (updated from 1.0.0.725) if you're experiencing this issue. 
    http://dl.covecube.com/CloudDriveWindows/beta/download/
  17. Christopher (Drashna)'s post in Reloading server was marked as the answer   
    I'm sorry to hear that.  And hopefully a reload helps. 
     
     
    As for the Pool and software, if the system is still "stable enough", you can deactivate the license in the software. 
     
    To do so: 
    StableBit DrivePool 1.X: Open the Dashboard (either remotely or locally), click on the "Settings" button in the dashboard (top, right section), open the "DrivePool" section in the Settings window, and click on the "manage license" link at the bottom of the page.
    StableBit DrivePool 2.X / StableBit CloudDrive: Open the UI on the system that the software is installed on, click on the "Gear" icon in the top, right corner, and select the "manage license" option.
    StableBit Scanner: Open the UI on the system that Scanner is installed on, click on "Settings" and select "Scanner Settings", open the "Licensing" tab, and click on the "Manage license" link.
     
    This will open a window that shows you the Activation ID, as well as a big button to "Deactivate" the license. Once you've done this, you can activate the license on a new system.
     
    If you're not able to do this, let us know at https://stablebit.com/Contact and we can deactivate the license server side.
     
    As for reinstallation, you can connect the drives either before or after installing StableBit DrivePool.  It doesn't really matter, actually.  As soon as the software sees the pooled disks, it will automatically ("automagically", as stated by more than a few people) rebuild the pool with the available disks. It will remeasure the pool and reduplicate files as needed.
     
    This happens at pretty much any point, so there is no specific "required" order (in fact, we have a number of people that use enclosures to periodically disconnect and reconnect "removable" pools for backup, and that's handled the same way).
  18. Christopher (Drashna)'s post in Adding REFS drives with data to a pool results in no data showing in pooled volume was marked as the answer   
    Could you upgrade to the latest version? 
    http://dl.covecube.com/DrivePoolWindows/beta/download/StableBit.DrivePool_2.2.0.711_x64_BETA.exe
    This actually looks to be NFS server related, and ... there is a fix related to that in the newer version already. 
     
     
    And if it continues to happen, grab that crash dump as well (also, you can zip it up, and that should reduce its size SIGNIFICANTLY). 
  19. Christopher (Drashna)'s post in Windows 10 Anniversary Update and Long File Paths was marked as the answer   
    Yes, StableBit DrivePool supports this. 
     
    Specifically, there are two times when we deal with files: 
    In the kernel (via the driver itself), which has no practical limitation
    In the service, which uses UNC pathing, which also has no practical limitation
     
    Just keep in mind that this "fix" may cause unexpected behavior, especially in 32-bit programs, IIRC.
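     
    As a purely illustrative example, I believe the "UNC pathing" here refers to extended-length paths with the \\?\ prefix, which bypass the legacy 260-character MAX_PATH limit; a deep path on the pool can then be addressed like this (the path below is made up):
     
        \\?\D:\Some\Very\Deep\Folder\Structure\With\A\Really\Long\Path\file.txt
     
    That form is what lets the service work with paths longer than the old limit.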
  20. Christopher (Drashna)'s post in DrivePool + Deduplication issue was marked as the answer   
    The issue with measuring should be fixed now.
     
    However, the space reported is a problem.  This is because of how Deduplication works.  We had partial size reported, but .... 
     
    There isn't a good way to handle deduplication here. Period.  Most of the data is stored outside of the pool folder structure, specifically in the "System Volume Information" folder.   So it's not actually pooled. 
     
    Additionally, because of how NTFS reports the data .... it's far from ideal. 
     
     
     
     
    So, this is a known issue, but there really isn't a good way to properly handle this.  It's partially a cosmetic issue, but it's partially a technical issue. And one we aren't really sure how to handle. 
     
    For now, just keep in mind that deduplication will do this, but it should work fine otherwise. 
  21. Christopher (Drashna)'s post in Question regarding Cleaning up and Cache was marked as the answer   
    By "G Drive", do you mean Google Drive? 
     
    Also, Alex has a great writeup on how the cache works: http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/
  22. Christopher (Drashna)'s post in Checking Cloud Data was marked as the answer   
    Ah, yeah, "invalid file links" definitely indicate a problem with the file system. And that *could* cause it to dismount. 
     
    And I'd recommend doing this sooner rather than later. 
     
    If you're using Windows 8/10, you can use "chkdsk x: /scan" (where "x:" is the drive in question) to scan the disk online. It will fix what it can, but that may not be everything. 
     
     
    As for the beta versions, yes. I'd recommend the 1.0.0.722 build.
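     
    Put together as commands (with "x:" again standing in for the drive in question), roughly:
     
        :: Online scan; fixes what it can without taking the volume offline
        chkdsk x: /scan
        :: If problems remain afterwards, a traditional offline repair pass
        chkdsk x: /f
     
    The offline pass needs exclusive access to the volume, so expect the drive to be unavailable while it runs.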
  23. Christopher (Drashna)'s post in Moving data from storage spaces on WS12R2 essentials to drivepool was marked as the answer   
    I didn't see the post on the HSS forums, which is odd .... it must be one of the sections I'm not manually subscribed to.
     
     
     
    As long as it's a Windows 8+ machine, it *should* be able to read the drive, but I'm not too familiar with that, as I haven't tested out that scenario in a long while. 
     
    But yeah....
     
     
     
    Unfortunately, yeah, it's not a simple thing to do.  Using the Server Manager to do this is the ONLY way I'd recommend, as you get more options, but even then....
     
    As for the low space, that's because of the 1TB drive, and how it splits up the blocks of data.  
     
    Though, it may also be from VSS storage, which can be pruned. Run "control sysdm.cpl", open the "System Protection" tab, and delete the snapshots from the Storage Spaces pool. 
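     
    If you'd rather do that from an elevated command prompt than the System Protection dialog, vssadmin can list and prune the snapshots; "D:" below is a placeholder for the Storage Spaces volume:
     
        :: See which shadow copies exist and how much space they're using
        vssadmin list shadows
        vssadmin list shadowstorage
        :: Delete all shadow copies for that volume
        vssadmin delete shadows /for=D: /all
     
    Either route frees the same space; use whichever you're more comfortable with.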
     
     
     
     
    If you're getting a new drive here, then my recommendation would be to install StableBit Drivepool, add both the new disk and the Storage Spaces pool to a StableBit DrivePool pool.  
     
    Then use the "seeding" guide to move the data in the Storage Spaces "disk" into our pool. 
     
    Then ... well, remove the Storage Spaces "disk" from our pool, and this should move the contents off of the Storage Spaces "Disk" and onto the 4TB drive. 
     
    Since that sounds like it should be plenty of space, once it's completed, there should be no "real data" on the Storage Spaces disk. From there, you can break it up completely, and then add the disks to the StableBit DrivePool pool. 
     
     
    Details on how to "seed" the pool:
    http://wiki.covecube.com/StableBit_DrivePool_Q4142489
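     
    Purely as an illustration of the kind of move the seeding guide describes (follow the guide itself for the surrounding steps; the paths and PoolPart name below are placeholders): the existing data gets moved into the hidden PoolPart folder on the same disk, for example:
     
        :: Move the existing folders into the hidden PoolPart folder on the same disk
        robocopy "E:\ServerFolders" "E:\PoolPart.xxxxx\ServerFolders" /E /MOVE /COPYALL /DCOPY:T
     
    Because everything stays on the same disk, no data needs to move between drives for it to show up in the pool after the pool remeasures.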
     
    Also, you will want to change the drive letter of the pool to whatever Storage Spaces was using, and then reboot. Doing this *may* "fix" everything so that the dashboard is happy and all of the shares are visible on the network. 
     
    And changing the pool's drive letter works the same way as changing any other disk's drive letter (see the quick sketch below):
    http://wiki.covecube.com/StableBit_DrivePool_Q6811286
    (caveat, ignore the reported size here, it's an artifact of the driver we use)
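     
    And for reference, a rough sketch of swapping the letters with diskpart from an elevated command prompt, if you prefer that over Disk Management (the volume numbers and letter below are placeholders):
     
        diskpart
        :: Find the volume numbers for the old Storage Spaces disk and for the pool
        list volume
        :: Free up the letter that Storage Spaces was using
        select volume 2
        remove letter=E
        :: Assign that letter to the pool
        select volume 3
        assign letter=E
        exit
     
    Then reboot, as mentioned above, so the Dashboard and the shares pick up the change.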
  24. Christopher (Drashna)'s post in Cloud Drive not uploading? was marked as the answer   
    Yes, I still recommend upgrading to this version. 
     
    And no, you won't lose your data. We explicitly maintain backwards compatibility every time we implement something new or change how it works.  
     
    The only time that isn't true is with Amazon Cloud Drive, because it's become a serious nightmare and we need to sit down, hammer out a new provider for it, and ... spam Amazon until they actually respond to our inquiries in a meaningful manner.
  25. Christopher (Drashna)'s post in a couple iSCSi questions was marked as the answer   
    For the VMs, then I think that should be fine.
     
     
     
     
    This may be because of the NICs being used.  A lot of modern network hardware has a number of defaults enabled that cause some pretty poor performance. 
     
    Features that include "checksum" and "offload" in the name can do this (often the "offloaded" work ends up being handled in the drivers, on the CPU, so... yeah).  Also, improperly configured jumbo frames can cause performance hits. 
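     
    If you want to sanity-check some of that, a couple of read-only commands can help; these only display the current state (the per-adapter offload options themselves live in the NIC driver's advanced properties):
     
        :: Show the global TCP offload/tuning settings
        netsh int tcp show global
        :: Show the MTU per interface, to confirm whether jumbo frames are actually in effect
        netsh interface ipv4 show subinterfaces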
     
    However, if the poor performance is locally, then that's a different issue.
     
    As for DrivePool, I'm glad to hear it.  And yeah, it's designed to be simple. And very easily recoverable.
     
     
     
     
    Thanks.  And I started with a small system. It's grown, and grown.  
     
    Though, I can definitely understand the direction.  I have solar, so electricity is "free" and the savings have already paid for the installation costs. 
     
    But yeah, it doesn't take a lot to run Active Directory.  
     
     
    As for using 20 bays .... I'm at 23 bays used right now.   As for deleting ... I joke about "dee leet?" with friends. I like to keep pretty much everything, in case I need it.  It's a trait that I "most likely" picked up from my dad (who was definitely a hoarder). I only hoard data (and working computers/servers), but still.  Though, it's proven to be very useful in a number of cases.
     
    But yeah, anything older than Server 2008 is not supported and probably not needed. Besides, Microsoft will still keep them hosted forever, I suspect. 
     
    And a very nice setup. Though, if you're keeping the drives stored at home, I'd recommend keeping them in something fireproof/fire-resistant. 
     
     
    Yes, I believe so. 
     
    If you're using two pools, with no duplication, and you wanted to use the SSD Optimizer for both pools, then you need to use two SSDs, or an SSD partitioned into two volumes. 
     
    But if you're using a pool with x2 duplication, then you definitely need two physical SSDs. 