
Reputation Activity

  1. Like
    Minaleque reacted to Pancakes in Weird SMB/Oplock problem   
    For a while now I have had issues where files will not duplicate properly after downloading and moving them. It doesn't happen all the time, and I have been busy, so I never bothered looking into it.
    Today I noticed that a TV show would not duplicate.

    So I checked to see if anything had it open, and nothing did. HOWEVER, my "deluge" account, which is used by Deluge running on a CentOS 7 VM, still had the original download file open, even though that file no longer existed. I thought this was weird, because how can you have a file open which doesn't exist?

    So I closed the file, and then the file in /Data/Media/TV Shows/wherever duplicated fine.
    Is this a DrivePool issue, or a Windows SMB issue, or a CentOS/Deluge issue?
    Related to this, sometimes I will download a file, move it, and find that the file is now also back in the downloads folder, because Deluge is still hanging onto it. Closing the session/open file makes the file disappear.
    When I add a torrent, the file will start downloading to \\FS01\Data\Downloads\Incomplete, and when done it will move to \\FS01\Data\Downloads. From there I will remove the torrent from Deluge, and move it to \\FS01\Data\Media\ and then a subfolder
    All of these folders are on the same pool. \\FS01\Data\Downloads is NOT duplicated; the rest of the pool is.
    Does anyone have any suggestions on how I should resolve this?
  2. Like
    Minaleque reacted to hansolo77 in Building new server from scratch!   
    Hey all!  I've been working on learning how to set up and install WSE 2012 R2 and I think I'm finally there.  However, I'm feeling very limited in my capacity as my case is overflowing with drives.  So I'm going to be building a new server from the ground up and would like some advice and suggestions as to what I should get.  I'm only a part-time worker and don't really make a lot of money, so the purchasing timeline is going to be really long.  So far, I know for certain that I'm going to buy the Norco 4224 case.  I've read reviews and it has been recommended time and time again.  The only issues with this case appear to be the fans, and the potential for the backplanes to be DOA.  But at the price, it's a steal compared to other similar cases.  Plus, I'm going to get it through Newegg, as they're the cheapest place around, and they offer really quick RMAs for exchanges if there's something wrong with it.
    Now I'm at the point of internals.  The first order of business is the motherboard.  Form factor isn't really an issue as the Norco case fully supports a whole range.  Individual features are where I'm struggling.  I know that I want to have room for expansion.  So I'm kinda staying away from those Mini ITX boards, since they, for the most part, all seem to have only 1 expansion slot.  I already have 1 SAS controller, and plan on getting an expander.  So that would be 2 slots.  As for the processor, I'm not sure what I need.  My usage scenario is a simple home file server for client backups and media streaming.  So I suppose I don't need anything major.  The same goes for RAM. 
    As it is right now, I'm thinking about getting one of these:

    SUPERMICRO MBD-X9SCL-F-O LGA 1155 Intel C202 Micro ATX Intel Xeon E3 Server Motherboard
    SUPERMICRO MBD-A1SAM-2550F-O uATX Server Motherboard FCBGA 1283 DDR3 1600/1333
    The first board is nice, in that it has plenty of expansion slots, support for Xeon processors, and is cheap.  However, it requires ECC memory (max 32 GB) and only has SATA 3.0 Gb/s.  The second board is nice, in that it has just enough expansion slots for my needs (controller and expander), already comes with a processor, supports (but does not require) ECC memory of max 64 GB, and has 2x SATA 6.0 Gb/s, but is a little more expensive.
    So which board should I get?  Integrated CPU and more RAM, or more slots, less RAM, and mandatory ECC?  Or should I look at something else?  What are your suggestions?

    EDIT ->  I just looked at and am now also adding this contender:
    SUPERMICRO MBD-A1SAM-2550F-O uATX Server Motherboard FCBGA 1283 DDR3 1600/1333
    It's got more of everything: more SATA 6.0 Gb/s ports, TONS more RAM, and support for a faster CPU, but it's also more expensive.  To put it in perspective, this new board would probably take a month to save up for.
  3. Like
    Minaleque reacted to ConnectionProblem in Fix File Permissions Windows 10   
    First of all been using your products for 3 years and they are great. Thank you for your support of a niche capability that I loved about WHS.
    I recently reformatted, installed Windows 10, and restored my pool, but my file permissions are a little messed up. Writing to child folders that existed pre-reinstall requires a User Account Control confirmation, and some programs pop an error saying I don't have access to write to them.
    Thank you
  4. Like
    Minaleque reacted to whispers in Slow download/upload speeds and sudden stops   
    So I've been using CloudDrive for some time, and now it has suddenly started acting strangely.
    I'm on a 1 Gbit internet connection, and for some reason when uploading to Google Drive it will not go over 16.5 MB/s up and not above 5 MB/s down. The software isn't saying anything about being throttled. When uploading/downloading it keeps its speed for 10-12 seconds and then just stops; 20-30 seconds later it starts uploading again, stops after 10-12 seconds, and keeps going in that loop. I have no idea what's going on, but it's really not usable.
    Is there anything I can do here?
  5. Like
    Minaleque reacted to freefaller in Encrypted Azure Blob Storage   
    Just wondering how this works with CloudDrive.
    Is there any point enabling it at all, given that the data is already encrypted?
  6. Like
    Minaleque reacted to Alex in Large Chunks and the I/O Manager   
    In this post I'm going to talk about the new large storage chunk support in StableBit CloudDrive BETA, why that's important, and how StableBit CloudDrive manages provider I/O overall.
    Want to Download it?
    Currently, StableBit CloudDrive BETA is an internal development build and like any of our internal builds, if you'd like, you can download it (or most likely a newer build) here:
    The I/O Manager
    Before I start talking about chunks and what the current change actually means, let's talk a bit about how StableBit CloudDrive handles provider I/O. Well, first let's define what provider I/O actually is. Provider I/O is the combination of all of the read and write request (or download and upload requests) that are serviced by your provider of choice. For example, if your cloud drive is storing data in Amazon S3, provider I/O consists of all of the download and upload requests from and to Amazon S3.
    Now it's important to differentiate provider I/O from cloud drive I/O, because provider I/O is not really the same thing as cloud drive I/O. That's because all I/O to and from the drive itself is performed directly in the kernel by our cloud drive driver (cloudfs_disk.sys). But as a result of some cloud drive I/O, provider I/O can be generated. For example, this happens when there is an incoming read request to the drive for some data that is not stored in the local cache. In this case, the kernel driver cooperatively coordinates with the StableBit CloudDrive system service in order to generate provider I/O and to complete the incoming read request in a timely manner.
    All provider I/O is serviced by the I/O Manager, which lives in the StableBit CloudDrive system service.
    Particularly, the I/O Manager is responsible for:
    • Coalescing incoming provider read and write I/O requests into larger requests, as an optimization.
    • Parallelizing all provider read and write I/O requests using multiple threads.
    • Retrying failed provider I/O operations.
    • Error handling and error reporting logic.
    Chunks
    Now that I've described a little bit about the I/O manager in StableBit CloudDrive, let's talk chunks. StableBit CloudDrive doesn't inherently work with any types of chunks. They are simply the format in which data is stored by your provider of choice. They are an implementation that exists completely outside of the I/O manager, and provide some convenient functions that all chunked providers can use.
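    The request coalescing performed by the I/O Manager can be illustrated with a short sketch. This is not StableBit's actual code; the (offset, length) representation and the merge rule are assumptions made purely for illustration:

```python
# Hypothetical sketch of provider-I/O request coalescing (not StableBit's code).
# Adjacent or overlapping read requests are merged into one larger request,
# so fewer round trips reach the provider.

def coalesce(requests):
    """Merge (offset, length) read requests that touch or overlap."""
    merged = []
    for offset, length in sorted(requests):
        if merged and offset <= merged[-1][0] + merged[-1][1]:
            # Extend the previous request instead of issuing a new one.
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, max(prev_off + prev_len, offset + length) - prev_off)
        else:
            merged.append((offset, length))
    return merged

# Three contiguous 1 MB requests become one 3 MB request; the distant
# 4 KB request stays separate.
print(coalesce([(0, 1048576), (1048576, 1048576), (2097152, 1048576), (8388608, 4096)]))
```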
    How do Chunks Work?
    When a chunked cloud provider is first initialized, it is asked about its capabilities, such as whether it can perform partial reads, whether the remote server is performing proper multi-threaded transactional synchronization, etc... In other words, the chunking system needs to know how advanced the provider is, and based on those capabilities it will construct a custom chunk I/O processing pipeline for that particular provider.
    The chunk I/O pipeline provides automatic services for the provider such as:
    • Whole and partial caching of chunks for performance reasons.
    • Performing checksum insertion on write, and checksum verification on read.
    • Read or write (or both) transactional locking for cloud providers that require it (for example, never try to read chunk 458 when chunk 458 is being written to).
    • Translation of I/O that would end up being a partial chunk read / write request into a whole chunk read / write request, for providers that require this. This is actually very complicated:
      • If a partial chunk needs to be read, and the provider doesn't support partial reads, the whole chunk is read (and possibly cached) and only the part needed is returned.
      • If a partial chunk needs to be written, and the provider doesn't support partial writes, then the whole chunk is downloaded (or retrieved from the cache), only the part that needs to be written to is updated, and the whole chunk is written back.
      • If, while this is happening, another partial write request comes in for the same chunk (in parallel, on a different thread), and we're still in the process of reading that whole chunk, then the [whole read -> partial write -> whole write] sequence is coalesced into [whole read -> multiple partial writes -> whole write]. This is purely done as an optimization and is also very complicated.
    And in the future the chunk I/O processing pipeline can be extended to support other services as the need arises.
    Large Chunk Support
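    The partial-to-whole translation described in the list above can be sketched roughly as follows. This is purely illustrative (not StableBit's code); a dict stands in for a provider that can only read and write whole chunks:

```python
# Hypothetical sketch of partial-write-to-whole-chunk translation.
# `store` plays the role of a provider with no partial-write support:
# every update requires reading the whole chunk, patching it, and
# writing the whole chunk back.

CHUNK_SIZE = 1048576  # 1 MB chunks

def write_partial(store, chunk_id, offset, data):
    """Write `data` at `offset` inside a chunk on a whole-chunk-only provider."""
    chunk = bytearray(store.get(chunk_id, b"\x00" * CHUNK_SIZE))  # whole-chunk read
    chunk[offset:offset + len(data)] = data                       # partial update
    store[chunk_id] = bytes(chunk)                                # whole-chunk write

store = {}
write_partial(store, 458, 100, b"hello")
assert store[458][100:105] == b"hello"
```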
    Speaking of extending the chunk I/O pipeline, that's exactly what happened recently with the addition of large chunk support (> 1 MB) for most cloud providers.
    Previously, most cloud providers were limited to a maximum chunk size of 1 MB. This limit was in place because:
    • Cloud drive read I/O requests which can't be satisfied by the local cache generate provider read I/O that needs to be satisfied fairly quickly. For providers that didn't support partial reads, this meant that the entire chunk needed to be downloaded at all times, no matter how much data was being read.
    • Additionally, if checksumming was enabled (which would be typical), then by necessity, only whole chunks could be read and written.
    This had some disadvantages, mostly for users with fast broadband connections:
    • Writing a lot of data to the provider would generate a lot of upload requests very quickly (one request per 1 MB uploaded). This wasn't optimal because each request adds some overhead.
    • Generating a lot of upload requests very quickly was also an issue for some cloud providers that limit their users based on the number of requests per second, rather than the total bandwidth used. Using smaller chunks with a fast broadband connection and a lot of threads would generate a lot of requests per second.
    Now, with large chunk support (up to 100 MB per chunk in most cases), we don't have those disadvantages.
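    To put the request overhead in perspective, here's a back-of-the-envelope calculation. The 1 MB and 100 MB chunk sizes come from this post; the 10 GB upload is just an example figure:

```python
# Back-of-the-envelope: how many upload requests does 10 GB of new data cost?
import math

data_mb = 10 * 1024  # 10 GB expressed in MB

small_chunks = math.ceil(data_mb / 1)    # 1 MB chunks -> one request per MB
large_chunks = math.ceil(data_mb / 100)  # 100 MB chunks

print(small_chunks, large_chunks)  # 10240 103
```

    Roughly a hundredfold fewer requests, which matters most to providers that rate-limit by requests per second rather than bandwidth.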
    What was changed to allow for large chunks?
    • In order to support the new large chunks, a provider has to support partial reads. That's because it's still very necessary to ensure that all cloud drive read I/O is serviced quickly.
    • Support for a new block-based checksumming algorithm was introduced into the chunk I/O pipeline. With this new algorithm it's no longer necessary to read or write whole chunks in order to get checksumming support. This was crucial because it is very important to verify that your data in the cloud is not corrupted, and turning off checksumming wasn't a very good option.
    Are there any disadvantages to large chunks?
    • If you're concerned about using the least possible amount of bandwidth (as opposed to making fewer provider calls per second), it may be advantageous to use smaller chunks. If you know for sure that you will be storing relatively small files (1-2 MB per file or less) and you will only be updating a few files at a time, there may be less overhead when you use smaller chunks.
    • For providers that don't support partial writes (most cloud providers), larger chunks are more optimal if most of your files are > 1 MB, or if you will be updating a lot of smaller files all at the same time.
    • As far as checksumming goes, the new algorithm completely supersedes the old one and is enabled by default on all new cloud drives. It really has no disadvantages over the old algorithm. Older cloud drives will continue to use the old checksumming algorithm, which really only has the disadvantage of not supporting large chunks.
    Which providers currently support large chunks?
    I don't want to post a list here because it would become obsolete as we add more providers. When you create a new cloud drive, look under Advanced Settings. If you see a storage chunk size > 1 MB available, then that provider supports large chunks.
    Going Chunk-less in the Future?
    I should mention that StableBit CloudDrive doesn't actually require chunks. If it at all makes sense for a particular provider, it really doesn't have to store its data in chunks. For example, it's entirely possible that StableBit CloudDrive will have a provider that stores its data in a VHDX file. Sure, why not. For that matter, StableBit CloudDrive providers don't even need to store their data in files at all. I can imagine writing providers that can store their data on an email server or a NNTP server (a bit of a stretch, yes, but theoretically possible).
    In fact, the only thing that StableBit CloudDrive really needs is some system that can save data and later retrieve it (in a timely manner). In that sense, StableBit CloudDrive can be a general purpose drive virtualization solution.
  7. Like
    Minaleque reacted to Alex in How to Contribute Test Data   
    I've started putting up disk and controller test data in this forum, as it relates to the StableBit Scanner's ability to gather SMART and Identify data using various disk controllers.
    Direct I/O Test
    "Direct I/O" is a set of technologies that the StableBit Scanner uses to read data directly from the disk. I collect my test results using an internal Direct I/O testing tool.

    You can get the latest version here: Download
    The tool will probe your disk and controller for various forms of data that the StableBit Scanner uses and will display either a green check mark or a red X to indicate whether the probe was successful. At the bottom, it will list the probing "methods" that were successfully used to probe the controller / disk.
    If you're interested in contributing your test data to this forum, then just run the tool and select a disk that is connected to the controller that you want to probe.
    Make sure that your computer is not doing anything else while probing. There is a small chance that the probing process will crash your system.
  8. Like
    Minaleque reacted to athunt in Access Denied: Drivepool can't access folders   
    EDIT: OK, actually the problem could have been caused by me manually moving those files from, say, drive L to F, or by setting a rule on L to disallow all those file types. I just realized that on drive L in the pool those folders can't be accessed at all. So that is where the issue lies. I will play with it some more and see if I can figure it out.
    L is just a cache drive (with a CloudDrive and some cache files), so I may just remove it and re-add the drive.

    So I have seen this twice in the past 6 months or so since I got it. I recently manually copied files inside the hidden pool folder from one drive to another (on the same pool). I copied a top-level folder with many files inside of it. After doing so (and I'm pretty sure this is what caused it, though I haven't confirmed it), I can no longer access those folders from the mounted POOL drive. Though I can manually go into the individual drives and still access those folders (luckily).
    When I try to change the properties or owner I get an error: "You must have read permissions to view the object properties", or "Unable to view current owner". I am the sole user on the system. The first time I saw this error I was able to just change the owner to fix those access denied errors. Now I can't even do that. I tried to manually run a command to do something similar,
    ICACLS "J:\MyFiles" /INHERITANCE:e /GRANT:r <UserName>:F /T /C (from some googling). That didn't work.
    I tried the StableBit utility Wss.Troubleshoot_1.0.0.165.exe to fix NTFS permissions, which then attempted to set all permissions to Everyone with full control. That also failed.
    I am at a loss for what to do. The only thing I can think of is to copy all the files out of the top-level folder from inside the hidden pool folders, delete the folder, recreate it, and manually drag the files back in (working inside the hidden folders on the drives instead of the pool).
    Have you encountered anything like this? I will probably attempt to do that, but I wanted to get this out there in case I see it again or there's already an answer. I saw some past threads on this, but mostly they said to do what I tried above.
    Great software! This seems to be the only thing I ever see that's a little annoying, and it may just be due to me going against normal processes to copy files.
  9. Like
    Minaleque reacted to AllMyData in Possibly incorrect SMART warning reading   
    I have a disk that has a SMART warning within Scanner. It appears to have a Spin Retry Count of 65536; however, the manufacturer's software (HGST Windows Drive Fitness Test) states I have a Spin Retry Count of 95, this after both a short test and an extended test. I'm more inclined to believe the manufacturer's data, but clearly this disagrees with Scanner.
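    Not a diagnosis of this particular drive, but one common explanation for readings like this: SMART raw values are 48-bit, and vendors often pack more than one field into them. The 16-bit split below is an assumption made for illustration; the point is that 65536 (0x10000) has all zeros in its low 16 bits, which is consistent with one tool including high-order bytes that another tool ignores:

```python
# Illustrative only: interpreting a suspicious SMART "raw" reading.
# Assumption (hypothetical): the actual retry counter lives in the low
# 16 bits of the 48-bit raw value, and higher bytes hold other
# vendor-specific data.

raw = 65536          # the value Scanner reported

print(hex(raw))      # 0x10000
print(raw & 0xFFFF)  # low 16 bits (the likely real counter) -> 0
print(raw >> 16)     # leftover high-order data -> 1
```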
  10. Like
    Minaleque reacted to Christopher (Drashna) in SSD Optimizer Balancing Plugin   
    By default, it should respect the settings.  You can read about why, specifically, here:
    But basically, the file placement rules should respect the real-time placement limiters (which could be a problem here), and the balancing plugins will respect the file placement rules.
    If you uncheck the "File placement rules respect real-time placement limits set by the balancing plug-ins" option, it should definitely write to the Archive drives when the file placement rules "demand" it.  But with your config, it should work fine as well.
  11. Like
    Minaleque reacted to Val3ntin in How To: Get SMART data passed on from ESXI 5.1 Host   
    Based on the feedback from the community, here is how to get your ESXi Host to pass on SMART data for Scanner in your guest VMs.
    All you need to do is the following, and then any disk (not USB) you plug in thereafter will be available for RDM:
    1. In the ESXi Console window, highlight your server
    2. Go to the Configuration tab
    3. Under Software, click Advanced Settings
    4. Click RdmFilter
    5. Uncheck the box for RdmFilter.HbaIsShared
    6. Click OK
    Then use the Advanced Settings for StableBit Scanner to enable "UnsafeDirectIo" to get the SMART data from the virtual controller:
    And make sure that "UnsafeDirectIo" is set to "True", and reboot.
    *Note: UnsafeDirectIo is "unsafe" for a reason. It is possible that it can cause issues or glitches, or in extremely rare conditions it can cause BSODs. In a large majority of cases, these issues don't occur, but it is a possibility. So definitely "at your own risk".
    Original Post:

    Hi Guys,

    I have a Dell Precision T3500 Workstation that is running VMware ESXi 5.1.0. On this host I have created 2 virtual machines, both Windows Server 2012 Standard. One of these is running StableBit Scanner v.
    My problem is that it does not show SMART status, temperatures, or anything else for any of my drives. This is all the data I get (see picture). Is there something I need to install on my ESXi host, or is this just not possible on my setup because I use VMware?

    This is what I have on the host server:

    Thank you in advance..
  12. Like
    Minaleque reacted to Alex in SSD Optimizer Balancing Plugin   
    I've just finished coding a new balancing plugin for StableBit DrivePool; it's called the SSD Optimizer. This was actually a feature request, so here you go.
    I know that a lot of people use the Archive Optimizer plugin with SSD drives and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario.
    The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus your SSDs would serve as a kind of super fast write buffer for the pool.
    The new functionality of the SSD Optimizer is that now it's able to fill your Archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "Feeder" disks.
    Check out the attached screenshot of what it looks like.
    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt
    Download: http://stablebit.com/DrivePool/Plugins
    Edit: Now up on stablebit.com, link updated.

  13. Like
    Minaleque reacted to Alex in Forum Downtime   
    Our forum, wiki and blog web sites experienced an issue with the database server that caused those sites to be down for the past 34 hours. The issue has been resolved and everything is back up and running. I'm sorry for the inconvenience.
    StableBit.com, the download server, and software activation services were not affected.