Reputation Activity

  1. Thanks
    Umfriend got a reaction from Christian in Unplugging 1 of 2 drives in one pool to provide third drive port to upgrade capacity in another (double drive) smaller pool?   
    I don't really know. I speculate that MB-SATA may not be designed to be optimal. For instance, I do not know whether they can actually deliver 600MB/s on all ports simultaneously. I guess it could be found through Yahoo, but I'm not sure. As for PCIe SATA cards, I have read that they can deliver lackluster performance, as in shared bandwidth depending on the actual number of PCIe lanes they use, but never that they'd interfere with MB-SATA. Again, I don't actually know, and I am sure that there are different qualities out there.
    But really, I think your PCIe SATA card will be fine and give no issues. It should work for your transition. I'd leave the card in the PC once done so that, should you ever need it, you have two ports readily available. The SAS HBA route is one I would recommend if you expect storage to grow as measured by the number of drives. For me, it works like a charm, and as these are, AFAIK, all made with enterprise servers in mind, I am pretty comfortable about performance, compatibility and endurance.
  2. Like
    Umfriend got a reaction from sicboy in please help ! all of files in drivepool gone to Others!   
    OK. That you have multiple PoolPart.* folders on H and K is a clear issue. That you don't have them on P and M is weird. And then there are the PoolPart.* files which shouldn't be there.
    Not sure what to do here. Transferring files, removing drives from the Pool through the GUI, reformatting and re-adding is a possibility, but it takes a long time. Perhaps better to contact support (https://stablebit.com/Support) or wait for a better volunteer here.
    Another scenario, but I am not sure if that would work well, is:
    1. Remove the suspect drives from the Pool through the GUI
    2. From each PoolPart.* folder on those drives, check whether they have any contents. If they don't, delete them. If they do, rename the folders.
    3. Add the drives to the Pool
    4. You will now see a new PoolPart.* folder. For each of the four drives, move the contents from the renamed PoolPart.* folder(s) to the new PoolPart.* folder according to this: StableBit DrivePool Q4142489 (follow this closely; you will need to stop the DrivePool service and start it again when done)
    5. Do a re-measure
    I *think* this will work but....
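    As a rough sketch of step 4 above, a small Python script could merge a renamed PoolPart folder's contents into the new one while keeping the relative folder structure. The paths in the example are hypothetical, and per Q4142489 the DrivePool service should be stopped before moving anything:

    ```python
    import shutil
    from pathlib import Path

    def merge_poolpart(old_root: str, new_root: str) -> None:
        """Move everything from a renamed (old) PoolPart folder into the
        new PoolPart folder, preserving the relative folder structure.
        Files that already exist at the destination are left in place so
        nothing is silently overwritten."""
        old, new = Path(old_root), Path(new_root)
        for src in old.rglob("*"):
            if src.is_dir():
                continue  # directories are created on demand below
            dest = new / src.relative_to(old)
            dest.parent.mkdir(parents=True, exist_ok=True)
            if dest.exists():
                print(f"skipped (already exists): {dest}")
                continue
            shutil.move(str(src), str(dest))

    # Example with hypothetical paths (stop the DrivePool service first):
    # merge_poolpart(r"H:\PoolPart.old-renamed", r"H:\PoolPart.1234abcd")
    ```

    This is only a sketch; doing the move by hand in Explorer works just as well.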
  3. Like
    Umfriend got a reaction from sicboy in please help ! all of files in drivepool gone to Others!   
    If you haven't yet, a reboot never hurts. And then, try to determine whether you have two hidden PoolPart.* folders on a single drive (like P:\ for instance) and whether you can open a file directly from within such a PoolPart.* folder. Once we know that, we can go from there.
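    To make that check easier, a short Python sketch could list the PoolPart.* folders in each drive root; a healthy pooled drive has exactly one. The drive letter in the example is hypothetical:

    ```python
    from pathlib import Path

    def find_poolparts(drive_root: str) -> list:
        """Return the names of the PoolPart.* folders found in a drive's
        root. More than one suggests the problem described above.
        (glob finds the folders even when they carry the hidden attribute,
        since they are matched by name, not by visibility.)"""
        return sorted(p.name for p in Path(drive_root).glob("PoolPart.*")
                      if p.is_dir())

    # Example with a hypothetical drive letter:
    # parts = find_poolparts(r"P:\\")
    # if len(parts) > 1:
    #     print("Multiple PoolPart folders found:", parts)
    ```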
  4. Like
    Umfriend reacted to gtaus in Hard drive enclosure or NAS?   
    I used to run a Windows Storage Spaces server for about 7 years. Over the last 2 years, as my pool kept growing larger and larger, I had more and more problems with Storage Spaces. I spent a long time considering other options, including FreeNAS. I talked to people who were running, or used to use, FreeNAS and learned that FreeNAS has problems like Storage Spaces when the pool gets large.
    At that time, I was just over 80TB on the pool and having significant problems with Storage Spaces that I did not have when the pool was much smaller. The people I talked to about FreeNAS told me similar stories, it worked fine to a point and then when the pool got larger, they started having significant problems. In fact, the guys I talked to had already given up on FreeNAS and moved on to other options.
    I moved on to DrivePool and my experience has been much better. I am now over 80TB on my DrivePool server and, so far, have not seen the problems I experienced with Storage Spaces. There are some things I miss about the "promise" of Storage Spaces, but in real life, the performance of Storage Spaces falls short. My friends running FreeNAS told me the same story with using FreeNAS.
    I am not claiming that DrivePool is perfect, but it just seems to work better for me. After adding an SSD to DrivePool as a front-end cache, I now get write speeds that exceed my Storage Spaces setup. If you choose to duplicate some folders in DrivePool, then you have the option of using Read Striping, and that can almost double your read speed in some scenarios.
    However, I chose DrivePool over other options not because it was faster, but rather because it just worked better for me. When a pool drive fails in DrivePool, you only lose the data on that one drive, not the entire pool (as happened to me in Storage Spaces). If you have duplication set on either the entire pool or just certain folders, you can rebuild the pool from the duplicated data. Also, when I have had HDDs fail, sometimes most of the data on that drive is still available and can be transferred back to the pool. In one instance, I had only 2 or 3 corrupt files on a 3TB HDD that was failing. I was able to move all good files off the drive before it finally, totally, failed.
  5. Thanks
    Umfriend got a reaction from sicboy in building new server in 2021 needs an advice   
    Hi Newbie! ;D,
    DP needs very little, I had it running on an Intel Celeron G530 and could stream 1080p to at least one device. So a cheap build with, say, a Ryzen 3 3200G, 8GB of RAM, a decent 350W PSU and W10 would work like a charm as a file/stream server. The things you'd probably look for are SATA connectors (cheap boards often have only 4). Although you could get a cheap SAS card (IBM 1015 or somesuch, used.) which would provide plenty of expandability. The other thing is the network connection. 1Gb Ethernet, I think, should be "enough for anybody".
    It is a bit of a bad time as CPUs are in high demand relative to production capacity. There was a time, which will come again, when you could get a satisfactory CPU for like US$60.
    Edit and PS: Your English is fine. Just use capitals to start and periods to end a sentence and it'll be great.
  6. Thanks
    Umfriend got a reaction from AMCross in quick question on hard drive setup   
    It's a bit of a long shot but perhaps on the E and F drives, the PoolPart.* folder is not hidden and on the D and G drives it is? You can set Windows Explorer to show hidden files and folders.
  7. Like
    Umfriend got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    I think a 2nd Pool for non-critical data using old hardware is an excellent idea.
  8. Like
    Umfriend got a reaction from gtaus in File Placement, how to add new HDD for only backup files   
    I have never tried to use File Placement Rules, but as I understand it, if you tell DP to place \movies on that 6TB HDD then it will, and it will also spill over to other drives when the 6TB HDD is full. DP will not, I think, throw a no-space error. Not sure how to ensure that no other data arrives on that drive; that might need a lot of FPRs, but it may be easy.
    Having said that, I would not add a drive I am suspicious about if, unlike me, you do not use duplication. Do you have Scanner? Maybe run the Seagate tool again.
  9. Thanks
    Umfriend reacted to Reid Rankin in WSL 2 support   
    Here's the DrivePool tracking issue; it appears to have been resolved in a later version.
  10. Like
    Umfriend got a reaction from TeleFragger in how to determine what drive failed   
    I read this as asking how to identify the actual physical drives in the case. I physically label my drives and stack them in the server according to their labels. Without something like that, I have no clue how you would be able to identify the physical drives...
  11. Like
    Umfriend got a reaction from cocksy_boy in Forced out of FlexRaid Transparent raid. Coming to Drivepool + Snapraid, need some infos.   
    Moving data to the Pool while retaining the data on the same drive is called seeding and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive.
    I am not sure but I would think you would first set settings, then do the seeding.
    (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will read the drives on the spot and indeed show the "sum". Duplicate filenames will be an issue, I think. I think that when DP measures the Pool, it will either delete one copy (if the name, size and timestamp are the same, I believe) or otherwise inform you of some sort of file conflict. This is something you could actually test before you do the real move (stop the service, create a spreadsheet "Test.xlsx", save it directly to a PoolPart.*\some folder on one of the drives, edit the file, save it directly to PoolPart.*\some folder on another drive, start the service and see what it does?).
    DP does not go mad with same folder names, some empty, some containing data. In fact, as a result of balancing, it can cause this to occur itself.
    I have no clue about snapraid. I would speculate that you first create and populate the Pool, let DP measure and rebalance and then implement snapraid. Not sure though. You may have to read up on this a bit and there is plenty to find, e.g. https://community.covecube.com/index.php?/topic/1579-best-practice-for-drivepool-and-snapraid/.
  12. Like
    Umfriend got a reaction from Adramelramalech in eXtreme bottlenecks   
    Have you checked Event Viewer, and what model is this exactly?
    And if the data you want in the Pool is already on the disks you want to add to the Pool, then there is a much faster way of getting them in the Pool.
  13. Like
    Umfriend reacted to srcrist in (pre-sales) Spesific info on what services SB CloudDrive can interface with?   
    I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this.
    The overview page on the web site actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can just install it and look at the list that way, if you'd like.
    CloudDrive does not support Teamdrives/shared drives because their API support and file limitations make them incompatible with CloudDrive's operation. Standard Google Drive and GSuite drive accounts are supported.
    The primary tradeoff from a tool like rClone is flexibility. CloudDrive is a proprietary system using proprietary formats that have to work within this specific tool in order to do a few things that other tools do not. So if flexibility is something you're looking for, this probably just isn't the solution for you. rClone is a great tool, but its aims, while similar, are fundamentally different than CloudDrive's. It's best to think of them as two very different solutions that can sometimes accomplish similar ends--for specific use cases. rClone's entire goal/philosophy is to make it easier to access your data from a variety of locations and contexts--but that's not CloudDrive's goal, which is to make your cloud storage function as much like a physical drive as possible.
    I don't work for Covecube/Stablebit, so I can't speak to any pricing they may offer you if you contact them, but the posted prices are $30 and $40 individually, or $60 for the bundle with Scanner. So there is a reasonable savings to buying the bundle, if you want/need it.
    There is no file-based limitation. The limitation on a CloudDrive is 1PB per drive, which I believe is related to driver functionality. Google recently introduced a per-folder file number limitation, but CloudDrive simply stores its data in multiple folders (if necessary) to avoid related limitations.
    Again, I don't work for the company, but, in previous conversations about the subject, it's been said that CloudDrive is built on top of Windows' storage infrastructure and would require a fair amount of reinventing the wheel to port to another OS. They haven't said no, but I don't believe that any ports are on the short or even medium term agenda.
    Hope some of that helps.
  14. Like
    Umfriend got a reaction from StepSideways in Mixed speed disks   
    So I think it is a matter of use case and personal taste. IMHO, just use one Pool, especially if you're going to replace the 5900rpm drives over time anyway. I assume you run things over a network. As most are still running over 1Gbit networks (or slower), even the 5900rpm drives won't slow you down.
    I've never used the SSD Optimizer plugin, but yeah, it is helpful for writing, not reading (except for the off-chance that the file you read is still on the SSD). But even then it would need a data path that is faster than 1Gbit all the way.
    What you could do is test a little by writing to the disks directly, outside the Pool, in a scenario that resembles your use case. If you experience no difference, just use one Pool; it makes management a lot easier. If anything, I would wonder more about duplication (do you use that?) and real backups.
  15. Thanks
    Umfriend got a reaction from gd2246 in Why is balancing so slow? Is it just me, or like this for everyone?   
    In the DP GUI, see the two arrows to the right of the balancing status bar? If you press those, it will increase the I/O priority of DP. May help some. Other than that, ouch! Those are more like SMR speeds.
  16. Thanks
    Umfriend got a reaction from RBeatse in Pool drives seen as “regular drives”?   
    Pools behave as if they are regular NTFS-formatted volumes. However, any software that uses VSS (which many backup solutions do) is not supported. I don't know Crashplan, so I couldn't say. Having said that, you could back up the underlying drives. If you use duplication, then Hierarchical Pools can ensure that you only back up one instance of the duplicates.
  17. Like
    Umfriend got a reaction from Todash in Adding drives with data still showing as other   
    Have you tried remeasuring?
  18. Like
    Umfriend got a reaction from Shane in 2nd request for help   
    Use Remove. You can move files through Explorer, but if you do that you need to stop the DrivePool service first. Moreover, once you start the DP service again, it may try to rebalance files back to other drives, so you need to turn off balancing to prevent that from happening. Also, if you have duplication, then you want to disable that first. Yes, it will all take some time, but it has, AFAIK, never failed. Quick and dirty, though... not that failsafe sometimes. And even cutting/pasting will take quite some time.
  19. Like
    Umfriend got a reaction from Sammy in Using Drives inside Pool?   
    No, that is just fine. There is no issue with adding a disk to a Pool and then placing data on that disk alongside the Pool (i.e. outside the hidden PoolPart.* folder on that drive).
  20. Thanks
    Umfriend got a reaction from Remcroft in Possible to copy directly to poolpart folders without issues?   
    In principle, yes. Not sure how to guarantee that they will stay there due to rebalancing, unless you use file placement rules.
  21. Like
    Umfriend got a reaction from vfsrecycle_kid in Can/Does DrivePool preemptively calculate the total free space post-replication-rebalance?   
    No. If, and only if, the entire Pool had a fixed duplication factor, then it *could* be done. E.g., 1TB of free space means you can save 0.5TB of net data with x2 duplication, or 0.33TB with x3 duplication, etc. However, as soon as you mix duplication factors, well, it really depends on where the data lands, doesn't it? So I guess they chose to only show actual free space without taking duplication into account. Makes sense to me. Personally, I over-provision all my Pools (a whopping two in total ;D) such that I can always evacuate the largest HDD. Peace of mind and continuity rule in my book.
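    The fixed-factor case above is simple enough to sketch in a couple of lines of Python (a hypothetical helper, just to illustrate the arithmetic):

    ```python
    def net_free_space(free_tb: float, duplication_factor: int) -> float:
        """Net amount of new data that fits, given raw free space and a
        fixed pool-wide duplication factor (x2 stores every file twice)."""
        return free_tb / duplication_factor

    # With 1 TB of raw free space:
    # x2 duplication -> 0.5 TB of net data
    # x3 duplication -> ~0.33 TB of net data
    ```

    With mixed duplication factors, no single such number exists, which is presumably why the GUI shows raw free space.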
  22. Like
    Umfriend got a reaction from TeleFragger in My Rackmount Server   
    Yeah, WS2019 missing the Essentials role sucks. I'm running WSE2016 and I have no way forward so this will be what I am running until the end of days probably....
    But wow, nice setup!
    With the HBA card, can you get the HDDs to spin down? I tried with my Dell H310 (some 9210 variant IIRC) but no luck.
  23. Confused
    Umfriend got a reaction from TeleFragger in 2 pools - best setup method?   
    I am not exactly sure what you want to accomplish. Do you want duplication and fast reads? You might want to consider using hierarchical pools, something like:
    Pool A: 12 x 500GB SSD
    Pool B: 2x4TB + 2x2TB + 1x500GB SSD
    Pool C: Pool A + Pool B
    I would think that writes go fast (Pool A SSD only, Pool B uses the SSD Cache) and that reads go fast as well as they would read from Pool A effectively (even if the request goes out to Pool C).
    The downside is that you can only store about 6TB duplicated.
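    The ~6TB figure follows from the sub-pool sizes; a tiny Python sketch (hypothetical helper, x2 duplication with one copy in each sub-pool assumed) shows the arithmetic:

    ```python
    def duplicated_capacity(pool_a_tb: float, pool_b_tb: float) -> float:
        """With x2 duplication across two sub-pools (one copy kept in
        each), usable capacity is limited by the smaller sub-pool."""
        return min(pool_a_tb, pool_b_tb)

    pool_a = 12 * 0.5            # 12 x 500GB SSD        = 6.0 TB
    pool_b = 2 * 4 + 2 * 2 + 0.5 # 2x4TB + 2x2TB + 500GB = 12.5 TB
    # duplicated_capacity(pool_a, pool_b) -> 6.0 (TB), the ~6TB noted above
    ```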
  24. Like
    Umfriend got a reaction from Christopher (Drashna) in DrivePool not filling new empty disks immediately?   
    So when you add a 6TB HDD to that setup, and assuming you have not tinkered with the balancing settings, any _new_ files would indeed be stored on that 6TB HDD. A rebalancing pass, which you can start manually, will fill it up as well. With default settings, DP will try to ensure that each disk has the same amount of free space. It would therefore write to the 6TB first until 4TB is free, then equally to the 6TB and 4TB until both have 3TB free, etc. The 500GB HDD will see action only when the others have 500GB or less available.
    This is at default settings and without duplication.
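    That default "equalize free space" behavior amounts to always targeting the disk with the most free space; a minimal Python sketch of the idea (hypothetical sizes, not DP's actual implementation):

    ```python
    def pick_target_disk(free_space: dict) -> str:
        """Sketch of the default placement: new files go to whichever
        disk currently has the most free space, so free space across
        disks converges over time."""
        return max(free_space, key=free_space.get)

    # Hypothetical pool: empty 6TB, a 4TB with 2TB free, a 500GB with 0.2TB free
    disks = {"6TB": 6.0, "4TB": 2.0, "500GB": 0.2}
    # pick_target_disk(disks) -> "6TB", until its free space drops to match
    ```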
  25. Like
    Umfriend got a reaction from TeleFragger in moving drives around   
    Yes. I have never tried it, but DP should not need drive letters. You can also mount drives to folders so that you can still easily explore them. Not sure exactly how that works, but there are threads on this forum.