Covecube Inc.

Leaderboard

Popular Content

Showing content with the highest reputation since 04/10/20 in all areas

  1. hammerit

    WSL 2 support

    I tried to access my DrivePool drive via WSL 2 and got this. Any solution? I'm using 2.3.0.1124 BETA.
      ➜ fludi cd /mnt/g
      ➜ g ls
      ls: reading directory '.': Input/output error
    Related thread: https://community.covecube.com/index.php?/topic/5207-wsl2-support-for-drive-mounting/#comment-31212
    4 points
  2. malse

    WSL2 Support for drive mounting

    Hi, I'm using Windows 10 2004 with WSL2. I have 3 drives: C:\ (SSD), E:\ (NVMe), D:\ (DrivePool of 2x 4TB HDD). When the drives are mounted on Ubuntu, I can run ls -al and it shows all the files and folders on the C and E drives. This is not possible on D. When I run ls -al on D, it returns 0 results, but strangely enough I can still cd into the directories on D. Is this an issue with DrivePool being mounted? It seems to be the only logical difference (aside from it being mechanical) between the other drives. They are all NTFS.
    4 points
  3. My advice: contact support and send them Troubleshooter data. Christopher is very keen on resolving problems around the "new" Google way of handling folders and files.
    3 points
  4. Shane

    NTFS Permissions and DrivePool

    Spend long enough working with Windows and you may become familiar with NTFS permissions. As an operating system intended to handle multiple users, Windows maintains records that describe which user owns each file and folder and how much access each user has to those files and folders. These records are kept on the same volumes as those files and folders. Unfortunately, in the course of moving or copying folders and files around, Windows may fail to properly update these settings for a variety of reasons (e.g. bugs, bit errors, power interruptions, failing hardware). This can mean files a
    2 points
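    For readers hitting this: a minimal sketch of the kind of repair being described, using only Windows' built-in tools (the drive letter P: is a placeholder, not something from the post). Run from an elevated Command Prompt, and note that /reset discards custom ACLs in favour of inherited ones.

      rem Take ownership of everything under P:\ (placeholder), recursively
      takeown /f P:\ /r /d y
      rem Replace all ACLs with the inherited defaults, recursing and continuing on errors
      icacls P:\ /reset /t /c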
  5. gtaus

    Removing drive from pool

    Have you determined what speed your TV streaming device pulls movies from your Storage Spaces or DrivePool? For example, when I watch my DrivePool GUI, I can see that my Fire TV Stick is pulling ~4 MB/s tops for streaming 1080p movies. I don't suffer any stuttering or caching on my system. If I try to stream movies >16GB, then I start to see problems and caching issues. But, at that point, I know I have reached the limits of my Fire TV Stick with its limited memory storage and its low power processor. It is not a limit of how fast DrivePool can send data over my wifi. Well, there
    2 points
  6. Thanks guys, I plucked up the courage and went to beta on all apps and connected to the new cloud thing. Looking good!
    2 points
  7. This is a topic that comes up from time to time. Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces. However, displaying those drives in a meaningful way in the UI, while maintaining the surface and file system scans at the same time, is NOT simple. At best, it will require a drastic change, if not an outright rewrite, of the UI. And that's not a small undertaking. So, can we? Yes. But do we have the resources to do so? Not as much (we are a very small company)
    2 points
  8. I've also been bad about checking the forums. It can get overwhelming, which makes it more difficult to do. But that's my resolution this year: to make a big effort to keep up with the forum.
    2 points
  9. (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455)
    2 points
  10. methejuggler

    Plugin Source

    I actually wrote a balancing plugin yesterday which is working pretty well now. It took a bit to figure out how to make it do what I want. There's almost no documentation for it, and it doesn't seem very intuitive in many places. So far, I've been "combining" several of the official plugins together to make them actually work together properly. I found the official plugins like to fight each other sometimes. This means I can have SSD drop drives working with equalization and disk usage limitations with no thrashing. Currently this is working, although I ended up re-writing most of the ori
    2 points
  11. Managed to fix this today as my client was giving errors also. Install the beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344). Reboot. Don't start CloudDrive and/or the service. Add the below to this file: C:\ProgramData\StableBit CloudDrive\Service\Settings.json
      "GoogleDrive_ForceUpgradeChunkOrganization": {
        "Default": true,
        "Override": true
      }
    Start the service & CloudDrive. It should kick in straight away. I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this hel
    2 points
  12. I see this hasn't had an answer yet. Let me start off by just noting for you that the forums are really intended for user-to-user discussion and advice, and you'd get an official response from Alex and Christopher more quickly by using the contact form on the website (here: https://stablebit.com/Contact). They only occasionally check the forums when time permits. But I'll help you out with some of this. The overview page on the website actually has a list of the compatible services, but CloudDrive is also fully functional for 30 days to just test any provider you'd like. So you can jus
    2 points
  13. Unintentional Guinea Pig Diaries. Day 8 - Entry 1 I spent the rest of yesterday licking my wounds and contemplating a future without my data. I could probably write a horror movie script on those thoughts but it would be too dark for the people in this world. I must learn from my transgressions. In an act of self-punishment and an effort to see the world from a different angle, I slept in the dog's bed last night. He never sleeps there anyways but he would never have done the things I did. For that I have decided he holds more wisdom than his human. This must have pleased the Data God'
    2 points
  14. After posting, I found an issue I had missed: the disk was marked as read-only in Disk Management. After running DISKPART from cmd I managed to remove the read-only flag using the command attributes disk clear readonly and it appears to be OK now.
    2 points
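    For anyone hitting the same thing, a minimal sketch of that DISKPART sequence (disk 1 is a placeholder; list disk shows which number is actually yours):

      diskpart
      DISKPART> list disk
      DISKPART> select disk 1
      DISKPART> attributes disk clear readonly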
  15. Unintentional Guinea Pig Diaries. Day 7 - Entry 1 OUR SUPREME LEADER HAS SPOKEN! I received a response to my support ticket though it did not provide the wisdom I seek. You may call it an "Aladeen" response. My second drive is still stuck in "Drive Queued to perform recovery". I'm told this is a local process and does not involve the cloud, yet I don't know how to push it to do anything. The "Error saving data. Cloud drive not found" at this time appears to be a UI glitch and is not correct, as any changes that I make do take regardless of the error window. This morning I discove
    2 points
  16. Unintentional Guinea Pig Diaries. Day 5 - Entry 2 *The Sound of Crickets* So I'm in the same spot as I last posted. My second drive is still at "Queued to perform Recovery". If I knew how to force a reauth right now I would, so I could get it on my API, or at the very least get it out of "queued". Perhaps our leaders will come back to see us soon. Maybe this is a test of our ability to suffer more during COVID. We will soon find out. End diary entry.
    2 points
  17. gtaus

    2nd request for help

    I have only been using DrivePool for a short period, but if I understand your situation, you should be able to open the DrivePool UI and click on "Remove" for the drives you no longer want in the pool. I have done this in DrivePool and it did a good job of transferring the files from the "removed" drive to the other pool drives. However, given that nowadays we have large HDDs in our pools, the process takes a long time. Patience is a virtue. Another option is to simply view the hidden files on those HDDs you no longer want to keep in DrivePool, and then copy them all over to the one dri
    2 points
  18. gtaus

    Hard drive enclosure or NAS?

    I used to run a Windows Storage Spaces server for about 7 years. The last 2 years, as my pool kept growing larger and larger, I had more and more problems with Storage Spaces. I spent a long time considering other options including FreeNAS. I talked to people who were running, or used to run, FreeNAS and learned that FreeNAS has problems like Storage Spaces when the pool gets large. At that time, I was just over 80TB on the pool and having significant problems with Storage Spaces that I did not have when the pool was much smaller. The people I talked to about FreeNAS told me similar stories; it w
    1 point
  19. Yes. But YMMV. Mostly, having the cache on a different drive means that you're separating out the I/O load, which can definitely improve performance. Though, this depends on the SSDs you're using. Things like AHCI vs NVMe, and the IOPS rating of the drive make more of a difference.
    1 point
  20. Scanner is missing a couple of column options that exist in the app: Age and Power, both of which I use. Can these be added? Ping and Bay are also missing.
    1 point
  21. Same here. The bonus with DrivePool is that if it fails you don't lose data, just the pooled drive itself. Worst comes to worst, just reinstall the earlier version again.
    1 point
  22. Shane

    Permissions Confusion?

    The poolpart folders do not need to inherit their permissions from their respective volume roots (though they default to doing so on a newly created pool using previously unformatted drives). The SYSTEM account must have full control of the poolpart folder, subfolders and files. The Administrators account is recommended to have full control of same. For more details, I have just created this thread.
    1 point
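    To make that concrete, a hedged sketch using Windows' built-in icacls (the folder name PoolPart.xxxx and drive D: are placeholders; the real hidden folder has a long unique suffix):

      rem Give SYSTEM full control of the poolpart folder, subfolders and files
      icacls "D:\PoolPart.xxxx" /grant "SYSTEM:(OI)(CI)F" /t
      rem The same for Administrators, as recommended
      icacls "D:\PoolPart.xxxx" /grant "Administrators:(OI)(CI)F" /t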
  23. Shane

    NTFS Permissions and DrivePool

    Edit: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 is the normal, StableBit-approved method to reset NTFS permissions. It should suffice in most cases, and it's easier to do than my method, so try it first. To fix broken NTFS permissions I now use a freeware program called SetACL because - at least in my experience - it properly supports both long paths and Unicode and can fix damaged security records that Windows' built-in Security tab and utilities like takeown and icacls can't (for reasons that personally I boil down to "unicode and long paths were added to Windows partly via the progra
    1 point
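    For reference, a minimal sketch of the sort of SetACL invocation meant here, assuming SetACL's documented -on/-ot/-actn/-rec switches (the path D:\Pool is a placeholder):

      rem Set the owner to Administrators on a folder tree, recursing into containers and objects
      SetACL.exe -on "D:\Pool" -ot file -actn setowner -ownr "n:Administrators" -rec cont_obj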
  24. It can't be added. The "RAID" setup presents the resulting drive to the OS as a volume - it has no SMART data. The individual disks may well be accessible as hardware devices, but they do not present as accessible volumes/drives. WD Dashboard works because it is specifically designed to access the relevant hardware at a low level. Each manufacturer does this a little differently, so it's probably not feasible to add to Scanner, which works on a software level.
    1 point
  25. Example of my current setup with hierarchical pools: I have my end-of-life disks all in one pool, and each of the other pools has 2 drives each, purchased at the same time (I placed the purchase dates in the name to tell them apart more easily). One of the 8TB Seagate drives already has a pending sector on it, so I don't trust it (so no unduplicated content). Will probably RMA that soon. The 14TB drives are solid so far. The new Western Digital 8TB drives are still being scrubbed by the scanner, so I have it set not to place content on them yet until that finishes. Due to some str
    1 point
  26. I would go with option 1 - as you only have two drives with duplication they are the "same", so it does not matter which one you choose - VSS will not be copied etc. Option 3 would work but will be slower than 1. Option 2 - avoid; cloning is liable to give you problems. And yes - shut down any service that's writing to the pool before you start - more for maintaining the best speed. An internal copy will be approx 1TB per 3 hrs - give or take - remember speed will vary by file size (lots of small files: very slow; large files: quick) and where the data is on the disk - I suspect that the new 8TB wil
    1 point
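    For scale, that ballpark checks out: 1 TB per 3 hours is roughly 1,000,000 MB / 10,800 s ≈ 93 MB/s, about the sustained sequential throughput of a single mechanical drive.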
  27. methejuggler

    Plugin Source

    I actually just thought of a solution for this which doesn't require a plugin! I could make separate pools for each of the drives I bought at the same time and NOT set duplication on these, and then make one big pool that only consists of those smaller pools and only set duplication on the one big pool. Then it would duplicate between pools, and ensure that the duplicates are on different groups of drives. I'll probably do the same with any disks nearing their end of life. Place all near EoL disks in one pool to make sure it doesn't duplicate files on multiple near EoL disks.
    1 point
  28. methejuggler

    Plugin Source

    Hybrid SSDs are nice for normal use, but in mixed-mode operating environments (i.e. a NAS with several users) they get overwhelmed pretty quickly and start thrashing. There's also the problem of things like DrivePool mixing all your content up across the different drives, so the drive replaces your cached documents with a movie you decide to watch, and then the documents are slow again, even though you weren't going to watch the movie more than once. If there was a way to specify to only cache often written/edited files for increased speed, then maybe? But I think that would still run into issues
    1 point
  29. methejuggler

    Plugin Source

    Of course, I have Backblaze for cloud backup too, but re-downloading 10+ TB of data that could have been protected better locally isn't ideal. I'm glad to hear you've had good luck so far, but don't fool yourself - multiple drive failure happens. Keep in mind that drives fail more when they're used more. The most common situation of multiple drive failure is that one drive fails, and you need to restore those files from your redundancies. During the restore process, another drive fails due to the increased use. The most simultaneous failures I've heard of is 4 (not to me)... but tha
    1 point
  30. Hi gtaus, I can see that your account is following this thread, so hopefully you'll get notified about this response. Maybe check that https://community.covecube.com/index.php?/notifications/options/ is set to your liking?
    1 point
  31. Checking, it seems the ST6000DM003 is an SMR drive. I don't recommend putting it in any pool where you want decent rewrite performance, but if you only want good read performance it's fine.
    1 point
  32. Reid Rankin

    WSL 2 support

    Here's the DrivePool tracking issue; it appears to have been resolved in version 2.3.0.1193.
    1 point
  33. Moving data to the pool while retaining the data on the same drive is called seeding, and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive. I am not sure, but I would think you would first set settings, then do the seeding. (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will read the drives on the spot and indeed show the "sum". Duplicate filenames will be an issue, I think. I think that D
    1 point
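    A minimal sketch of pausing the service around a seeding run, assuming the Windows service is named DrivePoolService (check services.msc for the exact name on your install):

      net stop DrivePoolService
      rem ... move folders into the hidden PoolPart.* folder on each drive ...
      net start DrivePoolService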
  34. I'd suggest a tool called Everything, by Voidtools. It'll scan the disks (defaults to all NTFS volumes) then just type in a string (e.g. "exam 2020" or ".od3") and it shows all files (you can also set it to search folder names as well) that have that string in the name, with the complete path. Also useful for "I can't remember what I called that file or where I saved it, but I know I saved it on the 15th..." problems.
    1 point
  35. I pass through the HBA for Drivepool. I use Dell Perc H310 cards and the SMART data is all visible, as it should be because my Windows VM has direct access to the HBA. edit: Wrong Chris I know, but hopefully helpful?
    1 point
  36. Just to clarify for everyone here, since there seems to be a lot of uncertainty: The issue (thus far) is only apparent when using your own API key. However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around, as for instance the web client does have the same limitation. Previous versions (also) do not conform with this limit (and go WELL over the 500k limit). Yes, there has definitely been a change on Google's side that implemented this new limitation. Although there may be issues with the current beta (it is a
    1 point
  37. Same here, on multiple PCs. Complete removal and reinstallation does not solve the problem.
    1 point
  38. In the DP GUI, see the two arrows to the right of the balancing status bar? If you press that, it will increase the I/O priority of DP, which may help some. Other than that, ouch! Those are more like SMR speeds.
    1 point
  39. KingfisherUK

    My Rackmount Server

    So, nearly two and a half years down the line and a few small changes have been made: Main ESXi/Storage Server
      Case: LogicCase SC-4324S
      OS: VMware ESXi 6.7
      CPU: Xeon E5-2650L v2 (deca-core)
      Mobo: Supermicro X9SRL-F
      RAM: 96GB (6 x 16GB) ECC RAM
      GFX: Onboard Matrox (+ Nvidia Quadro P400 passed through to Media Server VM for hardware encode/decode)
      LAN: Quad-port Gigabit PCIe NIC + dual on-board Gigabit NIC
      PSU: Corsair CX650
      OS Drive: 16GB USB stick
      IBM M5015 SAS RAID controller with 4 x Seagate IronWolf 1TB RAID5 array for ESXi datastores (
    1 point
  40. Hi all, I have been using CloudDrive for a while now and I am very happy with it; nevertheless, I have a question: is it also possible to script it? At the moment I have configured DrivePool and CloudDrive on my personal server. CloudDrive is my solution for a backup to the cloud, but as I don't want to have the drive always mounted (viruses, ransomware, etc.), I'd like to script it. I already have a script which syncs my data to the CloudDrive, but I'd like to enhance it so it mounts the drive. And it would be very nice if it could be dismounted when cloud synchronization is completed.
    1 point
  41. There are some inherent flaws with USB storage protocols that would preclude it from being used as a cache for CloudDrive. You can see some discussion on the issue here: I don't believe they ever added the ability to use one. At least not yet.
    1 point
  42. A 4k (4096 byte) cluster size supports a maximum volume size of 16TB. Thus, adding an additional 10TB to your existing 10TB with that cluster size would exceed the maximum limit for the file system, so that resize simply won't be possible. Volume size limits are as follows:
      Cluster Size    Maximum Partition Size
      4 KB            16 TB
      8 KB            32 TB
      16 KB           64 TB
      32 KB           128 TB
    1 point
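    For the curious, the ceiling follows from NTFS addressing clusters with 32-bit cluster numbers: max volume size = 2^32 clusters × cluster size, e.g. 2^32 × 4 KB = 16 TB. Doubling the cluster size doubles the limit, which is exactly the pattern in the table above.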
  43. Welcome! And yeah, it's a really nice way to set up the system. It hides the drives and keeps them accessible at the same time.
    1 point
  44. Thanks for the response. It turns out I was clicking on the down arrow, and that did not give the option to enable auto scanning. So after reading your response, I clicked on the "button" itself and it toggled to enabled. Problem solved. Auto scanning immediately started, so I know that it is working. Thanks.
    1 point
  45. Edward

    Removing drive from pool

    I fully recognize that the current issue is not mine (but I'm the OP); however, I would highly appreciate: 1. Knowing how to find out which files on the drives are unduplicated. 2. That this thread is updated with the recommended processes/commands to follow when a problem occurs, or a link to such processes/commands. Cheers, Edward
    1 point
  46. Wow, nice digging! And sorry for not getting back here sooner! Also, for the permissions, this should work too: http://wiki.covecube.com/StableBit_DrivePool_Q5510455
    1 point
  47. Alex

    check-pool-fileparts

    If you're not familiar with dpcmd.exe, it's the command line interface to StableBit DrivePool's low level file system and was originally designed for troubleshooting the pool. It's a standalone EXE that's included with every installation of StableBit DrivePool 2.X and is available from the command line. If you have StableBit DrivePool 2.X installed, go ahead and open up the Command Prompt with administrative access (hold Ctrl + Shift from the Start menu), and type in dpcmd to get some usage information. Previously, I didn't recommend that people mess with this command because it wasn't rea
    1 point
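    For anyone trying this, a hedged sketch of the sort of session described (P: is a placeholder for your pool drive; run dpcmd with no arguments to see the real usage text and exact parameters):

      rem From an elevated Command Prompt: show usage information
      dpcmd
      rem The command this thread is about, pointed at a pool drive
      dpcmd check-pool-fileparts P:\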
  48. Full stop. No. The Read Striping may improve performance, but we still have the added overhead of reading from NTFS volumes, and the performance profile for doing so. RAID 1 works at a block level, and it's able to split IO requests between the disks. So one disk could read the partition table information while the second disk starts reading the actual file data. This is a vast oversimplification, but a good illustration of what happens. So, while we may read from both disks in parallel, there are a number of additional steps that we have to perfor
    1 point
  49. This is part of the problem with the way that the SSD Optimizer balancer works. Specifically, it creates "real time placement limiters" to limit which disks new files can be placed on. I'm guessing that the SSD is below the threshold set for it (75% by default, so ~45-50GB). Increasing the limit on the SSD may help this (but lowering it may as well, as this would force the pool to place files on the other drives rather than on the SSD). Additionally, there are some configuration changes that may help make the software more aggressively move data off of the drive. h
    1 point
  50. I've got a few different setups (NASes, Storage Pools, a conventional JBOD server and a couple of DrivePools). My largest DrivePool is currently configured at 88.2TB usable, and I've got a couple of 6TB drives not in that figure used for parity (SnapRAID). This pool is strictly for media, mostly used by Plex. I've got 4 Windows 2012 R2 servers and two are currently dedicated to multimedia. 2 are more generic servers and hold a lot of VHDX images using Windows de-duplication. Then I've got a few smaller NAS boxes used to hold typical family stuff. But getting back to DrivePool, I'll be incre
    1 point
