Covecube Inc.

Leaderboard

Popular Content

Showing content with the highest reputation since 09/18/20 in all areas

  1. Just a heads up: yes, it does continue to attempt to upload, but we use an exponential backoff (sketched below) when the software receives throttling responses like this. However, a daily limit or scheduler is something that has been requested and is on our "to do"/feature request list. I just don't have an ETA for when it would be considered or implemented.
    2 points
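    A hedged illustration of that backoff pattern in Python - a minimal sketch of the general technique, not Covecube's actual implementation (the upload_chunk callable and the retry/delay numbers are assumptions):

        import random
        import time

        def upload_with_backoff(upload_chunk, max_retries=8, base_delay=1.0, cap=300.0):
            # upload_chunk() is assumed to return True on success and False
            # when the provider signals throttling (e.g. HTTP 429/403).
            for attempt in range(max_retries):
                if upload_chunk():
                    return True
                # Exponential backoff with jitter: ~1s, 2s, 4s, ... capped at 5 minutes.
                delay = min(cap, base_delay * 2 ** attempt)
                time.sleep(delay * random.uniform(0.5, 1.5))
            return False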
  2. Shane

    NTFS Permissions and DrivePool

    Spend long enough working with Windows and you may become familiar with NTFS permissions. As an operating system intended to handle multiple users, Windows maintains records that describe which user owns each file and folder and how much access each user has to them. These records are kept on the same volumes as the files and folders they describe. Unfortunately, in the course of moving or copying folders and files around, Windows may fail to properly update these settings for a variety of reasons (e.g. bugs, bit errors, power interruptions, failing hardware). This can leave files and folders that you can no longer delete, access or even take ownership of, sometimes for no obvious reason when you check via the Security tab of the file/folder's Properties (they can look fine but actually be broken inside).

    First up, the attached screenshots show what the default permissions should look like for a pool drive, and for the hidden poolpart folder on any drive added to the pool. They are taken from a freshly created pool using a previously unformatted drive, on a computer named CASTLE that is using Windows 10 Professional; I believe it should be the same for all supported versions of Windows so far.

    The rules:
    - Any entries that are marked Deny override entries that are marked Allow. There are limited exceptions for SYSTEM.
    - It is optional for a hidden poolpart folder to inherit its permissions from its parent drive.
    - It is recommended that the Administrators account have Full control of all poolpart folders, subfolders and files.
    - It is necessary that the SYSTEM account have Full control of all poolpart folders, subfolders and files.
    - The permissions of files and folders in a pool drive are the permissions of those files and folders in the constituent poolpart folders. Caveat: duplicates are expected to have identical permissions (because in normal practice, only DrivePool should be creating them).

    My next post in this thread will describe how I go about fixing these permissions when they go bad. (A quick command-line way to review them is sketched after this post.)
    2 points
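    A hedged companion to the post above: assuming Windows' built-in icacls tool and the standard hidden PoolPart.* folder layout, a small Python helper like this could dump each poolpart folder's ACL for comparison against the defaults described. The drive list is a made-up example:

        import subprocess
        from pathlib import Path

        POOLED_DRIVES = ["D:\\", "E:\\", "F:\\"]  # hypothetical; list your own pooled drives

        for drive in POOLED_DRIVES:
            for part in Path(drive).glob("PoolPart.*"):
                print(f"=== {part} ===")
                # icacls prints the folder's access control entries for review.
                subprocess.run(["icacls", str(part)], check=False)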
  3. gtaus

    Removing drive from pool

    Have you determined what speed your TV streaming device pulls movies from your Storage Spaces or DrivePool? For example, when I watch my DrivePool GUI, I can see that my Fire TV Stick is pulling about 4 MB/s tops for streaming 1080p movies, and I don't suffer any stuttering or caching on my system. If I try to stream movies >16GB, then I start to see problems and caching issues. But at that point, I know I have reached the limits of my Fire TV Stick, with its limited memory storage and low power processor. It is not a limit of how fast DrivePool can send data over my wifi (the arithmetic is sketched below).

    Well, there is how many bars are available to indicate how strong the connection is, but bars do not equal speed. On my old 56K router, I would also have 4 or 5 bars indicating a strong connection, but I was constantly fighting buffering issues while streaming. I upgraded to a 1 gigabit router, which is much faster, and that took care of my buffering problems.

    Well, good questions, but beyond my level of tech expertise with that equipment. I get my internet service from a local telephone company, and they have a computer support team on staff to answer questions and help customers with their equipment. If you are leasing your equipment from ATT, then they might have a support team you could contact for assistance. At least you have something that is currently working for you, so it's not like you are in a panic.

    After years of running Storage Spaces on my system, and now with DrivePool for just less than 1 year, I don't yet understand why you are experiencing streaming issues with DrivePool. On my system, it made no difference at all in regards to streaming, which as I have stated runs at about 4 MB/s tops and usually much less.
    2 points
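    For context on the numbers above, a quick back-of-the-envelope in Python (the 4 MB/s figure is from the post; the gigabit figure is the router's nominal rate):

        # Rough streaming arithmetic: megabytes/s to megabits/s.
        stream_mb_s = 4                   # observed Fire TV Stick pull rate, MB/s
        stream_mbit_s = stream_mb_s * 8   # = 32 Mbit/s
        gigabit_router = 1000             # Mbit/s, nominal
        print(f"{stream_mbit_s} Mbit/s used of {gigabit_router} Mbit/s available")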
  4. Thanks guys I plucked up the courage and went to beta on all apps and connected to the new cloud thing. Looking good!
    2 points
  5. This is a topic that comes up from time to time. Yes, it is possible to display the SMART data from the underlying drives in Storage Spaces. However, displaying those drives in a meaningful way in the UI, while maintaining the surface and file system scans at the same time, is NOT simple. At best, it will require a drastic change, if not an outright rewrite, of the UI. And that's not a small undertaking. So, can we? Yes. But do we have the resources to do so? Not as much (we are a very small company).
    2 points
  6. I've also been bad about checking the forums. It can get overwhelming, which makes it more difficult to do. But that's my resolution this year: to make a big effort to keep up with the forum.
    2 points
  7. (also: https://wiki.covecube.com/StableBit_DrivePool_Q5510455 )
    2 points
  8. methejuggler

    Plugin Source

    I actually wrote a balancing plugin yesterday which is working pretty well now. It took a bit to figure out how to make it do what I want - there's almost no documentation for it, and it doesn't seem very intuitive in many places.

    So far, I've been "combining" several of the official plugins together to make them actually work together properly. I found the official plugins like to fight each other sometimes. This means I can have SSD drop drives working with equalization and disk usage limitations with no thrashing. Currently this is working, although I ended up re-writing most of the original plugins from scratch anyway, simply because they wouldn't combine very well as originally coded. Plus, the disk space equalizer plugin had some bugs which made it easier to rewrite than fix. I wasn't able to combine the scanner plugin - it seems to be obfuscated and baked into the main source, which made it difficult to see what it was doing.

    Unfortunately, the main thing I wanted to do doesn't seem possible as far as I can tell. I had wanted it to be able to move files based on their creation/modified dates, so that I could keep new/frequently edited files on faster drives and move files that aren't edited often to slower drives. I'm hoping maybe they can make this possible in the future.

    Another idea I had hoped to implement was to create drive "groups" and have it avoid putting duplicate content on the same group. The idea behind that is that drives purchased at the same time are more likely to fail around the same time, so if I avoid putting both duplicates of a file on those drives, there's less likelihood of losing files in the case of multiple drive failure from the same group of drives. This also doesn't seem possible right now.
    2 points
  9. Managed to fix this today as my client was giving errors also.

    1. Install the beta version from here: http://dl.covecube.com/CloudDriveWindows/beta/download/ (I used 1344).
    2. Reboot. Don't start CloudDrive and/or the service.
    3. Add the below to this file: C:\ProgramData\StableBit CloudDrive\Service\Settings.json

        "GoogleDrive_ForceUpgradeChunkOrganization": {
          "Default": true,
          "Override": true
        }

    4. Start the service & CloudDrive. It should kick in straight away.

    I have 42TB in GDrive and it went through immediately. Back to uploading as usual now. Hope this helps. (A scripted version of step 3 is sketched after this post.)
    2 points
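    For anyone scripting step 3 above: a minimal sketch, assuming Settings.json is a plain JSON object at the path given in the post (back the file up first; the key and values are exactly as quoted above):

        import json
        from pathlib import Path

        settings = Path(r"C:\ProgramData\StableBit CloudDrive\Service\Settings.json")

        data = json.loads(settings.read_text())
        # Insert the override exactly as described in the post above.
        data["GoogleDrive_ForceUpgradeChunkOrganization"] = {"Default": True, "Override": True}
        settings.write_text(json.dumps(data, indent=2))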
  10. After posting, I found an issue I had missed: the disk was marked as Read Only in Disk Management. After running DISKPART from cmd, I managed to remove the read-only flag (using the command "attributes disk clear readonly" after selecting the affected disk) and it appears to be OK now.
    2 points
  11. Carl

    Repeating Checksum error

    I'm working on evaluating the StableBit suite (all downloaded and installed about 9 days ago) for home use and have run into an issue with CloudDrive using OneDrive, which is key to my plans. It seems to be stuck in a repeating loop of errors for days and I'm not sure how to resolve the problem. The chunk offset hasn't changed. The status bar is showing as green, so this log and the feedback are the only indicators of problems. Not sure that it makes any difference, but the CloudDrive is included in a pool (DrivePool) for redundancy. Any additional information I can provide, please let me know. Thanks

    1:40:04.4: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
    1:40:04.4: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
    1:40:08.8: Warning: 0 : [ChecksumBlocksChunkIoImplementation:96] Expected checksum mismatch for chunk 24393, ChunkOffset=0x00500140, ExpectedChecksum=0x2f1c206e5f68aeea, ComputedChecksum=0x717acf0107f90f0f.
    1:40:09.5: Warning: 0 : [WholeChunkIoImplementation:96] Error on read when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
    1:40:09.5: Warning: 0 : [WholeChunkIoImplementation:96] Error when performing master partial write. Checksum mismatch. Data read from the provider has been corrupted.
    1:40:09.5: Warning: 0 : [IoManager:96] Error performing read-modify-write I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.

    [The same retry cycle repeats every ~5 seconds through 1:40:34.0, always for chunk 24393 at ChunkOffset=0x00500140 with the same ExpectedChecksum/ComputedChecksum values.]
    1 point
  12. Thanks, I ran these commands and now all sorted:

        takeown /f "S:\System Volume Information"
        icacls "S:\System Volume Information" /grant "SERVERNAME\USERNAME":F
        takeown /f "S:\System Volume Information\WPSettings.dat"
        icacls "S:\System Volume Information\WPSettings.dat" /grant "SERVERNAME\USERNAME":F
        del "S:\System Volume Information\WPSettings.dat"
        icacls "S:\System Volume Information" /setowner "NT Authority\System"
        icacls "S:\System Volume Information" /remove "SERVERNAME\USERNAME"
    1 point
  13. The OS writes files by going (approximately) "create/open entry for file on drive's index, stream data to file from program, write details (e.g. blocks used so far) to drive's index, repeat previous two steps until the program says it's done or the program says it's encountered an error or the drive runs out of room or insert-other-condition-here, write final details in the index and close the entry for the file". Or in even simpler terms: at the system level all files are written one block at a time, no matter how many blocks they'll eventually involve.

    Now, a workaround for programs that deal with fixed file sizes is to ask the OS in advance "how much free space is on drive X" so that they can know whether there's going to be room before they start writing (well, "know" as in "guess", because other programs might also write to the drive and then it becomes a competition); this check is sketched below. But the catch there is that when the OS in turn asks the drive pool "how much free space do you have", DrivePool reports its total free space rather than the free space of any particular physical drive making up the pool. This is because it can't know why it's being asked how much free space it has (DP: "am I being asked because a user wants to know, or because a program wants to write one big file, or because a program wants to write multiple small files, or because of some other reason - and oh, also, if I've got any placement rules those might affect my answer too?" OS: "I don't know").
    1 point
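    A hedged sketch of that preflight check, using Python's standard library. Note that when pointed at a pooled drive, the figure returned is the pool-wide free space described above, not that of any single underlying disk (the drive letter and size are placeholders):

        import shutil

        def has_room(drive: str, needed_bytes: int) -> bool:
            # Best-effort preflight: ask the OS how much free space the drive
            # reports. For a DrivePool volume this is the pool's total free
            # space, so one large file may still not fit on any single disk.
            return shutil.disk_usage(drive).free >= needed_bytes

        # Example: check for ~25 GB of room before starting a big copy.
        print(has_room("P:\\", 25 * 1024**3))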
  14. Not every cloud provider is going to be listed. And that's okay. For the public beta, we focused on the most prolific and popular cloud storage providers. If you don't see a specific provider in StableBit CloudDrive, let us know and we'll look into it. If you can provide a link to the SDK/API, it would be helpful, but it's not necessary. Just because you see it listed here does not mean we will add the provider. Whether we add them or not depends on a number of factors, including time to develop them, stability, reliability, and functionality, among other factors.

    Providers already requested:
    - Mega: https://stablebit.com/Admin/IssueAnalysis/15659
    - SharePoint: https://stablebit.com/Admin/IssueAnalysis/16678
    - WebDAV
    - IceDrive
    - OwnCloud (SabreDAV): https://stablebit.com/Admin/IssueAnalysis/16679
    - OpenStack Swift: https://stablebit.com/Admin/IssueAnalysis/17692
    - OpenDrive: https://stablebit.com/Admin/IssueAnalysis/17732
    - Yandex.Disk: https://stablebit.com/Admin/IssueAnalysis/20833
    - EMC Atmos: https://stablebit.com/Admin/IssueAnalysis/25926
    - Strato HiDrive: https://stablebit.com/Admin/IssueAnalysis/25959
    - Citrix ShareFile: https://stablebit.com/Admin/IssueAnalysis/27082
    - Email support (IMAP, maybe POP): https://stablebit.com/Admin/IssueAnalysis/27124 - May not be reliable, as this heavily depends on the amount of space that the provider allows, and some providers may prune messages that are too old, go over the quota, etc.
    - JottaCloud: https://stablebit.com/Admin/IssueAnalysis/27327 - May not be usable, as there is no publicly documented API.
    - FileRun: https://stablebit.com/Admin/IssueAnalysis/27383 - FileRun is a self-hosted, PHP based cloud storage solution. Free version limited to 3 users; enterprise/business licensing available for more users/features. JSON based API.
    - SpiderOak: https://stablebit.com/Admin/IssueAnalysis/27532 - JSON based API. Privacy minded.
    - pCloud: https://stablebit.com/Admin/IssueAnalysis/27939 - JSON based API.
    - OVH Cloud: https://stablebit.com/Admin/IssueAnalysis/28204
    - StorJ: https://stablebit.com/Admin/IssueAnalysis/28364 - Providers like tardigrade.io appear to use this library/API.
    - iDrive: Amazon S3 compatible API, so no need for a separate provider.
    - ASUS WebStorage: https://stablebit.com/Admin/IssueAnalysis/28407 - Documentation... is difficult to read, making it hard to tell if support is possible.
    - Apple iCloud: https://stablebit.com/Admin/IssueAnalysis/28548

    Providers that will not be added:
    - Degoo: No publicly accessible API. Without an API that supports reading and writing files to the provider, there is no possibility of adding support for this provider.
    - Sync.com: No publicly accessible API. Without an API that supports reading and writing files to the provider, there is no possibility of adding support for this provider.
    - Amazon Glacier: https://stablebit.com/Admin/IssueAnalysis/16676 - There is a 4+ hour wait time to access uploaded data, which makes Amazon Glacier completely unusable to us. We couldn't even perform upload verification on content due to this limitation.
    - HubiC: https://stablebit.com/Admin/IssueAnalysis/16677 - It's OpenStack, so no need for a separate provider.
    - CrashPlan: https://stablebit.com/Admin/IssueAnalysis/15664 - The API provided appears to be mostly for monitoring and maintenance. No upload/download calls, so not suitable for StableBit CloudDrive, unfortunately.
    - MediaFire: Not suitable due to stability issues.
    - Thunderdrive: No publicly accessible API.
    - LiveDrive: No publicly accessible API.
    - Zoolz: No publicly accessible (developer) API.
    1 point
  15. Doing checkdisk fixed my corrupted file; I'm doing a remeasure again, as DrivePool didn't finish remeasuring because of the corrupted file. Any ideas what may be causing DrivePool to prefer to write on the older HDD?
    1 point
  16. Rob Platt

    Reparse.covefs.* files

    So far so good. Looks like my issues boiled down to reparse files and/or something wrong with DropBox (which I've since found is no longer really supported in DrivePool). Thanks for the help @Shane
    1 point
  17. gtaus

    Hard drive enclosure or NAS?

    I used to run a Windows Storage Spaces server for about 7 years. For the last 2 years, as my pool kept getting larger and larger, I had more and more problems with Storage Spaces. I spent a long time considering other options, including FreeNAS. I talked to people who were running, or used to use, FreeNAS and learned that FreeNAS has problems like Storage Spaces when the pool gets large. At that time, I was just over 80TB on the pool and having significant problems with Storage Spaces that I did not have when the pool was much smaller. The people I talked to about FreeNAS told me similar stories: it worked fine to a point, and then when the pool got larger, they started having significant problems. In fact, the guys I talked to had already given up on FreeNAS and moved on to other options.

    I moved on to DrivePool and my experience has been much better. I am now over 80TB on my DrivePool server and, so far, have not seen the problems I experienced with Storage Spaces. There are some things I miss about the "promise" of Storage Spaces, but in real life, the performance of Storage Spaces falls short. My friends running FreeNAS told me the same story about using FreeNAS. I am not claiming that DrivePool is perfect, but it just seems to work better for me. After adding an SSD to DrivePool as a front end cache, I now get write speeds that exceed my Storage Spaces setup. If you choose to duplicate some folders in DrivePool, then you have the option of using Read Striping, and that can almost double your read speed in some scenarios.

    However, I chose DrivePool over other options not because it was faster, but rather because it just worked better for me. When a pool drive fails in DrivePool, you only lose the data on that one drive, not the entire pool (as happened to me in Storage Spaces). If you have duplication set on either the entire pool or just certain folders, you can rebuild the pool from the duplicated data. Also, when I have had HDDs fail, sometimes most of the data on that drive is still available and can be transferred back to the pool. In one instance, I had only 2 or 3 corrupt files on a 3TB HDD that was failing, and I was able to move all the good files off the drive before it finally, totally, failed.
    1 point
  18. Nicely done. Looks like Stablebit Scanner has more sorting options than Hard Disk Sentinel.
    1 point
  19. Tiru

    Removing a nested pool

    For anybody who comes across this thread in the future with a similar issue: I wasn't able to ultimately figure out how this occurred but I decided to remove the E:\ pool from the D:\ pool using the default removal settings and that removed the extra drive letter. A SnapRAID scan of all the constituent drives that comprise my E: pool didn't find any unexpected changes, so it appears to have been a safe operation given limited testing.
    1 point
  20. Yes. But YMMV. Mostly, having the cache on a different drive means that you're separating out the I/O load, which can definitely improve performance. Though this depends on the SSDs you're using: things like AHCI vs NVMe, and the IOPS rating of the drive, make more of a difference.
    1 point
  21. Well, it may be worth testing on the latest beta versions, as there are some changes that may affect this, due to driver changes. If you're already on the beta, or the beta doesn't help, please open a ticket at https://stablebit.com/Contact so we can see if we can help fix this issue.
    1 point
  22. Same here. The bonus with DrivePool is that if it fails you don't lose data, just the pooled drive itself. Worst comes to worst, just reinstall the earlier version again.
    1 point
  23. Yep - beta apps; new ones were posted today.
    1 point
  24. You have not stated how much data you have in your DrivePool. I just use the default DrivePool Balancers, and over time, the drives will fill up with data more or less equally. If you just started moving data onto DrivePool, it may be that not all drives have been used yet. The DrivePool GUI gives a visual graph of the usage of your drives. If there is not much difference, I would not be concerned. If you see one drive half full and another empty, then you might need to check which Balancers you have turned on and your Balancing settings in the DrivePool GUI. I have my settings to automatically balance the drives as needed.
    1 point
  25. gtaus

    Removing drive from pool

    If setting the drive volume label to match the drive's serial number works for you, then stay with it. My approach is different and works better for me. First of all, I currently have 18 USB HDDs in my DrivePool. I don't use drive letters on any of the pool drives - there are just too many drives in the pool, and DrivePool does not need drive letters anyway. I preferred to clean up my Windows File Explorer listing, so the drive letters had to go. What I do is just name the drives in logical order as they sit on the shelf. Being not too creative, my pool drives are labeled DP01, DP02, DP03, etc. I also put a label on each drive case. I have the DrivePool GUI sort the pool list by name for easy reading. If there is any problem with a HDD, I immediately know which drive is affected.

    I have lots of unduplicated home media files in my DrivePool, but I also have a few folders that I want 2X duplication on. Not only do I find the duplication options better in DrivePool than Storage Spaces, but the net result is that I am saving lots of money by not duplicating my entire pool when it is not required for about 85% of my stored media files. Also, I have had a couple of HDD failures in the past month, and I have been able to recover almost all my data off the drives. In the meantime, DrivePool was still serving up all my other files like nothing happened. When my Storage Spaces crashed, it could take weeks to rebuild. I don't miss Storage Spaces....
    1 point
  26. Shane

    Permissions Confusion?

    The poolpart folders do not need to inherit their permissions from their respective volume roots (though they default to doing so on a newly created pool using previously unformatted drives). The SYSTEM account must have full control of the poolpart folder, subfolders and files. The Administrators account is recommended to have full control of same. For more details, I have just created this thread.
    1 point
  27. Bump for attention. I came looking for a Discord group to talk to people in, live. I've been really enjoying getting support from devs and their product support people directly through Discord. I just made progress a few minutes ago on my setup specifically through Discord. Also, it's nice for everyone to be able to bounce ideas off each other.
    1 point
  28. I can't replicate this. The buttons appear to be working fine for me. Were there any extra steps that you took to make this happen? Edit: I found it. Only happens if no drives are set as "SSD". I released a new version to fix this.
    1 point
  29. In an attempt to learn the plugin API, I created a plugin which replicates the functionality of many of the official plugins, but combined into a single plugin. The benefit of this is that you can use functionality from several plugins without them "fighting" each other. I'm releasing this open source for others to use or learn from here: https://github.com/cjmanca/AllInOne Or you can download a precompiled version at: https://github.com/cjmanca/AllInOne/releases Here's an example of the settings panel; notice the 1.5 TB drive is set to not contain duplicated or unduplicated files. And here's the balance result with those settings: again, note the 1.5 TB drive is scheduled to move everything off the disk due to the settings above.
    1 point
  30. Have now been using your plugin for about a fortnight with no issues, in conjunction with the StableBit Scanner and Prevent Drive Overfill balancers. Thank you!
    1 point
  31. methejuggler

    Plugin Source

    I'm interested in extending the behavior of the current balancing plugins, but don't want to re-write them from scratch. Is there any chance the current balancing plugins could be made open source to allow the community to contribute and extend them?
    1 point
  32. methejuggler

    Plugin Source

    Hybrid SSDs are nice for normal use, but in mixed-mode operating environments (i.e. a NAS with several users) they get overwhelmed pretty quickly and start thrashing. There's also the problem of things like DrivePool mixing all your content up across the different drives, so the drive replaces your cached documents with a movie you decide to watch, and then the documents are slow again, even though you weren't going to watch the movie more than once. If there were a way to specify caching only often written/edited files for increased speed, then maybe? But I think that would still run into issues with the balancer moving files between drives - the hybrid drive wouldn't know the difference between that and legitimately newly written files.
    1 point
  33. methejuggler

    Plugin Source

    Of course, I have Backblaze for cloud backup too, but re-downloading 10+ TB of data that could have been protected better locally isn't ideal. I'm glad to hear you've had good luck so far, but don't fool yourself - multiple drive failure happens. Keep in mind that drives fail more when they're used more. The most common situation of multiple drive failure is that one drive fails, and you need to restore those files from your redundancies; during the restore process, another drive fails due to the increased use. The most simultaneous failures I've heard of is 4 (not to me)... but that was in a larger raid. There's a reason the parity drive count increases every ~5 drives in parity based raids.

    So far, I've been quite lucky. I've never permanently lost any files due to a drive failure - but I don't want that to start due to lack of diligence on my part either, so if I can find ways to make my storage solution more reliable, I will. In fact, one of the main reasons I went with DrivePool is that it seems more fault tolerant. Duplicates are spread between multiple drives (rather than mirroring, which relies entirely on one single other drive), so if you do lose two drives, you may lose some files, but not a full drive's worth. (Plus the lack of striping similarly makes sure that you don't lose the whole array if you can't restore.) I realize I don't need to explain any of this to someone who uses it, but I'm just highlighting the reasons I found DP attractive in the first place: separating the duplicates among multiple drives to reduce the chance of losses on failures (a toy sketch of the drive-group idea follows below). If that can be improved to further reduce those chances...
    1 point
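    Not a DrivePool feature - just a toy Python sketch of the poster's "drive groups" idea: given each drive's purchase batch, choose homes for the two copies of a file so they never share a batch. The drive letters, batches and free-space figures are all made up:

        import itertools

        # Hypothetical drives: purchase batch ("group") and current free bytes.
        DRIVES = {
            "D:": ("batch-2019", 500 * 1024**3),
            "E:": ("batch-2019", 800 * 1024**3),
            "F:": ("batch-2021", 300 * 1024**3),
            "G:": ("batch-2021", 900 * 1024**3),
        }

        def pick_duplicate_targets(drives=DRIVES):
            # Prefer pairs whose groups differ; among those, take the pair
            # with the most combined free space.
            best = max(
                (group_a != group_b, free_a + free_b, a, b)
                for (a, (group_a, free_a)), (b, (group_b, free_b))
                in itertools.combinations(drives.items(), 2)
            )
            cross_group, _, a, b = best
            if not cross_group:
                raise RuntimeError("every candidate pair shares a purchase group")
            return a, b

        print(pick_duplicate_targets())  # -> ('E:', 'G:') with the sample numbers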
  34. Well, there may be some scenario where drive letters are required. But with 16 HDDs in my pool, I am more than happy to assign names to the HDDs and not have to bother with drive letters. In theory, DrivePool is able to pool many more than 26 HDDs, so you would reach a point where you would run out of drive letters anyway. If you need to assign a drive letter to a DrivePool HDD that has a name only, you can easily do that in Disk Management and it will not affect DrivePool at all. Again, DrivePool does not read the drive letter at all; it only uses the hidden PoolPart directory for identification. As mentioned, you may have to restart your computer if you decide to reassign drive letters to your DrivePool HDDs - not for DrivePool itself, but sometimes other Windows programs will not recognize the newly named drive without a restart. IIRC, Disk Management will warn you about that when you change/add/remove a drive letter.
    1 point
  35. Hi gtaus, I can see that your account is following this thread, so hopefully you'll get notified about this response. Maybe check that https://community.covecube.com/index.php?/notifications/options/ is set to your liking?
    1 point
  36. Hi! I have the same issue, but worse. After this error code I rebooted my PC, and it took about 5 minutes to boot up again. When it booted, I noticed that the drive - the one that produced the error code when I tried to make a new folder - was missing. It isn't shown in the file manager or in Disk Management either. Now I don't know what to do...
    1 point
  37. ... Would that 6TB USB HDD happen to be a Seagate Backup Plus Hub by any chance? Because I bought one a ways back (P/N 1XAAP2-501) and it behaves exactly the same way you've described. The drive inside might actually be okay, just with a lemon enclosure. Since yours is out of warranty and you're planning to ditch it, consider instead shucking it and using the HDD as an internal drive (run the Seagate long test on it again of course). BTW just FYI, Seagate has used SMR drives in its Backup Plus Hubs and I don't recommend using SMR drives in a pool, so also check the part number of the drive itself to see if it is one, with (if you don't want to open it up to check) a utility like Crystal Disk Info or similar.
    1 point
  38. For whatever it's worth, in the past I have encountered problems with "copier" software silently missing files. Admittedly I was dealing with very large file sets, very long paths, and unicode names, all back when a lot of software would have trouble with just one of those, let alone all three, and the less-than-reliable hardware I was (ahem) relying on at the time probably didn't help. But the important takeaway is that if you're working with "irreplaceable" data, you might want to stress-test your copier and verify that it is actually doing what it says it's doing (a simple verification pass is sketched below).
    1 point
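    One way to do that verification, as a hedged sketch: hash every file under the source and destination trees and report anything missing or mismatched (the paths at the bottom are placeholders):

        import hashlib
        from pathlib import Path

        def digest(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for block in iter(lambda: f.read(1 << 20), b""):
                    h.update(block)
            return h.hexdigest()

        def verify_copy(src: Path, dst: Path) -> None:
            src_files = {p.relative_to(src): p for p in src.rglob("*") if p.is_file()}
            dst_files = {p.relative_to(dst): p for p in dst.rglob("*") if p.is_file()}
            # Files present in the source but absent from the destination.
            for rel in sorted(src_files.keys() - dst_files.keys()):
                print(f"MISSING   {rel}")
            # Files present in both but with differing contents.
            for rel in sorted(src_files.keys() & dst_files.keys()):
                if digest(src_files[rel]) != digest(dst_files[rel]):
                    print(f"MISMATCH  {rel}")

        verify_copy(Path(r"D:\source"), Path(r"E:\destination"))  # placeholder paths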
  39. In the File Placement Rules section:

    - Your FIRST (uppermost) rule should be that all files matching "\movies\*" are to be placed on all drives INCLUDING that one (check all drives).
    - Your LAST (lowermost) rule should be that all files matching "*" are to be placed on all drives EXCLUDING that one (check all the others, leave it unticked, and in your particular case you also want to select "Never allow files to be placed on any other disks" for this rule).
    - Any other rules you might have should be placed ABOVE the LAST rule, and should not have that drive checked (and again, you may wish to select "Never allow...").

    This is because FPR checks the rules from uppermost to lowermost until it finds a matching rule for the file being balanced and then uses only that rule.

    NOTE that File Placement is only performed when balancing is triggered instead of in real time; you might wish to use the SSD Optimizer balancer plugin to mark at least one of the other disks as "SSD" so that new files are never even temporarily placed on the 6TB HDD, which is otherwise possible even if you have "Balance immediately" selected in the Settings tab.
    1 point
  40. denywinarto

    Bad drive causing BSOD

    1. Manually move all files to the outside of the poolpart.xxx folder on the bad disk so that DrivePool can't see them any more (sketched below).
    2. Remove the old disk from DrivePool (it will be instantly complete since the poolpart folder is empty).
    3. Insert the new disk (a new, different PoolPart.xxx is created).
    4. Manually copy the files from the old disk to the new poolpart folder on the new disk.

    I found that by doing this I didn't get a BSOD, compared to moving the disk using DP. The above is why I think the BSOD is related to DrivePool. What I'm worried about is that if one of my drives goes bad, it would just throw random BSODs and screw up my OS.
    1 point
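    A hedged sketch of step 1 above, assuming the bad disk is X: and that nothing at the drive root collides with the names being moved out; verify the PoolPart folder name on your own disk first:

        import shutil
        from pathlib import Path

        bad_disk = Path("X:\\")                       # hypothetical drive letter
        poolpart = next(bad_disk.glob("PoolPart.*"))  # the hidden pool folder

        # Move each top-level item out of the poolpart folder to the drive root,
        # so DrivePool sees the poolpart as empty when the disk is removed.
        for item in poolpart.iterdir():
            shutil.move(str(item), str(bad_disk / item.name))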
  41. I don't have any info on this other than to say that I am not experiencing these issues, and that I haven't experienced any blue screens related to those settings. That user isn't suggesting a rollback; they're suggesting that you edit the advanced settings to force your drive to convert to the newer hierarchical format. I should also note that I do not work for Covecube - so aside from a lot of technical experience with the product, I'm probably not the person to consult about new issues. I think we might need to wait on Christopher here.

    My understanding, though, was that those errors were fixed with release .1314. It presumes that existing data is fine as-is, and begins using a hierarchical structure for any NEW data that you add to the drive. That should solve the problem. So make sure that you're on .1314 or later for sure. Relevant changelog:

    .1314
    * [Issue #28415] Created a new chunk organization for Google Drive called Hierarchical Pure.
      - All new drives will be Hierarchical Pure.
      - Flat upgraded drives will be Hierarchical, which is now a hybrid Flat / Hierarchical mode.
      - Upgrading from Flat -> Hierarchical is very fast and involves no file moves.
    * Tweaked Web UI object synchronization throttling rules.

    .1312
    * Added the drive status bar to the Web UI.

    .1310
    * Tuned statistics reporting intervals to enable additional statistics in the StableBit Cloud.

    .1307
    * Added detailed logging to the Google Drive migration process that is enabled by default.
    * Redesigned the Google Drive migration process to be quicker in most cases:
      - For drives that have not run into the 500,000 files per folder limit, the upgrade will be nearly instantaneous.
      - Is able to resume from where the old migration left off.
    1 point
  42. Umfriend is correct. The service should be stopped to prevent any chance of balancing occurring during the migration when using that method. And that method is fine so long as your existing arrangement is compatible with DrivePool's pooling structure. E.g. if you have:

    drive D:\FolderA\FileB moved to D:\PoolPart.someguid\FolderA\FileB
    drive E:\FolderA\FileB moved to E:\PoolPart.someguid\FolderA\FileB
    drive F:\FolderA\FileC moved to F:\PoolPart.someguid\FolderA\FileC

    then your drivepool drive (in this example the P: drive) will show:

    P:\FolderA\FileB
    P:\FolderA\FileC

    as DrivePool will presume that FileB is the same file duplicated on two drives. As Umfriend has warned, when it next performs consistency checking DrivePool will create/remove copies as necessary to match your chosen settings (e.g. "I want all files in FolderA to exist on three drives"), and will warn if it finds a "duplicated" file that does not match its duplicate(s) on the other drives. (A sketch of these folder moves follows below.)

    As to Snapraid, I'd follow Umfriend's advice there too.
    1 point
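    A hedged sketch of those folder moves for one drive, assuming the hidden PoolPart.* folder already exists and the DrivePool service is stopped; the drive letter is a placeholder:

        import shutil
        from pathlib import Path

        drive = Path("D:\\")                        # placeholder; repeat per drive
        poolpart = next(drive.glob("PoolPart.*"))   # hidden pool folder on that drive
        skip = ("PoolPart.", "System Volume Information", "$RECYCLE.BIN")

        # Move existing content into the poolpart folder, keeping relative paths,
        # so the files show up in the pool (stop the DrivePool service first).
        for item in drive.iterdir():
            if not item.name.startswith(skip):
                shutil.move(str(item), str(poolpart / item.name))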
  43. Moving data to the Pool while retaining the data on the same drive is called seeding, and it is advised to stop the service first (https://wiki.covecube.com/StableBit_DrivePool_Q4142489). I think this is because otherwise DP might start balancing while you are in the process of moving drive-by-drive. I am not sure, but I would think you would first set settings, then do the seeding.

    (I am pretty sure that) DP does not "index" the files. Whenever you query a folder, DP will on the spot read the drives and indeed show the "sum". Duplicate filenames will be an issue, I think. I believe that when DP measures the Pool it will either delete one copy (if the name, size and timestamp are the same, I think) or otherwise inform you of some sort of file conflict. This is something you could actually test before you do the real move: stop the service, create a spreadsheet "Test.xlsx", save it directly to a PoolPart.*/some folder on one of the drives, edit the file, save it directly to a PoolPart.*/some folder on another drive, start the service and see what it does (a scripted version of this kind of pre-check is sketched below).

    DP does not go mad with same folder names, some empty, some containing data. In fact, as a result of balancing, it can cause this to occur itself.

    I have no clue about snapraid. I would speculate that you first create and populate the Pool, let DP measure and rebalance, and then implement snapraid. Not sure though. You may have to read up on this a bit and there is plenty to find, e.g. https://community.covecube.com/index.php?/topic/1579-best-practice-for-drivepool-and-snapraid/.
    1 point
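    A hedged helper in the same spirit as the test above: before seeding, scan the PoolPart folders and list any relative paths that exist on more than one drive with differing size or timestamp. Drive letters are placeholders:

        from collections import defaultdict
        from pathlib import Path

        DRIVES = ["D:\\", "E:\\", "F:\\"]  # placeholder pooled drives

        seen = defaultdict(list)
        for drive in DRIVES:
            for part in Path(drive).glob("PoolPart.*"):
                for f in part.rglob("*"):
                    if f.is_file():
                        stat = f.stat()
                        seen[f.relative_to(part)].append((drive, stat.st_size, int(stat.st_mtime)))

        # Report paths duplicated across drives whose size/timestamp differ.
        for rel, copies in seen.items():
            if len(copies) > 1 and len({(size, mtime) for _, size, mtime in copies}) > 1:
                print(f"CONFLICT: {rel} -> {copies}")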
  44. KingfisherUK

    My Rackmount Server

    So, nearly two and a half years down the line and a few small changes have been made:

    Main ESXi/Storage Server
    Case: LogicCase SC-4324S
    OS: VMWare ESXi 6.7
    CPU: Xeon E5-2650L v2 (deca-core)
    Mobo: Supermicro X9SRL-F
    RAM: 96GB (6 x 16GB) ECC RAM
    GFX: Onboard Matrox (+ Nvidia Quadro P400 passed through to Media Server VM for hardware encode/decode)
    LAN: Quad-port Gigabit PCIe NIC + dual on-board Gigabit NIC
    PSU: Corsair CX650
    OS Drive: 16GB USB stick
    Storage: IBM M5015 SAS RAID controller with 4 x Seagate Ironwolf 1TB RAID5 array for ESXi datastores (Bays 1-4); Dell H310 (IT mode, passed through to Windows VM) + Intel RES2SV240 expander for DrivePool drives (Bays 5-24); onboard SATA controller with 240GB SSD (passed through to Media Server VM)

    ESXi Server (test & tinker box)
    Model: HP DL380 G6
    OS: VMWare ESXi 6.5 (custom HP image)
    CPU: 2 x Xeon L5520 (quad core)
    RAM: 44GB ECC DDR3
    PSU: 2 x 750W redundant
    Drives: 3 x 72GB + 3 x 300GB SAS drives (2 RAID5 arrays)

    Network Switch: TP-Link SG-1016D 16-port Gigabit switch
    UPS: APC SmartUPS SMT1000RMI2U

    Storage pools on the Windows storage VM now total 34TB (a mixture of 1, 2 and 4TB drives) and there are still 6 bays free in the new 24 bay chassis for future expansion. There's always room for more tinkering and expansion, but no more servers unless I get a bigger rack!
    1 point
  45. Personally I use "Allway Sync".
    1 point
  46. I am having the same problem. I need to relocate the cloud drive to another computer and am getting access denied.
    1 point
  47. Glad you got it resolved. Pretty sure what I suggested would have worked, but dealing directly with Support (Christopher and Alex) is top-notch too.
    1 point
  48. Setting the SMART queries to be throttled like that (720 minutes) means that the temperature (a SMART value) is only going to be updated every 12 hours. If you uncheck the throttling option here, you'll see it updated much more rapidly. And setting it to something like "60 minutes" (1 hour) will still throttle the queries, but the value will be updated much more often than every 12 hours, so you may get a more accurate reading.
    1 point
  49. LOL. I just recently did this on my own pool, actually. There isn't a good way. You need to format the drives to do this (or use a 3rd party tool, and hope it doesn't corrupt your disk). The simplest way is to use the balancing system: use the "Disk Space Limiter" balancer to clear out one or more disks at a time. Once it's done that (it may take a few hours or longer), remove the disk from the pool and reformat it. Re-add it and repeat until you've cycled out ALL of the disks. Specifically, the reason that I mention the Disk Space Limiter balancer is that it runs in the background, doesn't set the pool to be read only, and is mostly automated.
    1 point
  50. I've got a few different setups (NASes, Storage Pools, a conventional JBOD server and a couple of DrivePools). My largest DrivePool is currently configured at 88.2TB usable, and I've got a couple of 6TB drives not in that figure used for parity (SnapRaid). This pool is strictly for media, mostly used by Plex. I've got 4 Windows 2012 R2 servers, and two are currently dedicated to multimedia. 2 are more generic servers and hold a lot of VHDX images using Windows de-duplication. Then I've got a few smaller NAS boxes used to hold typical family stuff.

    But getting back to DrivePool: I'll be increasing the storage space of the 88.2TB pool in about a month (guessing) when I add the next 8 to 15 bay enclosure for more movies and especially TV shows. Currently stored on that pool:

    160 - 3D Movies
    6,200 - Movies
    1,150 - Educational Videos
    18,700 - Music Videos
    850 - NFL Games
    10,100 - TV Episodes (132 Shows, 613 Seasons)
    Music: 4,400 Artists, 12,900 Albums, 105,000 Tracks

    Carlo
    1 point
