Leaderboard

Popular Content

Showing content with the highest reputation since 06/27/21 in Posts

  1. My backup pool has a 1TB SSD that I use to speed up my backups over a 10Gbs link. Works great, as I can quickly back up my deltas from the main pool --> backup pool, and in slow time it pushes the files to the large set of spinning rust (100TB). However, when I try to copy a file that is larger than the 1TB SSD I get a message saying there is not enough available space. Ideally, the SSD Optimiser should just push the file directly to a HDD in such cases (feature request?), but for now what would be the best way of copying this file into the pool?
     - Manually copy directly to one of the HDDs behind DrivePool's back, then rescan?
     - Turn off the SSD Optimiser, then copy the file? or,
     - Some other method?
     Thanks Nathan
    2 points
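The bypass Nathan is asking for reduces to a simple placement rule. A minimal sketch (sizes in GB, all names hypothetical) of the requested behavior, not how the SSD Optimiser currently works:

```python
def choose_target(file_size, ssd_free, hdd_free):
    """Pick a landing disk for an incoming file (sizes in GB).

    Illustrative sketch of the requested bypass: use the SSD cache
    when the file fits, otherwise write straight to spinning rust.
    Not actual DrivePool/SSD Optimiser logic."""
    if file_size <= ssd_free:
        return "ssd"
    if file_size <= hdd_free:
        return "hdd"
    raise OSError("not enough available space on any pool disk")

print(choose_target(500, 1000, 100_000))   # a 500 GB delta lands on the SSD
print(choose_target(1500, 1000, 100_000))  # a 1.5 TB file would bypass to a HDD
```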
  2. Not currently, but I definitely do keep on bringing it up.
    2 points
  3. I mean, Alex is alive and well. But yeah, there are some issues that don't directly involve Covecube. If Alex wants to answer what exactly is going on, he will. But stuff is still happening.
    2 points
  4. https://stablebit.com/DrivePool/Download Once you click the checkbox, there is a "what's new?" link that will take you to here: https://stablebit.com/DrivePool/ChangeLog?Platform=win Or you can see the beta changelog here: https://dl.covecube.com/DrivePoolWindows/beta/download/changes.txt
    1 point
  5. Ac3M

    Replacing drive with same data ?

    I think DrivePool will ignore folders that it hasn't created itself. Never tried this actually. I'm not using any backup solution myself, just JBOD bundled together in DrivePool as a single drive, and I've had my fair share of disks that have failed (FUBAR cables, for the most part), and also disks that were full (like 0 bytes left), which has led to having to move data. I found that it worked to move data outside of the PoolPart folder made by DrivePool, and then to/from source/destination. But remember to pay heed to never move data from one PoolPart folder to another, since DrivePool tends to pull a David Copperfield on you and make stuff disappear, like magic. I have yet to try out recovery software to see if removed data can be restored, come to think of it. Not that it has anything to do with this, but it should work, I think. I find it very simple to work with the folder structure (PoolPart), since anything within it you can just move up one level and it's taken out of the pool; then you can merge that data with another folder on another disk or whatever, then you can just put it back and et voilà, it's back in the pool again.
    1 point
  6. I'm on version 2.2.2.934 and have "Check for New Versions" selected. Obviously, there have been several new versions, but I don't receive any notifications to update. I've tried "Force update check now" which says if a new version is found, I'll get a notification to update. I never get a notification. StableBit DrivePool Notifications is running and enabled at Startup (per Task Manager). I don't have "Focus Time" enabled or anything I can see that would suppress notifications. Yes, I can manually update the version, but I was wondering if anyone had any ideas on how I could get it back to checking for new versions automatically.
    1 point
  7. minden02

    Drive(s) dropping from pool

    Ah, that's a tough one. If the disks' SMART readings are OK, then my first guess would be the JBOD chassis power supply or other hardware failing. Disk failures are usually slower moving, in that you start to see errors and corruption and will get some warning from the pool. If a disk had completely failed suddenly, a restart wouldn't bring it back.
    1 point
  8. klepp0906

    Any word on pricing?

    mkay, i was suckered in. mostly to support covecube (totally need to change company name to stablebit, confusing) but id be lying if i didnt say it was also in part to keep remote notifications on my phone etc, just in case. now since i feel like im on the board, where's my dark mode. speak up fellow shareholders!~
    1 point
  9. Any update on WebDAV? Is there a specific issue preventing it being added? It's been a while...
    1 point
  10. I'm running into the same issue. I understand that balancing might not be designed to be super fast, but there's a difference between "not fast" (say, half the nominal speed of the drive) and being so painfully slow that it's literally ~75x slower than the drives' nominal read/write speed. In my case, I have one full 12 TB HDD in my pool and recently added another 12 TB HDD. When I test transfers from one to the other without DrivePool (just a plain old Windows Explorer copy), I get speeds of 100-200 MB/s. However, when DrivePool tries to do the balancing, Windows Task Manager shows write speeds of 2 MB/s on the new drive... It seems to me that this is probably a bug that should be looked into. Because of this I am forced to disable balancing, otherwise it's going to take 4 months... Please consider looking into this.
    1 point
  11. phhowe17

    Any word on pricing?

    Looks like there will be a Cloud+ subscription at $1/device/month. https://blog.covecube.com/2022/03/stablebit-cloud-plus/ As a 3xDrivePool only customer, I don't see a use case.
    1 point
  12. So, just did some more testing. StableBit installs 5 services. I disabled them all one by one and watched; the final one to disable was DrivePoolService. Voila, the perpetual pinging disappeared. Here the entire time I figured it was Scanner; it's DrivePool itself. So why is that hammering my drives in perpetuity, and how do I fix it? I have auto balancing off, and I have every single plugin disabled except for Ordered File Placement. Is it some kind of unintended bug, or? Summer is coming, and a 10x room with 24 of these 7200rpm badboys spinning next to me is no bueno.
    1 point
  13. To clarify, StableBit DrivePool doesn't use the drive letters or mount points. It uses the volume ID for the disks. So you can change how the drive is mounted without any issues. Personally, I prefer mounting the drives to folder paths, as that hides them, but still allows them to be accessible.
    1 point
  14. As long as Drivepool can see the drives it'll know exactly which pool they belong to. I move drives around regularly between my server and an external diskshelf and they always reconnect to the correct pools.
    1 point
  15. Yes, you can access it and force attach it in case the other computer will not access it anymore. I think the license doesn't matter in this case. Your license is just about how many computers you can run CloudDrive on; it's independent of the number of providers, or cloud drives at a certain provider, or logins or whatever. If you connect to any provider that has cloud drives on it (from any computer, from any license), you will see exactly what StableBit cloud drives are at that location. And then it's up to you which one to attach to, detach from, or force attach, in case of disaster. Read this section of the manual and especially take a look at the screenshots; I think they will give you the impression you need: https://stablebit.com/Support/CloudDrive/Manual?Section=Reattaching your Drive
    1 point
  16. The 100GB default has been there for a long while, TBH. And that's been the default for it since the start. And by a long while, I mean since the DrivePool v1 days (so ~2011). Eg, back when drives rated in TBs were at the high end, for the most part. You can disable the option or set it to something more reasonable for your setup. But it sounds like unchecking it may be what you want here.
    1 point
  17. For balancing and duplication operations, the files are copied to a temp folder, and then renamed when the operation completes. This ensures that even if an interruption occurs, the files are intact, whole, and not corrupted. And yeah, that's probably a good way to set things up.
    1 point
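The temp-copy-then-rename pattern described above can be sketched in a few lines. This is an illustration of the pattern (file names here are arbitrary), not DrivePool's own code:

```python
import os
import tempfile

def safe_copy(data: bytes, dest: str):
    """Write to a temp file, then rename over the destination.

    The destination name only ever appears once the data is complete,
    so an interruption leaves either the old state or a whole new
    file, never a half-written one."""
    d = os.path.dirname(dest) or "."
    fd, tmp = tempfile.mkstemp(dir=d)   # temp file on the same volume
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    os.replace(tmp, dest)               # atomic rename on the same volume

safe_copy(b"hello", "out.bin")
print(open("out.bin", "rb").read())  # b'hello'
```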
  18. Yep. Also, with automatic balancing enabled, your pool will become unbalanced from time to time. For example, if you've set up the "Balance Immediately -> No more often than ..." rule, or if you've set up a trigger like "If the balance ratio falls below...". As soon as there is 1 byte to move, the arrow should appear. But automatic balancing will not start until all the requirements you've set up for automatic balancing are fulfilled. Then you can force an early start of balancing with the arrow.
    1 point
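The gating described above can be sketched as follows. Every function name and default here is an assumption for illustration; DrivePool's real conditions are whatever you configure in the balancing settings:

```python
import datetime

def should_balance(ratio, bytes_to_move, last_run, now,
                   ratio_threshold=0.9, min_bytes=1,
                   min_interval=datetime.timedelta(hours=12)):
    """Sketch of the automatic-balancing gates: the 'balancing needed'
    arrow appears as soon as min_bytes is reached, but a run only
    starts once *every* configured condition holds."""
    if bytes_to_move < min_bytes:
        return False   # nothing to move yet
    if ratio >= ratio_threshold:
        return False   # pool still balanced enough
    if now - last_run < min_interval:
        return False   # the "no more often than ..." rule
    return True

t0 = datetime.datetime(2021, 7, 1)
print(should_balance(0.5, 10**9, t0, t0 + datetime.timedelta(hours=13)))  # True
print(should_balance(0.5, 10**9, t0, t0 + datetime.timedelta(hours=1)))   # False
```

Clicking the arrow is effectively an override of the time and ratio gates, which is why it can start a run that the automatic conditions would still be holding back.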
  19. Nope. The surface scan is a sector-by-sector scan. The file system scan is an API call to CHKDSK. So unless you're using the file recovery option, it should not be reading files.
    1 point
  20. It follows the normal balancing rules. But you can use the Drive Usage Limiter if you want to clear out a specific disk or two.
    1 point
  21. If you are using symlinks, information about these are stored in the metadata folder, and that is set to x3 duplication by default.
    1 point
  22. Thanks for letting us know. I can confirm this behavior, as well, and have flagged it. https://stablebit.com/Admin/IssueAnalysis/28645
    1 point
  23. You are very welcome! Correct. At least for that system. Reconnecting it should automatically re-activate, though.
    1 point
  24. klepp0906

    Drive Sleep

    Did you ever figure this out? I mean, you figured out it was DrivePool, but beyond that, a fix? I've been dealing with this ever since I installed DrivePool. I installed it just after installing my HBA and new disk chassis, so I assumed it was the old LSI card not being able to pass sleep commands. Everything I found said it should, so I decided to start shutting things off. Turned off the 3 DrivePool services (well, 6, but for the 3 softwares, as I have all 3 installed). Drives are all sleeping like a baby now. Obviously I need to be able to enable the DrivePool service before I set up my pools, but I'd love to figure this out first. I did try Procmon myself and same thing: the drives didn't pick up activity there, but it's repeatable via testing. Been at it almost all day.
    1 point
  25. Ah, okay, I see what you mean about the UI saving, now. I'll flag that as a bug, since I can definitely reproduce. https://stablebit.com/Admin/IssueAnalysis/28642
    1 point
  26. Update: I've restarted the host machine for some unrelated maintenance, and DrivePool is still reporting 8.57TB for this 8TB drive. Any advice/suggestions, or reassurance that this won't turn into a problem as more data gets written, would be great. EDIT: Remeasuring the pool seems to have resolved it, both 8TB drives are listed as being exactly 7.28TB total.
    1 point
  27. Hey Ben, good you found it! Yeah, that balancer plugin was exactly what I was referring to! @Christopher (Drashna) , the balancing plugins manual page seems to be outdated here. It says "File Placement Limiter" while this balancer is now actually renamed to "Drive Usage Limiter".
    1 point
  28. Jonibhoni

    TrueCrypt and DrivePool

    Yes, I guess it should work. Keep in mind that DrivePool cannot split single files into chunks, so the physical drives that make up the pool must be large enough to host those 4 TB files (DrivePool works on the file level, not the sector level!). But if you have, for example, two 10 TB drives, I guess it should work well. The performance characteristics will differ from RAID or dynamic discs, Storage Spaces and whatever, but if in doubt, maybe just give it a try. DrivePool doesn't need freshly formatted drives; any existing partitions will do fine, and creating a pool on them will not harm the existing data. For completeness, I can maybe add that there is even another possibility to have encrypted data stored on a DrivePool for redundancy: instead of VeraCrypt, you *could* also use StableBit CloudDrive to create a "Local Disk" cloud drive inside your pool, which will act as if your local DrivePool drive was a cloud provider. As CloudDrive (as opposed to DrivePool) creates an encrypted virtual drive that is based on chunk files, you will have both encryption (CloudDrive) and redundancy (underlying DrivePool) in a nicely clean virtual NTFS drive with just software from the StableBit bundle. But honestly, I'm using that solution because I thought it was clever (it is more friendly to file-based backup than having 4 TB files), but, although it works conceptually well, perceived performance is unfortunately an order of magnitude slower than just using a VeraCrypt container. I'm hoping for some optimization by the devs there... but I guess, as CloudDrive was made for, well, the cloud, it will never be as fast.
    1 point
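Because DrivePool places whole files and never splits one file across disks, a file only fits the pool if some single member disk can hold it, regardless of the pool's total free space. A tiny sketch of that constraint (hypothetical helper, sizes in TB):

```python
def can_store(file_size_tb, disk_free_tb):
    """True if at least one pool member can hold the whole file.

    Sketch of DrivePool's file-level placement constraint: total free
    space across the pool doesn't matter if no single disk fits the file."""
    return any(free >= file_size_tb for free in disk_free_tb)

print(can_store(4, [10, 10]))    # True: a 4 TB file fits on one 10 TB drive
print(can_store(4, [3, 3, 3]))   # False: 9 TB free in total, but no single disk fits it
```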
  29. Support suggested that I reset all settings. I did, and it did not fix everything, but at least it was a good start. Now the original pool reappeared, but only one of the 3 original drives was randomly listed in Pooled, and the other two were listed in Non-pooled. Trying to add these non-pooled drives would create some kind of "cannot add drives again" error. In the end, I had to remove every pooled drive from the original pool by restarting the service, seeing a new random drive listed in Pooled, removing it, and restarting the service again until all drives were removed from the original pool. Finally, I created a new pool and moved all files from the original pool folders into the new pool folders. What a pain, but it was not hard; at least I got all my files back without losing anything, which was shocking. Please give me a like if this post has helped or will help anyone who runs into similar issues.
    1 point
  30. Yep - Drivepool really doesn't care how the drive is connected - just so it is connected. I've moved drives from internal bays on my R710 to an external disk shelf and to usb enclosures (and back again) and they're always found.
    1 point
  31. I think you're looking for this: https://wiki.covecube.com/StableBit_DrivePool_Q4142489
    1 point
  32. i would also like to see dark mode added to all stablebit products
    1 point
  33. Recently I had an issue that caused me to move my array over to another disk. I think the issue caused a problem, as 5 disks show up with damage warnings. In each case the damaged block is shown to be the very last one on the disk. I assume this isn't some weird visual artifact where blocks are re-ordered to be grouped? Anyway, clicking on the message about the bad disk of course gives a menu option to do a file scan. I decided to go ahead and do this, and of the 5 disks, I think only one actually appears to do anything: you can see it start trying to read all the files on the disk. On the others, the option just seems to return, pretty much instantly, which makes no sense to me. They don't error or fail, or otherwise indicate a problem; they all just act like maybe there are no files on those disks to scan (there are). The disks vary in size: 2x4TB, and the others are 10TB. One of the 4TB disks scanned fine and indicated no problems, but it's not clear why it would work and the others wouldn't. Ideas? Thanks!
    1 point
  34. AFAIK, copying, even cloning, does not work. The easiest way is:
     1. Install/connect the new HDD.
     2. Add the HDD to the pool.
     3. Now you can either click on Remove for the 6TB HDD, or use Manage Pool -> Balancing -> Balancers -> Drive Usage Limiter -> uncheck Duplicated and Unduplicated -> Save -> Remeasure and Rebalance.
     4. Wait. Until. It. Is. Done. (Though you can reboot normally if you want/need to and it'll continue after boot. Not entirely sure if you go the Remove route.)
    1 point
  35. I've been pretty confused about how Scanner actually works. Here are some of my observations:
     - Scanner will scan a random drive without regard to when others were last scanned.
     - Scanner doesn't pay attention to the setting that says to scan every 30 days (i.e., don't scan for another 30 days).
     - Scanner will stop mid-scan and move to another drive.
     - Scanner will scan a drive, but not update Last Scanned, and will scan it again later.
     - Scanner will scan a drive even if the settings to NOT Surface or File System scan are checked.
     Some things I learned:
     - Scanner will scan one drive per "interface". So if I have 2 HBAs, it would scan one drive per HBA. This accounts for some of the seemingly random drive selections.
     - There are two scans (Surface and File System), and they are not done at the same time. Scanner does not tell you which scan it is doing, and I haven't found a way to know which scan was started/completed per drive.
     I have a drive that has been formatting for 14 hours, and another that I just fixed the file system on. I want to rescan the second one so it can hopefully confirm it's fixed and wipe out the File System Damaged error. I went through all the drives and checked the boxes to NOT surface or file system scan them. When I hit Start Check it wants to scan the drive that's in the middle of formatting, and another drive. So the ONLY drive that it should be scanning, it doesn't scan. First off, that just makes no sense to me. Why is it scanning drives with those options checked? Why is it scanning a drive that's in the middle of formatting? How can I manually start a scan on this one drive by itself?
    1 point
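The "one drive per interface" behavior observed above can be sketched as a simple grouping rule; the drive and HBA names are made up for illustration, and this is a model of the observed behavior, not Scanner's actual scheduler:

```python
def pick_scan_targets(drives):
    """Given (interface, drive) pairs, select at most one drive per
    controller to scan concurrently -- which is why a system with two
    HBAs sees two seemingly random drives scanning at once."""
    targets = {}
    for interface, name in drives:
        targets.setdefault(interface, name)  # keep the first drive per HBA
    return sorted(targets.values())

drives = [("HBA1", "disk_a"), ("HBA1", "disk_b"),
          ("HBA2", "disk_c"), ("HBA2", "disk_d")]
print(pick_scan_targets(drives))  # ['disk_a', 'disk_c']
```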
  36. I'm having significant problems with the DrivePool hanging for minutes on basic things like showing the contents of a folder in Explorer. So far I cannot figure out how to locate the source. For instance today I opened a command prompt and simply tried to switch to the DrivePool's drive (i.e. I typed X: into the command prompt.) That caused the command prompt to freeze for several minutes. While it was frozen in that command prompt I opened up a second command prompt and switched to each of the underlying drives, and even listed out a hundred thousand or so files and folders on each of the source drives. So, it feels like the DrivePool drive is hanging for something independent of the source drives. On the other hand I don't know what that something else could be as in my quest to troubleshoot this I have setup a brand new computer with a clean fresh install of everything including DrivePool. The only thing I transferred was the source drives. Yet the problem followed the drives to the new computer. StableBit Scanner reports that all the underlying drives are in great shape, and so far I can't find any logs that report problems. How can I root out the source of this problem? Oh and I did also try the StableBit Troubleshooter. However, it hung infinitely at collecting system information (Infinity defined as well over 80 hours of trying). Also the hanging seems to be intermittent. I might go a day or so without or it might happen every few minutes. I have utterly failed to identify triggers so far.
    1 point
  37. You posted on another thread too. But you may need to enable the "Unsafe" "Direct IO" option to get the SMART data off of these drives, due to the controller. https://wiki.covecube.com/StableBit_Scanner_Advanced_Settings#Advanced_Settings If that doesn't work, enable the "NoWmi" option for "Smart" as well.
    1 point
  38. We have a number of people using Windows 11 already, but we won't say that it's officially supported until Windows 11 is no longer a beta. This is because the OS is in too much flux until it's RTM'ed, and takes a lot of work to officially support it before then.
    1 point
  39. I don't really know. I speculate that MB-SATA may not be designed to be optimal. For instance, I do not know whether they can actually deliver 600MBps on all ports simultaneously. I guess it can be found through Yahoo, but I'm not sure. As for PCIe SATA cards, I have read that they can deliver lackluster performance, as in shared bandwidth dependent on the actual number of PCIe lanes they use, but never that they'd interfere with MB-SATA. Again, I don't actually know, and I am sure that there may be different qualities out there. But really, I think your PCIe SATA card will be fine and give no issues. It should work for your transition. I'd leave the card in the PC once done so that, AIW you need it, you have two ports readily available. The SAS HBA route is one I would recommend if you expect storage to grow as measured by the number of drives. For me, it works like a charm, and as these are, AFAIK, all made with enterprise servers in mind, I am pretty comfortable about performance, compatibility and endurance.
    1 point
  40. Yeah, it's ... manufacturer fun. SMART really isn't a standard; it's more of a guideline that a lot of manufacturers take a LOT of liberty with. NVMe health is a *lot* better in this regard (it's an actual standard).
    1 point
  41. I don't think the SSD Optimizer works that way. When it hits the trigger point and flushes data, it appears to flush all the data in the cache. There is nothing left in the cache after the flush. I think there are some SSDs with learning software that will keep your most used programs/data on the SSD. Those are typically system SSDs. The DrivePool SSD Optimizer is not designed for keeping your most used files on the SSD. However, DrivePool can use your SSD as both a pool drive and a system drive. You could simply install the programs and/or put those files you use most often directly on the SSD and not as part of DrivePool. For example, my DrivePool SSD is drive Z:. I can install, store, or locate any files I want on my Z: drive as normal, or I can also use that SSD in DrivePool which adds the hidden DrivePool PoolPart directory on my Z: drive. One of the big advantages to DrivePool over other systems such as RAID or Windows Storage Spaces is that you are able to add and use an existing drive to DrivePool and, at the same time, also use that drive normally on your system. My older RAID and Storage Spaces took full control over my drives and those pool drives could not be used for anything else.
    1 point
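The flush-all behavior described above, as opposed to a frequency-aware cache, can be sketched with a hypothetical class. This is an illustration of the described behavior, not the SSD Optimizer's actual implementation:

```python
class FlushAllCache:
    """Once the trigger point is hit, *everything* in the cache moves
    to the archive disks; nothing is retained by frequency of use."""

    def __init__(self, trigger_bytes):
        self.trigger_bytes = trigger_bytes
        self.cached = {}    # filename -> size, still on the SSD
        self.archived = {}  # filename -> size, flushed to spinning disks

    def write(self, name, size):
        self.cached[name] = size
        if sum(self.cached.values()) >= self.trigger_bytes:
            self.archived.update(self.cached)
            self.cached.clear()  # the flush leaves the cache empty

cache = FlushAllCache(trigger_bytes=100)
cache.write("a.bin", 40)
cache.write("b.bin", 70)  # 110 >= 100, so everything flushes at once
print(sorted(cache.archived), cache.cached)  # ['a.bin', 'b.bin'] {}
```

A "keep hot files on the SSD" design would instead evict only the least-used entries, which is exactly what the post says the SSD Optimizer does not do.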
  42. Well, it may be worth testing on the latest beta versions, as there are some changes that may affect this, due to driver changes. If you're already on the beta, or the beta doesn't help, please open a ticket at https://stablebit.com/Contact so we can see if we can help fix this issue.
    1 point
  43. Diodato

    Drive Sleep

    Hello again Christopher,

    On the machine in question only DrivePool and Scanner are installed. I have CloudDrive, but I use it on a different PC on a different LAN. I have verified thrice just to be sure, and here is my setup and my observations:

    - PC setup: Win10 1909 build 18363.815, DrivePool version 2.2.3.1019, Scanner version 2.5.6.3336
    - Three USB disks attached (drive letters D, E and F) (TerraMaster D5-300C enclosure)
    - Scanner has SMART throttling set to 60 minutes, "only query during the work window or if scanning" set, and "do not query if the disk has spun down" also set
    - DrivePool is not configured for any pool disk, as if just installed. DrivePool has "BitLocker_PoolPartUnlockDetect" set to "false" (both Default and Override)
    - The DrivePool service is stopped
    - The active power plan of Windows has disk spindown set to 3 minutes. All three disks are not spinning
    - Procmon64 shows absolutely no processes accessing the D, E or F disks
    - Resmon (disk tab) shows absolutely no activity for disks D, E and F

    Now, as soon as I enable the DrivePool service, the disks wake up and never again go to sleep. You can see in Resmon that every second there is access to all three disks. In the image in my previous post, accesses to disk D are obvious, but if you look closely, every second there are very subtle marks also for disks E and F (very small peaks). Procmon continues to show that nothing is accessing disks D, E and F. If I stop the DrivePool service, I stop seeing disk access for these three disks in Resmon, and after three minutes (whatever is set in the power plan) they go to sleep. Once again, if I restart the DrivePool service, the disks wake up and I start seeing the same every-second disk access activity in Resmon. Again, the disks will never spin down. Not until I stop the DrivePool service.

    Now, I do not want to say that the DrivePool service is the culprit, but it took me a very long time to narrow down what is causing my external USB drives not to sleep. Because Procmon was useless (no process was accessing the disks), I started stopping different services and observed the above-described behavior with the DrivePool service. As the authors of DrivePool, I am kindly asking for suggestions on what to do to have my external disks spin down, or how to continue investigating. It might be a Microsoft Windows or USB driver problem or something else, but I would like some help or directions on the next steps. Thanks in advance. Cheers.
    1 point
  44. OK. So, there is a lot here, so let's unpack this one step at a time. I'm reading some fundamental confusion here, so I want to make sure to clear it up before you take any additional steps forward. Starting here, which is very important: it's critical that you understand the distinction in methodology between something like Netdrive and CloudDrive as a solution.

    Netdrive and rClone and their cousins are file-based solutions that effectively operate as frontends for Google's Drive API. They upload local files to Drive as files on Drive, and those files are then accessible from your Drive, whether online via the web interface or via the tools themselves. That means that if you use Netdrive to upload a 100MB File1.bin, you'll have a 100MB file called File1.bin on your Google Drive that is identical to the one you uploaded. Some solutions, like rClone, may upload the file with an obfuscated file name like Xf7f3g.bin, and even apply encryption to the file as it's being uploaded and decryption when it is retrieved. But they are still uploading the entire file, as a file, using Google's API.

    If you understand all of that, then understand that CloudDrive does not operate the same way. CloudDrive is not a frontend for Google's API. CloudDrive creates a drive image, breaks that up into hundreds, thousands, or even millions of chunks, and then uses Google's infrastructure and API to upload those chunks to your cloud storage. This means that if you use CloudDrive to store a 100MB file called File1.bin, you'll actually have some number of chunks (depending on your configured chunk size) called something like XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-1, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-2, XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX-chunk-3, etc, as well as a bunch of metadata that CloudDrive uses to access and modify the data on your drive.

    Note that these "files" do not correspond to the file size, type, or name that you uploaded, and cannot be accessed outside of CloudDrive in any meaningful way. CloudDrive actually stores your content as blocks, just like a physical hard drive, and then stores chunks of those blocks on your cloud storage. Though it can accomplish similar ends to Netdrive, rClone, or any related piece of software, its actual method of doing so is very different in ways that are important for you to understand.

    So what, exactly, does this mean for you? It means, for starters, that you cannot simply use CloudDrive to access information that is already located in your cloud storage. CloudDrive only accesses information that has been converted to the format that it uses to store data, and CloudDrive's format cannot be accessed by other applications (or Google themselves). Any data that you'd like to migrate from your existing cloud storage to a CloudDrive volume must be downloaded and moved to the CloudDrive volume, just as it would need to be if you were to migrate the data to a new physical drive on your machine, and for the same reasons. It may be helpful to think of CloudDrive as a virtual machine drive image. It's the same general concept: just as you would have to copy data within the virtual machine in order to move data to the VM image, you'll have to copy data within CloudDrive to move it to your CloudDrive volume.

    There are both benefits and drawbacks to using this approach:

    Benefits

    - CloudDrive is, in my experience, faster than rClone and its cousins, particularly around jumping to granular data locations, as you would for, say, jumping to a specific location in a media file.
    - CloudDrive stores an actual file system in the cloud, and that file system can be repaired and maintained just like one located on a physical drive. Tools like chkdsk and Windows' own in-built indexing systems function on a CloudDrive volume just as they will on your local drive volumes. In your case this means that Plex's library scans will take seconds, and will not lock you out via Google's API limitations.
    - CloudDrive's block-based storage means that it can modify portions of files in place, without downloading the entire file and reuploading it.
    - CloudDrive's cache is vastly more intelligent than those implemented by file-based solutions, and is capable of, for example, storing the most frequently accessed chunks of data, such as those containing the metadata information in media files, rather than whole media files. This, like the above, also translates to faster access times and searches.
    - CloudDrive's block-based solution allows for a level of encryption and data security that other solutions simply cannot match. Data is completely AES encrypted before it is ever even written to the cache, and not even Covecube themselves can access the data without your key. Neither your cloud provider, nor unauthorized users and administrators for your organization, can access your data without consent.

    Drawbacks (read carefully)

    - CloudDrive's use of an actual file system also introduces vulnerabilities that file-based solutions do not have. If the file system data itself becomes corrupted on your storage, it can affect your ability to access the entire drive, in the same way that a corrupted file system can cause data loss on a physical drive as well. The most common sorts of corruption can be repaired with tools like chkdsk, but there have been incidents caused by Google's infrastructure that have caused massive data loss for CloudDrive users in the past, and there may be more in the future, though CloudDrive has implemented redundancies and checks to prevent them going forward. Note that tools like testdisk and recuva can be used on a CloudDrive volume just as they can on a physical volume in order to recover corrupt data, but this process is very tedious and generally only worth using for genuinely critical and irreplaceable data. I don't personally consider media files to be critical or irreplaceable, but each user must consider their own risk tolerance.
    - A CloudDrive volume is not accessible without CloudDrive. Your data will be locked into this ecosystem if you convert to CloudDrive as a solution.
    - Your data will also only be accessible from one machine at a time. CloudDrive's caching system means that corruption can occur if multiple machines could access your data at once, and, as such, it will not permit the volume to be mounted by multiple instances simultaneously.
    - And, as mentioned, all data must be uploaded within the CloudDrive infrastructure to be used with CloudDrive. Your existing data will not work.

    So, having said all of that, before I move on to helping you with your other questions, let me know that you're still interested in moving forward with this process. I can help you with the other questions, but I'm not sure that you were on the right page with the project you were signing up for here. rClone and NetDrive both also make fine solutions for media storage, but they're actually very different beasts than CloudDrive, and it's really important to understand the distinction. Many people are not interested in the additional limitations.
    1 point
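The chunking model described above can be sketched in a few lines. Chunk size and naming here are illustrative stand-ins; CloudDrive's real on-provider format differs in detail:

```python
def split_into_chunks(data: bytes, chunk_size: int, drive_id: str):
    """Cut a drive image into fixed-size pieces named '<id>-chunk-N'.

    Sketch of the block/chunk layout: the provider only ever sees
    opaque chunks, never the files stored inside the image."""
    return {
        f"{drive_id}-chunk-{i + 1}": data[off:off + chunk_size]
        for i, off in enumerate(range(0, len(data), chunk_size))
    }

image = b"0123456789" * 3  # a 30-byte stand-in for a drive image
chunks = split_into_chunks(image, 8, "XXXXXXXX")
print(sorted(chunks))  # four chunk names, none resembling the original file
reassembled = b"".join(chunks[f"XXXXXXXX-chunk-{i}"] for i in (1, 2, 3, 4))
print(reassembled == image)  # True: only the client can rebuild the image
```

This is also why existing files on the provider can't be "adopted" into a CloudDrive: they were never written as chunks of a drive image in the first place.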
  45. I have mine set to not scan removable drives - and do not have any issues with adding or removing usb drives/sticks - this is on win 10 FCU
    1 point
  46. jtc

    Unremovable nested folders

    Solved (knock on wood). Thanks to everyone for the prompt suggestions. For future reference, the list below contains commonly advised methods to delete stuck files when a space has been appended to the file names:

    - Use quotes that enclose the trailing space: rd "my folder "
    - Same as above, but add a trailing backslash: rd "my folder \"
    - Try using <shift>+delete to bypass the recycle bin, as suggested by otispresley
    - Specify the UNC path: rd "\\.\C:\temp\my folder "
    - Try 8.3 names (if enabled): rd myfold~1
    - Finally, boot a Linux Live CD: rmdir "/media/blahblah/temp/my folder "

    What worked for me: I first uninstalled Dropbox, just in case there was some interaction, as Christopher suggested. (I have a vague recollection that the troublesome folders were, in fact, related to a Dropbox incident.) Next, I launched an administrative command prompt and traversed the PoolPart folder on each of the two drives, one drive containing the original and the other the duplicated copy of the intransigent folders. Finally, after trying the UNC path solution and the Shift-Delete method with no success, I tried the second of the ideas listed above, adding a trailing backslash to the "shellext " folder name that was causing all the trouble ( RD "shellext \" ). It worked. Once done, the remainder of the folder structure was removed with RD /S FIXME, "fixme" being the first-level folder. The errant folders have also disappeared from the DrivePool virtual disk, which did not happen when I had tried a Linux Live CD on my previous attempt. Anyway, thanks again.
    1 point
  47. Yes, you can do that, too. However, I generally prefer and recommend mounting to a folder, for ease of access. It's much easier to run "chkdsk c:\drives\pool1\disk5" than "chkdsk \\?\Volume{GUID}"... and easier to identify. Yes, the actual pool is handled by a kernel-mode driver, meaning that the activity is passed on directly to the disks, basically. Meaning you don't see it listed in Task Manager like a normal program.
    1 point
  48. Or not mounting them at all appears to work fine as well. Swapped to Win 10 Pro 64-bit now; as a clean install, the mount points were lost, but DrivePool picked the drives up on reinstall without re-mounting them. Just have the OS drive and a test pool; working well so far. Interesting watching the pool fill up for the first time, and I turned on duplication halfway through a backup job, which was handled very well by the app. It's also interesting to see that in Task Manager the "pool" has no disk activity reported while the underlying disks are running around moving data. I assume this is normal?
    1 point
  49. Thanks. Have mounted them and removed the drive letters, and DrivePool picked up the changes.
    1 point
  50. Hi, the individual drive letters are not needed. Most people give the drives a name like Bay 1, Bay 2, etc., purely to help identify the drives, then remove the letters under Disk Management.
    1 point