Jonibhoni

Jonibhoni's Achievements

  1. I'm using Everything with DrivePool, actually. Basically you have three choices:

     1) You can go the slow way and add your pool's drive letter as a generic "folder" to the database. This works as expected, but indexing is very slow.

     2) You can directly index ALL of the base drives of your pool. Indexing is very fast because the NTFS journal can be used, but it leads to duplicate entries for all your duplicated files, each with a different (base drive) path. Also, you shouldn't fiddle around with the files at the base drive paths, because that keeps DrivePool from working properly (DrivePool is the only one meant to handle the files directly on the pool's base drives).

     3) Or you can use a technique mentioned here to make Everything scan the base drives but display the files under the pool's path. You still get duplicate entries for duplicated files, but they all carry the expected pool drive letter path, and you can actually work with them (e.g. delete them from the context menu) right in Everything.

     I use methods 2 and 3 in conjunction, which makes each duplicated file appear four times in Everything, but this way I can see which file is duplicated onto which base drive, and I can also work with the files on the pool drive letter (V: in my case) right from within Everything. Here I searched for one specific file, "testprojekt", on a pool with 2x duplication: it shows up under the pool's path (V:) via method 3, as well as under the two base drive paths where the duplicated files physically lie, via method 2.

     I did this with Everything 1.4. There may be new possibilities for achieving method 3 more easily with Everything 1.5, without having to deal with cryptic semicolon-separated config files: community member klepp has had some discussion here with David from voidtools. Anyway, I didn't read through it, since my solution works well at the moment. The path remapping idea behind method 3 is sketched below.
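
     To illustrate the idea behind method 3, here is a minimal Python sketch of the path rewriting, assuming made-up PoolPart folder names and drive letters; Everything does this internally via its configuration, not via a script:

         # Toy model of method 3: hits from the pool's base drives are
         # rewritten so every result appears under the pool's drive letter.
         # All paths here are hypothetical examples.
         BASE_DRIVE_ROOTS = [r"D:\PoolPart.aaaa", r"E:\PoolPart.bbbb"]
         POOL_ROOT = "V:"

         def to_pool_path(raw_hit: str) -> str:
             """Rewrite a base-drive hit to its pool-drive equivalent."""
             for root in BASE_DRIVE_ROOTS:
                 if raw_hit.lower().startswith(root.lower()):
                     return POOL_ROOT + raw_hit[len(root):]
             return raw_hit  # not inside a pool part; leave unchanged

         hits = [r"D:\PoolPart.aaaa\Projects\testprojekt.txt",
                 r"E:\PoolPart.bbbb\Projects\testprojekt.txt"]
         for h in hits:
             print(to_pool_path(h))  # both print V:\Projects\testprojekt.txt

     Combined with method 2's raw entries, that is why a 2x-duplicated file shows up four times.
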
  2. Ah, ok, I guess you must have cloud enabled/connected for it to appear. Here is how it looks for me on v2.3.0.1385.
  3. In the main window, in the upper right, click the gear icon -> Updates... -> gear icon. The settings there are quite detailed and have descriptive names. I guess this will answer your question.
  4. Hi klepp. So, my thoughts on this:

     This will only affect balancers which do trigger immediate balancing, the most prominent being the "StableBit Scanner" balancer, which allows immediate evacuation of damaged drives once a defect/SMART error is found. Otherwise, the evacuation would have to wait until the next scheduled balancing pass, so, eternally, in your case. If you plan to use StableBit Scanner in conjunction with DrivePool eventually, you should check it.

     -> Only relevant if you actually use "File Placement" rules (rules per folder). "Real-time file placement limits" are constraints on where to place NEW files at the moment of creating/moving them to the pool. If you check this, all balancers which use real-time file placement limits (e.g. StableBit Scanner, SSD Optimizer, Drive Usage Limiter) will take precedence over per-folder file placement rules. Otherwise, per-folder file placement rules will override those limits, allowing files to be placed e.g. on defective drives (Scanner plugin), or on drives other than the SSD cache drive (SSD Optimizer), if you have respective rules set up. I guess you want to check this one if you want no files to be placed on drives flagged as defective by Scanner. A toy model of this precedence is sketched at the end of this post.

     Btw., you can see the kinds of balancing rules active in the disk view. For example, real-time file placement limiters (restrictions for placing NEW files) are shown as red markers (in the example, new files will not be placed on F), while normal balancing targets (targeted disk usage for moving EXISTING files around) are shown in blue (there, triggering a balancing pass will move files away from F until the target is reached): https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Disks List#Balancing Markers

     -> Only relevant if you actually use "File Placement" rules (rules per folder). Same as "2 - File placement rules respect real-time file placement...", but for existing files rather than new ones. You basically decide which one takes precedence if you have set a file placement rule for folder X to reside on drive X, but some balancer plugin says X should go on drive Y. But I'm unsure here (@Christopher (Drashna)?) whether this option a) only becomes important once there is no other way to resolve a balancing conflict, meaning balancers will honor file placement rules "if possible" even with this box unchecked, or b) balancers will completely ignore file placement rules if this box is unchecked, even if there were solutions that satisfy both sides. I would have it checked: if you have manual per-folder file placement rules, you probably have them for a reason, so you want balancers (especially blunt, sweeping ones like "Disk Space Equalizer" or "Volume Equalization") to stick to them, too.

     -> Only relevant if you actually use "File Placement" rules (rules per folder). A subcase of "Balancing plug-ins respect file placement...". You should have this one checked. If you're emptying a drive because you are removing it via the GUI or evacuating it after a Scanner alert, you probably don't want any files to be left on it, even if that goes against your per-folder file placement rules.

     You want the Scanner plugin to be first; otherwise, files will be placed on defective drives because the Scanner plugin's rules are overridden by other balancers (the top one has the last word).
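
     As a toy model (not DrivePool's actual code) of the precedence described above, here is how the "respect real-time file placement limits" checkbox decides where a NEW file may go, assuming my reading of the option is right:

         # Toy precedence model: per-folder placement rules vs. a balancer's
         # real-time placement limits (e.g. Scanner-flagged drives).
         def allowed_drives(rule_drives: set, limiter_blocked: set,
                            rules_respect_limits: bool) -> set:
             if rules_respect_limits:
                 # Limits win: a flagged drive stays off-limits even if a
                 # per-folder rule names it.
                 return rule_drives - limiter_blocked
             # Rules win: the file may land on a flagged drive.
             return rule_drives

         print(allowed_drives({"F", "G"}, {"F"}, rules_respect_limits=True))
         # {'G'}
         print(allowed_drives({"F", "G"}, {"F"}, rules_respect_limits=False))
         # {'F', 'G'} (order may vary)
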
  5. Hmm, I'm curious: what kind of I/O errors? Was it a problem with reading the data from the cache drive, or a problem retrieving the data from the cloud? Did you take a screenshot or something of those messages? I guess neither should happen.

     In the Manage Drive menu there is an option Data integrity -> Upload verification. If you enable this, then all data uploaded will be immediately downloaded again to check whether it was stored correctly at your cloud provider. Of course this eats twice the bandwidth, but at least you will immediately be aware of corruption of your cloud data, the connection or something, right at upload time. Think about enabling it for testing purposes; see the sketch below.
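
     The verify-after-write idea is simple enough to sketch; the in-memory "cloud" dict below stands in for a real provider, and this is an illustration of the concept, not CloudDrive's implementation:

         import hashlib

         cloud = {}  # chunk_id -> bytes (hypothetical provider storage)

         def upload_with_verification(chunk_id: str, data: bytes) -> None:
             cloud[chunk_id] = data    # upload
             echoed = cloud[chunk_id]  # immediate re-download
             if hashlib.sha256(echoed).digest() != hashlib.sha256(data).digest():
                 raise IOError(f"chunk {chunk_id} corrupted at provider")

         upload_with_verification("chunk-0001", b"some drive data")
         print("verified OK")  # at the cost of a second transfer per chunk
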
  6. @Christopher (Drashna) Correct me if I am wrong, but wouldn't that break the drive? As far as I understood it, klepp meant to move the chunk/data files from the old cache drive and/or from the cloud drive's Dropbox folder (StableBit CloudDrive Data (xxx-xxx-xxx-xxx)) to the new one. But the data/chunk files are of course not compatible with a new chunk/sector size, so this would break the drive. The described procedure works for DrivePools, because they store files natively, but not for CloudDrives. To change the technical guts of your cloud drive, you have to create a new drive and then just regularly move/sync your files from the mounted old cloud drive to the new one (afaik). See the arithmetic sketch below for why.

     Mind, by the way, that you can easily expand the drive's volume size just via the GUI. You wouldn't need to fiddle with the chunk size or copy your data anywhere just to increase the drive's overall size. Chunk size and sector size are technical parameters for how the data is stored behind the scenes, independent of your cloud drive's volume size.
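
     A small arithmetic illustration (sizes made up, not CloudDrive's internals) of why chunk files written with one chunk size can't be reinterpreted with another: which chunk file a virtual-disk byte lives in depends on the chunk size.

         def locate(byte_offset: int, chunk_size: int):
             """Map a virtual-disk byte to (chunk index, offset in chunk)."""
             return byte_offset // chunk_size, byte_offset % chunk_size

         offset = 7_000_000                      # some byte on the virtual disk
         print(locate(offset, 1 * 1024 * 1024))  # (6, 708544)  with 1 MB chunks
         print(locate(offset, 4 * 1024 * 1024))  # (1, 2805696) with 4 MB chunks
         # A driver expecting 4 MB chunks would look in the wrong file, at the
         # wrong place, for data that was written with 1 MB chunks.
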
  7. Yes, you can access it and force attach it in case the other computer will not access it anymore. I think the license doesn't matter in this case: your license is just about how many computers you can run CloudDrive on; it's independent of the number of providers, of cloud drives at a certain provider, of logins, or whatever. If you connect to any provider that has cloud drives on it (from any computer, with any license), you will see exactly which StableBit cloud drives exist at that location. And then it's up to you which one to attach to, detach from, or force attach in case of disaster. Read this section of the manual and especially take a look at the screenshots; I think they will give you the impression you need: https://stablebit.com/Support/CloudDrive/Manual?Section=Reattaching your Drive
  8. I think the FAQ answers this, actually. The folder structure would probably look like you described. Your Dropbox would be populated with multiple cloud drive folders, distinguishable by ID. I don't use Dropbox myself, but that's how it looks with multiple "local disk" CloudDrives in the same location.
  9. You also cannot change the virtual sector size. It's a thing I regretted when I discovered that MyDefrag cannot handle disks with 4096 bytes per sector.
  10. Hello again klepp, check the FAQ: https://stablebit.com/Support/CloudDrive/Faq
  11. Yep. Even with automatic balancing enabled, your pool will become unbalanced from time to time, for example if you've set up the "Balance Immediately -> No more often than..." rule, or a trigger like "If the balance ratio falls below...". As soon as there is one byte to move, the arrow should appear, but automatic balancing will not start until all the requirements you've set up for it are fulfilled. Until then, you can force an early start of balancing with the arrow. A sketch of such a trigger condition follows below.
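
      For illustration, a trigger like "If the balance ratio falls below..." boils down to a check of this shape (the formula here is a stand-in, not necessarily the exact one DrivePool computes):

          def balance_ratio(disk_usage: dict) -> float:
              """Rough evenness measure: 1.0 means perfectly even usage."""
              values = list(disk_usage.values())
              return min(values) / max(values) if max(values) else 1.0

          usage = {"D": 0.80, "E": 0.40}  # fraction of each disk used
          threshold = 0.90                # the "falls below" setting
          print(balance_ratio(usage) < threshold)  # True: the arrow appears,
          # but automatic balancing still waits for the other conditions
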
  12. Your balancing pass would have nothing to do; that's why there is no arrow. A green filled bar means your pool is perfectly balanced according to the balancing (and file placement) rules you've set up.
  13. Well, it's not just due to encryption; it's also that CloudDrive stores data at the block level of a virtual drive, not at the file level. That means you couldn't just download your Macrium image file even without encryption, because it's spread over a lot of chunk files (see the sketch after this post).

      So, for your use case, you can create a CloudDrive at your Dropbox with any local drive as the local cache drive (e.g. your spinners); that will work. You can also access your images from another computer in case your kids burn down your house. BUT, as you said, you will need to install CloudDrive on another computer in order to download, decrypt and assemble your data. CloudDrive is NOT made to be a shared drive in the cloud that's simultaneously used from many computers; in fact, this will corrupt your data. But if you know for sure that you will never access that drive from your old computer again (because it burned down), you can force attach it from another computer: https://stablebit.com/Support/CloudDrive/Manual?Section=Reattaching your Drive#Force Attaching

      As always, make sure your decryption key is not on a post-it in your burned-down house, just in case. Btw.: other services like Google Drive or OwnCloud-based ones offer to set up multiple synchronization folders per machine (not just one "D:\Dropbox" like Dropbox); that would also solve your problem.
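
      Here is the sketch mentioned above of why a single file can't simply be fetched from the provider: the filesystem scatters it across virtual-disk blocks, and fixed-size spans of that disk are what get stored as chunk files (all numbers made up for illustration):

          CHUNK = 1024 * 1024          # 1 MB chunk files at the provider
          file_extents = [             # (start offset, length) of one file
              (3_500_000, 400_000),    # large files are often fragmented
              (9_100_000, 250_000),
          ]

          needed_chunks = set()
          for start, length in file_extents:
              first = start // CHUNK
              last = (start + length - 1) // CHUNK
              needed_chunks.update(range(first, last + 1))

          print(sorted(needed_chunks))  # [3, 8]: the file is spread over
          # several chunk files, each of which may also hold other data
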
  14. I don't think so. The SSD Optimizer is no RAM disk or the like; it just makes new files get written to the designated SSDs in your pool first, with those SSDs already being a "proper" part of your pool. So as soon as your data is written to the pool's SSDs, it's already a regular part of your pool, duplicated etc. like any other file, regardless of whether it currently sits on the SSDs or later on the HDDs. But there are some differences in handling, e.g. that you have to supply two SSDs for the SSD Optimizer if you want duplication, or different behaviour when running full. I wrote something about it here. A toy model of the write path is sketched below.
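
      As a toy model (not StableBit's code) of that write path: new files land on the SSD pool members first and are migrated to HDDs later by a balancing pass; the fallback when there are fewer SSDs than the duplication count is an assumption for illustration only.

          def place_new_file(duplication: int, ssds: list, hdds: list) -> list:
              """Pick target disks for each copy of a new file."""
              if len(ssds) >= duplication:
                  return ssds[:duplication]       # 2x duplication needs 2 SSDs
              return (ssds + hdds)[:duplication]  # assumed fallback to HDDs

          print(place_new_file(2, ["SSD1", "SSD2"], ["HDD1", "HDD2"]))
          # ['SSD1', 'SSD2'] -> later moved to HDDs by a balancing pass
          print(place_new_file(2, ["SSD1"], ["HDD1", "HDD2"]))
          # ['SSD1', 'HDD1'] (assumed behaviour for illustration)
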
  15. I always thought the straightforward way was to let PrimoCache cache the virtual pool drive itself (which would also prevent it from caching duplicates twice). But I guess it only lets you choose physical drives, judging from your question?