Posts posted by Jaga

  1. Since the "Reconnect.." menu item is present and available, one has to wonder if your cloud drive is actually connected.  Try choosing Reconnect and see if the menu then offers selectable options.  Cleanup, Detach, Reauthorize, and Resize are all greyed out, leaving me to think there's some communication issue with the host of the files.

    How did you configure this Cloud Drive?  More info might be helpful in determining what's going on.

    This is what I see for a simple File Share Cloud Drive when I open its menu.

    [Attached screenshot: V0rdtF8.png]

     

    Changing my drive to Read Only just greys out Resize - all the others remain available.

  2. I think the generally accepted solution is:  DrivePool is a lifetime license, easily capable of being installed on any Windows platform and reading a pool made up of drives from any other pooled machine (i.e. just connect the drives to the other machine and install DP).  Because of that ease-of-use, just make the pool, give it a drive letter, and don't even think about which files are on which physical disks.  Forget about them entirely - they are now merged into a new virtual disk.

    The only reason I could see this as inconvenient was if you were thinking of individual drive portability.  Shutting the machine down, taking one drive somewhere else and accessing complete folders there instead of where the pool was created.  BUT if you're seriously contemplating doing that, just use pool duplication and you'll have completely separate copies of your files on multiple drives.

    Methinks you're overworking the problem a bit.  :)  Unless..  there is another reason you need the physical drives so ordered and structured that we don't know about.  There's a reason it's called pooling, and if that's not useful then perhaps the software isn't what you need.

    You can *still* force specific folders onto specific drives, but to do it for thousands of folders you'll have to have a better hierarchical folder scheme.  Not just one top folder and thousands of sub-folders directly under it.  Sort them further by type, or by date, or by size, or by usage - anything to put them into deeper sub-groups so File Placement is more helpful.

  3. 3 hours ago, Jose M Filion said:

    So I changed my DNS to google and I updated to the latest version (beta) and now it's more stable I just get these messages now, but it's more stable.  I'm going to try out your suggestion @jaga , I think it's xfinity, since I switched from at&t  this started happening so It has to be the ISP, but other suggestions def. have helped it become more stable.  Thank you guys for the support! 

    Awesome, keep at it - make them give you what you pay for.  Comcast (Xfinity) can be *very* hit and miss, depending on your geographical location.  They are my ISP too, and as mentioned I had to involve the FCC with concrete data supporting the fact the old line I had was crap.  If you have telephone service from them and have issues with it as well, they are obligated by law to respond within 24 hours, due to 911 exposure.

    Good luck with it, and let us know how it goes.

  4. You'll want to use File Placement Rules on folders to ensure they stay on specific drives.  Balancing settings, File Placement tab, Folders tab.

    https://stablebit.com/Support/DrivePool/2.X/Manual?Section=File Placement

    If your files are currently spread out over multiple disks (per folder) you will want to set up the desired rules, then initiate a re-balance to have it move them into place.

  5. That should work just fine.  DrivePool isn't reliant on the OS version.  As long as it sees the hidden PoolPart folders on connected drives at boot/startup, it'll remount the pool without issue.

    Be sure to contact CoveCube support and ask them to deactivate the DrivePool license that was activated on that computer, or you won't be able to re-activate using it.

    I haven't used Veeam (yet), though I evaluated it last week after reading more positive comments elsewhere on it.  I'm currently using Macrium Reflect, which (for servers) has a rather heavy expense associated.  I'll probably end up using Veeam on my new WSE 2016 server upgrade.

  6. Suggestion:  pick up a copy of PingPlotter, enter in googleapis.com, set it for 5 or 10 second resolution, then run it.  You get 14 days on the Standard/Pro edition to collect unlimited samples, and can switch to the free edition after that.  See what kind of reliability you get over 24/48 hours, and what kind of pings you get on each hop.  It might indicate any problems along the route, or if your connection drops out entirely.  Attached a sample plot on my VPN connection to this post so you can see the kind of info you get from it.

    You can always just run a ping test from a command prompt, but you lose the per-hop detail that way.  (I've sketched a quick scripted alternative at the end of this post.)

    I used this method when proving to my ISP that there were connection issues.  It helped me to successfully get the FCC involved and force the ISP to fix their hardware.

    Edit:  I left mine running a little while after making this post, and the IP resolution changed on-the-fly.  That's DNS and/or Google doing it, not me.  Really nice info you can get on packets and hops this way.

     

    [Attached plot: Plot1.png]
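
    If you'd rather script a basic version of this yourself, here's a rough Python sketch (an illustration only - the host, port and interval are placeholders, and it won't show per-hop detail like PingPlotter, but it will catch dropouts over time):

        # Rough latency/dropout sampler: measures TCP connect time to a host
        # every few seconds and appends the results to a CSV for later graphing.
        # Host, port and interval are placeholders - adjust to taste.
        import csv
        import socket
        import time
        from datetime import datetime

        HOST = "googleapis.com"   # same target suggested for PingPlotter
        PORT = 443                # HTTPS port; connect time is a decent latency proxy
        INTERVAL_SECONDS = 10     # 5-10 second resolution, as suggested above
        LOG_FILE = "latency_log.csv"

        def sample_once(timeout=5.0):
            """Return connect time in milliseconds, or None if the attempt failed."""
            start = time.perf_counter()
            try:
                with socket.create_connection((HOST, PORT), timeout=timeout):
                    return (time.perf_counter() - start) * 1000.0
            except OSError:
                return None

        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.writer(f)
            while True:
                ms = sample_once()
                writer.writerow([datetime.now().isoformat(),
                                 "timeout" if ms is None else round(ms, 1)])
                f.flush()
                time.sleep(INTERVAL_SECONDS)

    Let it run for 24/48 hours and the timeouts in the CSV will show you exactly when the connection dropped out.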

  7. 4 minutes ago, madbuda said:

    More on this, say my primary server is down for an extended maintenance or failure... I would need to install cloud drive on the second server and use the local folder in order to read this data??

    From what I understand is this data is not readable without CD, as in I cannot just browse to a folder and see a file

    Yep, the Cloud Drive data is stored in hidden "CloudPart" folders (or in the case of File Sharing, a hidden "StableBit CloudDrive Share Data" folder) inside the storage volume / provider you used.  You could either copy across the configuration folder (the service folder mentioned a few posts up) from the first computer and then edit CD's config to point to the new location, or just install Cloud Drive and re-configure it so it knows where to find the data.  Keep in mind if the first server went down ungracefully, you'll have to contact CoveCube to release that license for activation on the other computer.

    And no - it's stored in chunks that are .dat files in there, so unreadable without Cloud Drive.
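
    If you're curious what's actually sitting in there, a quick Python walk of the provider folder will show nothing but opaque chunk files.  Just a sketch - the path below is a placeholder for wherever your hidden CloudPart folder actually lives:

        # Quick look inside a CloudDrive provider folder: count the .dat chunk
        # files and total their size. The path is a placeholder - point it at the
        # hidden "CloudPart*" folder (or "StableBit CloudDrive Share Data" folder
        # for File Share providers) on your storage volume.
        from pathlib import Path

        CHUNK_FOLDER = Path(r"D:\CloudPart.xxxxxxxx")  # placeholder path

        count = 0
        total_bytes = 0
        for dat in CHUNK_FOLDER.rglob("*.dat"):
            count += 1
            total_bytes += dat.stat().st_size

        print(f"{count} chunk files, {total_bytes / (1024**3):.2f} GiB total")
        print("None of these are your original files - only CloudDrive can reassemble them.")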

  8. 12 minutes ago, hammy said:

    It just seems odd that the "duplicated files" are so close to exactly double! It definitely doesn't count "duplicated" files twice (ie, once for their original file, and again for the duplicate counterpart)?  

    Also, just curious, in your opinion is it actually worth it to have duplication enabled for read i/o? I have (at most) 5-6 people accessing my home server at once just so family can watch some movies off the drives. Worth it for that or is that not an intensive enough scenario for it to matter?

    I'm definitely not the expert in duplication and metadata.  Perhaps one of the developers can check over the numbers and offer an opinion.  Normally when you set 2x duplication on a folder, once DrivePool *finishes* duplicating it, the files are *both* masters and completely identical (size, contents, permissions, flags, etc).

    If you can spare the space for the duplicates (and it looks like you can), then read i/o can definitely help with performance.  If you had all 5-6 people hitting the same drive, and more than one was streaming UHD..  that drive would be under a bit too much stress.  Not a high likelihood, but possible.  And it's super easy to disable folder duplication later on, regaining the lost space.

  9. From what I understand, the metadata is simply information describing the system, its files, and the overall folder structure of your Pool, including the duplicated items.  With rounded numbers, it sounds close to what it should be.  The extra space may not even be used by the metadata - it could be a miscalculation by DrivePool of the space used.

    You can always force a re-check of duplication by clicking the little gear in the upper-right corner, choosing "Troubleshooting", and then "Recheck duplication...".  If there were any inconsistencies, it may iron them out.

  10. Absolutely, quite easy in fact.  When you add the top-tier pool in the Hierarchy (which consists of all the smaller pools), have a SSD partition (or entire drive) connected to the same machine, and add it to that top-tier pool as the SSD Cache drive via the SSD Optimizer plugin.  https://stablebit.com/DrivePool/Plugins

    You'd effectively create a high-speed file-write buffer for the entire set of pools.  Though remember that anything residing on the SSD Cache drive isn't duplicated *until* it hits the sub-pools in the hierarchy.  I'm not completely sure how aggressive the plugin is at moving files from the SSD Cache to the rest of the pool - that's a question for @Christopher (Drashna) to answer I think.  So it's a point of failure, but a minor one.

  11. 8 minutes ago, hammy said:

    Thanks so much for the help. I did 2x for the individual movies folder, everything else is at 1x (except the metadata folder at 3x). As far as I can see, file duplication is not on/I didn't turn it on. 

    Would it be better to possibly partition my C drive then add that portion to the pool? Thanks again!

    That's normal for the Metadata.  Sounds like your duplication is as expected, just taking more space than you'd hoped due to the metadata usage.

    Partitioning is a great way to control use on a drive, yes.  If you can manage to shrink the C: volume enough, drop another partition on there solely for DrivePool.  OR you can consider that second SSD partition for use with the DrivePool SSD Optimizer plugin, to speed up the pool as a whole (https://stablebit.com/DrivePool/Plugins).  It effectively adds the SSD partition to the Pool as the landing point for all new files (for faster access), then slowly offloads them to the rest of the pool over time and with use.  That's probably how I'd use the space if I were you.

  12. What level of folder duplication did you choose?  2x, 3x, 4x, more?  It determines the number of copies spread across your drives, and the resulting space used by duplicates.  If you enabled both folder/file duplication and drive duplication (on the entire pool) at the same time, it complicates figuring out the total space used.  Knowing the duplication level you set for everything would help - there's a rough worked example at the end of this post.

    While you can add the C drive to the pool (to take advantage of left-over space), you may have a hard time controlling the pool's use of that drive, and it may fill up the hidden PoolPart folder more quickly than you want.  You can create File Placement rules to try and tell DrivePool what you want kept on there, to keep it under control.  The current contents of the C: drive will never show in a pool.
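
    Here's the rough worked example I mentioned - a back-of-the-envelope sketch with made-up folder sizes, just to show how the duplication levels multiply out:

        # Back-of-the-envelope estimate of pool usage under per-folder duplication.
        # Folder sizes and duplication levels here are made up - substitute your own.
        folders = {
            # folder:           (size_in_GB, duplication_level)
            "Movies":           (4000, 2),   # 2x -> two full copies on different disks
            "Everything else":  (1500, 1),   # 1x -> single copy
            "Metadata":         (1,    3),   # 3x, as the pool metadata folder uses
        }

        logical = sum(size for size, _ in folders.values())
        on_disk = sum(size * dup for size, dup in folders.values())

        print(f"Logical data: {logical} GB")
        print(f"Expected space used on disk: {on_disk} GB")
        print(f"Overhead from duplication: {on_disk - logical} GB")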

     

  13. There are many ways to accomplish it, using either 1 or 2 copies of DrivePool.  2 DrivePool installations make things a bit smoother overall.  Here are two ways I'd envision setting it up:

     

    First solution (2 copies of DrivePool, 1 copy of Cloud Drive):

    • Server A: Pool 1 (local resources)
    • Server B: Pool 2 (local resources) - Pool drive shared to Server A
    • Either Server: Pool 3 (Gdrive resource via Cloud Drive)
    • Whichever server manages Gdrive via Cloud Drive, create your Top-Tier Pool there (using Hierarchical pooling) consisting of Pools 1/2/3 and share back to network.
    • The machine using Cloud Drive will control all duplication via DP on the top-tier pool.  Both machines would share duplication workload.

     

    Second Solution (1 copy of DrivePool, 1 copy of Cloud Drive):

    • Server A: Pool 1 (local resources)
    • Server B: share storage space to Server A via network
    • Server A: Pool 2 (network storage space) - Pool is built either using drive mappings from Server B, or via Cloud Drive's File Sharing feature (where Cloud Drive creates a Drive on the network resource and mounts a letter for it locally)
    • Server A: Pool 3 (Gdrive resource via Cloud Drive)
    • Server A in this case would contain all sub-pools, and the top-tier Hierarchical pool for sharing back to the network.  It would control all duplication, and have virtually all of the workload.
    • Server B in this case is just a resource for space.

     

    The cleanest and best balanced implementation would be the first, though it requires 2 copies of DrivePool.

     

    There are three ways to handle drive mappings for Pools across the network:

    • iSCSI drive mappings are faster than other methods and have no overhead, but not very flexible.
    • Mounted network folder shares are easy to setup, slower than iSCSI, but faster than Cloud Drive's File Sharing.
    • Cloud Drive's File Sharing, which allows you to control space used on the target resource.  Slower than other methods, highly flexible.

     

    @Christopher (Drashna) - do we have a way for multiple networked installations of DrivePool to see each other's Pools and include them as children in higher tier Pools, *without* first mapping drive letters for them?  Seamless interoperability across the network would be a nice feature for server clusters, and help cut down on drives/letters.

  14. 9 hours ago, madbuda said:

    Lastly, can you import an existing cloud drive on another install?

    It should be possible by copying over the settings' JSON file to the same location on the other machine.  You may also need to copy the "store" folder across along with its contents.  I'd just copy the entire "service" folder across for good measure (there's a rough copy sketch at the end of this post).  You can find info on the settings JSON here.  If you're using file share cloud drives, you'll want the resource shares to be mapped the same way on the second machine.

    Not sure if you can run dual-access (two Cloud Drive installations accessing the same cloud resources) as they'd probably fight if connected at the same time, though you can use multiple cloud drives per provider/service.
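
    And here's the copy sketch I mentioned - just an illustration, since the paths are my guess at the defaults.  Verify them on your own install, and stop the CloudDrive service on both machines before copying:

        # Rough sketch: copy the CloudDrive service/settings folder from one machine
        # to another over an admin share. Both paths are assumptions - verify them on
        # your systems, and stop the StableBit CloudDrive service on both machines first.
        import shutil
        from pathlib import Path

        SOURCE = Path(r"C:\ProgramData\StableBit CloudDrive\Service")               # old machine (assumed default path)
        DEST   = Path(r"\\NEWSERVER\c$\ProgramData\StableBit CloudDrive\Service")   # new machine (placeholder share)

        shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
        print(f"Copied {SOURCE} -> {DEST}")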

  15. Great, good luck with it.  If you find Cloud Drive slower than you need, either increase the local cache size, or switch to a direct iSCSI share added to the pool and re-test the speed.  It's just a question of flexibility with Cloud Drive vs raw speed with the iSCSI volume at that point.

  16. Using the File Share option in Cloud Drive is relatively simple.  Point it to a network path and give it authentication info.  You can create whatever size volume you need on that share and add it directly to the Pool.

    https://stablebit.com/Support/CloudDrive/Manual?Section=File Share

    Due to CloudDrive's flexible volume parameters and local caching, I would personally prefer it for this over a static iSCSI volume share.  Though ultimately, the iSCSI volume added directly to the pool would be faster.

     

  17. A file placement rule would unfortunately not work here - it would take all files you told it to watch for (a filemask of *) and fill up the drive you were trying to limit (to a %) before all other drives.

    The Ordered File Placement plugin will similarly fill up drives in the order you specify.  They are sequentially queued for filling, which you don't want.

    You can currently only limit drives in the pool using the Drive Usage Limiter plugin to a global percentage full, applied equally.  Perhaps someday that plugin will allow separate sliders per drive.  I'd personally like to see that as an option.  i.e. a 90% global limit across all pool drives, where a single 4TB drive is also limited to 50% space used, equates to 1.8TB usable space for DrivePool.  Or..  the most limiting setting is the effective one.  (Rough numbers at the end of this post.)

    The easy way I see this being solved for you is separating the physical drive into multiple partitions, sizing them according to how much space you want to give DrivePool on the first, and yourself on the second.  You will need to shrink the existing partition on the drive to get enough space for the second.
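
    For the rough numbers I mentioned - a tiny sketch of how those two interpretations would shake out on a 4TB drive (the 50% per-drive slider is hypothetical, since the plugin doesn't offer it today):

        # Rough numbers for the limiter example above: a 4TB drive with a 90% global
        # limit and a hypothetical 50% per-drive limit, under the two interpretations.
        drive_tb     = 4.0
        global_limit = 0.90   # applied to every drive in the pool
        drive_limit  = 0.50   # the per-drive slider I'd like to see

        combined      = drive_tb * global_limit * drive_limit       # limits multiply
        most_limiting = drive_tb * min(global_limit, drive_limit)   # only the strictest applies

        print(f"Combined limits:    {combined:.1f} TB usable")        # 1.8 TB
        print(f"Most limiting wins: {most_limiting:.1f} TB usable")   # 2.0 TB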

  18. 1 hour ago, johnnj said:

    I really need to stop messing around with these cheesy jbods and do what you do with SAS and a multibay server case.  This box was only about 2 months old and it's out of service already.

    I think Mediasonic offers a 1 year warranty on all enclosures they sell.  That should hold for those sold through 3rd party sellers as well.  Unless of course, it was refurbished or sold used.

  19. 3 hours ago, PossumsDad said:

    It was actually easy to upgrade. I downloaded the Media Creation Tool from Microsoft.

    I opted to save the files as a ISO. I extracted the files from the ISO into a install directory on my NVMe drive. Then typed setup.

    It installed Windows 10 Pro since I was upgrading from Windows 8.1 Pro. Once the upgrade was completed Windows activated itself. I wasn't prompted for a product key or anything. So it looks like you can still get the upgrade for FREE! I'm not complaining.

    I hope this info can help someone else get a free copy of Windows 10.

    You might want to double-check the activation on that installed copy of W10, and verify it (there's a quick scripted check at the end of this post).  Even using the media creation tool, supposedly the resulting copy of W10 isn't licensed.  It would be nice to think that it was, but..

    https://www.zdnet.com/article/heres-how-you-can-still-get-a-free-windows-10-upgrade/  (scroll to the bottom)

    and..

    https://www.ghacks.net/2015/08/01/check-if-windows-10-is-activated/

    The only exception Microsoft built into their ruleset was if the PC was previously upgraded to W10 during the free upgrade period.  It can then be re-upgraded down the road for free.
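
    For the scripted check I mentioned - a small Python sketch that just shells out to Windows' own slmgr.vbs tool, rather than digging through the Settings app:

        # Quick activation check by shelling out to Windows' built-in slmgr.vbs.
        # "/xpr" reports whether the machine is permanently activated. Just a sketch -
        # run it from an elevated prompt if you get access errors.
        import subprocess

        result = subprocess.run(
            ["cscript", "//nologo", r"C:\Windows\System32\slmgr.vbs", "/xpr"],
            capture_output=True, text=True,
        )
        print(result.stdout.strip() or result.stderr.strip())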

  20. 4 hours ago, jergo said:

    Just thought of something...my Windows temp file has been remapped to a different drive, due to having a solid state drive as my C drive and being in the process of re-encoding about 1000 movies with Handbrake. I didn't want to wear out my solid state drive, so I moved the temp file to a different, regular drive. Could this be my problem?

    The Scanner settings JSON files (all that I can find anyway) are in the "C:\ProgramData\StableBit Scanner\Service\Store\Json" folder, so it shouldn't have anything to do with the Windows temp folder locations.  You might want to check that directory and see if they exist in there, and check some of their file creation dates.  If (after rebooting) it is re-creating the files from scratch (or wiping the folder contents), you should be able to tell.
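
    If you'd rather not eyeball it in Explorer, a quick Python sketch of that check (using the folder path above) could look like this:

        # List the StableBit Scanner settings JSON files and their creation times,
        # so you can tell after a reboot whether they were wiped and re-created.
        from datetime import datetime
        from pathlib import Path

        STORE = Path(r"C:\ProgramData\StableBit Scanner\Service\Store\Json")

        if not STORE.exists():
            print(f"Folder not found: {STORE}")
        else:
            for f in sorted(STORE.glob("*.json")):
                created = datetime.fromtimestamp(f.stat().st_ctime)
                print(f"{created:%Y-%m-%d %H:%M:%S}  {f.name}")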

  21. 32 minutes ago, Christopher (Drashna) said:
    • The new JSON format for the setting store (located in "C:\ProgramData\StableBit Scanner\Service\Store\Json") should be much more resiliant. 
      In fact, I tested out copying the folder elsewhere, nuked the settings, and restored the JSON folder.  I did this on my production server, and can verify that it works.  So, if you backup the folder in question, you setting should be safe, from here on out.

    Might be a nice option to have an import/export function for the settings in the various products.  PrimoCache and Primo Ramdisk have it, and I use it when re-creating configs all the time.  It has saved my bacon more than once.  Just bundle the JSON folder into a .zip and allow the user to specify an export destination.  On import, unbundle and write (with optional overwrite verification).  Just a suggestion.  :) 
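
    In the meantime, a DIY version is only a few lines - this sketch just bundles the JSON store mentioned above into a zip (the destination path is a placeholder), and restoring is the reverse:

        # DIY export of the Scanner settings store: bundle the JSON folder into a
        # zip. Restore by extracting it back to the same location (stop the Scanner
        # service first). The destination path is just a placeholder.
        import shutil

        STORE  = r"C:\ProgramData\StableBit Scanner\Service\Store\Json"
        EXPORT = r"D:\Backups\ScannerSettings"   # placeholder destination (no extension; .zip is added)

        archive = shutil.make_archive(EXPORT, "zip", root_dir=STORE)
        print(f"Settings exported to {archive}")

        # To restore:
        # shutil.unpack_archive(archive, STORE)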
