
Carlo

Members
  • Posts: 32
  • Joined
  • Last visited
  • Days Won: 7

Everything posted by Carlo

  1. I'm sure I'll get used to the GUI, but it's just not intuitive in Windows programs to have no menu choice or "Tools" icon that you click on to bring up options. In this case there is sort of a tools icon, but it doesn't do what you initially think it should do. Having to look halfway down the page and then find a VERY SMALL drop-down under a graphic is not intuitive, nor is it common practice in Windows programming. That's what I meant by not "conventional" in the above post.

     Carlo

     PS: But GUI design is second in my book to stability and functionality.
  2. Yes, but what I'm finding is that it's completely filling up whatever drive I assign to the cloud drive, since my pool is over 25TB but any single drive is at most 6TB in size. In my last case I assigned an empty 4TB drive I had just set up for DrivePool use, since I needed more local storage. The CloudDrive "sync" feature then started building its local cache of the 25TB of data. It of course filled up the 4TB drive. After it fills up I get a slew of errors/retries, and of course this 4TB of storage is removed (in practical terms) from my pool, since it has no empty storage left. Considering that during setup I selected NONE as the cache size, this seems to be a bug by any logical account. Otherwise, what is the cache setting used for?

     I've set up limits for each drive: all my drives are set to hold non-duplicated content ONLY, except for the cloud drive, which is set to duplicated only. While I know CloudDrive is a separate product, it should honor the settings used by DrivePool, since the cloud drive has in fact been added to the DrivePool in order to duplicate files. Many of us are going to use the cloud drive as nothing more than a "backup", or put another way, a repository for an off-site set of duplicated files. But if this doesn't honor the limits we put on drives, then it's a game changer and we might be best off just using xcopy or similar, where we retain complete control.

     I personally don't care if it wants to use "some" local storage space, but it shouldn't just take over the local drive and suck up every bit of storage space (then start to error out). Does this make sense?

     Carlo
  3. I'm presently using CloudDrive with ACD. I have the cloud drive as drive D, currently set up as a 50TB drive, and have added this to my DrivePool (F:). I set drive D up to hold DUPLICATE files ONLY and set all other drives for NON-duplicate files. So in a nutshell, the cloud drive will be the only place to receive the duplicate data. All so far is fine.

     HOWEVER, when I create a cloud drive it asks for the local cache size, and I choose NONE, but I'm still forced to choose a local drive letter. My first go-round I chose my local 1TB SSD drive with 640GB still free on it, thinking it would be used for a file or two during transfers, which I'm OK with. However, an hour or two into testing, my C drive (SSD) is full, and looking at the drive I find a new hidden folder starting with "CloudPart." that has 640GB of files. WHY? I chose NONE for local caching, so I'd think it should not use any space but transfer directly to the cloud. I can understand that if I chose a large local cache it would use this space, but for me I'd prefer it not use any local space beyond reason for working purposes. I certainly don't want it to try and duplicate my data locally first before uploading, as that defeats the purpose and will absorb "free space" better served for local media.

     I killed the previous cloud drive and created a new one, this time pointing the local "cache" (even though it's set to NONE) to an HDD spindle drive. This too seems to be doing the same thing. It's just growing and growing. This is/was a brand new 4TB drive that I added to DrivePool for more storage, so I'm curious to see how much it will use, if not all of it, before throwing errors that the drive is full (which happened before). Regardless, it is using up space that I'd rather have dedicated to local storage (i.e. DrivePool F:) than have act as a cache for cloud uploads.

     Am I missing something in my setup or is this a bug?

     Carlo

     PS: It's going to try and duplicate around 25TB.
  4. I think I might have figured this out. If someone could verify I've done this correctly I'd appreciate it.

     1) Create a cloud drive and add it to DrivePool
     2) From DrivePool, open the "Balance" option
     3) Go to the Balancer tab up top
     4) Select the "Drive Usage Limiter" option
     5) Click the "Drive Usage Limiter" option on the left to select your drives on the right
     6) Set all local drives as "Unduplicated" and set cloud drives as "Duplicated"

     Is this correct, or is there a better way?

     Carlo
  5. Never mind, found the option. I must say this GUI design takes some getting used to and doesn't follow conventional designs at all!
  6. I'm at a loss trying to figure out how to delete a cloud drive. I originally created a 50GB Amazon Cloud Drive but meant it to be 50TB. For the life of me I can't figure out how to remove this 50GB drive. Any suggestions?

     Carlo
  7. I'm trying to follow the directions for this but coming up at a loss. Any chance specific directions could be given to show how to set this up? I, for example, have about a dozen drives as part of my local storage and part of DrivePool. I have created a 50TB drive using CloudDrive with Amazon Cloud Drive. What I want to be able to do is have it duplicate all LOCAL media to the 50TB cloud drive:

     1) No duplicates should be local
     2) The cloud drive holds ONLY duplicates of the LOCAL media

     Essentially this would become an "automated backup" if it works. If this can be done, I'd like to ask for an option to be added to the next version:

     1) The ability to set this up to NEVER delete a file in the cloud drive if it is locally deleted, unless given permission. Updates yes, but no deletes.
     2) Should be configurable per cloud drive
     3) Have the option, via the GUI, to perform the deletes ONLY when clicked. This would help with user error (accidental deletion of a directory, for example) or if something happens to a local drive and the drive pool thinks the files are gone (but they're not supposed to be).

     Essentially what I'm asking for is an automated "XCOPY" operation that can create or update media but never delete it "automatically" (a rough sketch of this behaviour follows below). IMHO backups (online or local) serve 2 purposes: they protect against hardware failure and against user screwups. I hate to admit it, but I've lost more data over time due to user goofs than hardware issues. By not automatically deleting data in the cloud, I can get some protection against the 2nd cause of data loss.

     Thanks,
     Carlo
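
     For illustration, here is a minimal Python sketch of the "create or update, but never delete" behaviour described in the post above. The source and destination paths are hypothetical placeholders (not paths taken from these posts), and this is only one way such a one-way mirror could work, not how DrivePool or CloudDrive actually implement duplication.

     # Minimal sketch of a one-way mirror that never deletes anything on
     # the destination side. Paths below are hypothetical examples.
     import shutil
     from pathlib import Path

     SOURCE = Path(r"F:\Media")        # hypothetical: local pool content
     DEST = Path(r"D:\MediaBackup")    # hypothetical: folder on the cloud drive

     def mirror_without_deletes(source: Path, dest: Path) -> None:
         """Copy new files and overwrite changed ones, but never remove
         anything from the destination, even if it vanished from the source."""
         for src_file in source.rglob("*"):
             if not src_file.is_file():
                 continue
             target = dest / src_file.relative_to(source)
             # Copy only when the file is missing or the source copy is newer.
             if not target.exists() or src_file.stat().st_mtime > target.stat().st_mtime:
                 target.parent.mkdir(parents=True, exist_ok=True)
                 shutil.copy2(src_file, target)

     if __name__ == "__main__":
         mirror_without_deletes(SOURCE, DEST)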