Covecube Inc.

kenwshmt

Members
  • Content Count: 32
  • Joined
  • Last visited
  • Days Won: 3

kenwshmt last won the day on September 23 2017

kenwshmt had the most liked content!

About kenwshmt

  • Rank: Advanced Member
  1. Am I supposed to only use that for offlining? Find the relevant directories in the CloudDrive disk's pool path and set them offline? Kind of clumsy.
  2. Can someone explain how this is supposed to work? If the cloud drive uses a real drive, but that drive is assigned to a pool, the pool immediately fills it up and makes the cloud drive slow down. The only thing I can think to do is set its usage to nothing in DrivePool so it starts to evacuate the drive, but even then I won't be able to write to it.
  3. How can a page dedicated to watching ants carry pieces of plants up and down a tree not have a link to GlassWire? GlassWire is a bandwidth monitor: see what's going out and in, over time. https://www.glasswire.com/
  4. Not intending to use this for games, but for most directories it will relocate your directory path, by hard link, to the virtual one used by CloudDrive (see the relink sketch after this list). So far it doesn't work with any other active shares of that directory (Dropbox, MEGAsync). I haven't tried disabling them, running the hard linker first, and then running them again to see if they work at that point. I have been actively moving several 'passive', rarely used paths this way: they still appear at the old path via the link and open normally, but the data is going to the CloudDrive path. I DID use it on MEGAsync paths while meg…
  5. What are the best practices on a new install? I have a system with a 1 TB work partition; I created a cloud drive with that same 'local cache' size and am currently migrating files to it. The local cache is 1 TB, but the cloud drive is 10 TB. My question is: how can I 'offline' directories without moving them offline? Retrieval speed doesn't matter to me once I've marked them as offline, but I don't see that as an option. I'm wanting something sort of like Dropbox's 'offline'; directory level would be fine, so long as moving an online file to an offline path by a file move doesn't mea…
  6. The system the drive pool is on is increasingly problematic. I think I can run it long enough for the migration to get off of the internal drives. How can I transfer the whole set to another machine?
  7. REALLY liking odrive. While it's a bit 'all the eggs in one basket', it's great for offlining stuff you need to have but will probably never retrieve again. It definitely has a file size limit; it seems to have problems with Amazon Cloud at about 6 GB, so if you use it there you might split files to less than 4 GB or so. It's not perfect, and beware updates: they can invalidate your encryption, and that's a messy 'remove / reattach'. It's really messy if you use the 'out of the odrive' settings; not update friendly at all. So live in the odrive, in as few encrypted paths as possible. b…
  8. I wasn't thinking of using it in the drive pool path, more as a backup destination. If DrivePool knew what the additional suffixes meant (.cloud for files, .cloudf for folders), then the content would be considered backed up and not take any space locally; or more to the point, it would still be considered backed up when it became zero length and renamed. I don't know how things are 'restored' from a duplicate backup path in DrivePool, but this would require launching it by association. odrive does allow command-line retrieves: the file is restored and renamed.
  9. If there were a tweak you could make to DrivePool to be odrive-aware, that would be wonderful. This is an unmapped directory; double-clicking it retrieves the next level of the directory structure and any download markers it contains: Spaces.cloudf. And this is an offlined file: NetDrive2_Setup_2_6_1_689.exe.cloud. Retrieval only works locally: while you can share a sync path, you can only contribute to it, and when something is offlined you have to retrieve it from a station running odrive. Encryption / passwords are done at the directory level (a placeholder-listing sketch follows this list).
  10. Yes. Think of it as an un-syncing Dropbox. You install it wherever; all of the stations can contribute to the same path, but if they aren't configured to retrieve anything, all you will see on them is a download marker. These are all using the same account. It does no duplicate removal or compression; for that you need something like Arq. I've managed to clean off nearly 1 TB so far to Amazon Cloud.
  11. I started using it this week, and it's $100 a year, so there is that. The professional service does a 'delete' after a successful upload: you put 10 GB in it, and once the upload has succeeded it will delete that 10 GB and replace it with download markers. There does seem to be a 4, 6, or 8 GB file size limit, so be sure to split the really big files (a file-splitting sketch follows this list). https://www.odrive.com/account/myodrive
  12. Backup is backup, whatever makes it. Brave on with one method, but if another happens to work too, why not? It's another backup. One isn't necessarily superior, as things like Amazon Cloud are limitless and both can be used concurrently. I will mention whether it is a folly or not; I will have one Tuesday.
  13. Synology NAS: https://www.synology.com/en-us/dsm/5.2 'Also, Cloud Sync now offers one-way sync options, making things easier when backing up data to a public cloud or vice versa, while data encryption guarantees files sent to the public cloud remain for your eyes only.' The NAS drives with DSM (DiskStation Manager) 5.2 and later can do one-way transfer to cloud services, with encryption, and if that's not enough, I'm sure something like Boxcryptor would work on the device and would substantially improve the encryption. Do any of you use this approach? I am buying a 2 TB NAS to try i…
  14. After nearly $200 of experimenting with different software combinations, I finally have one that works. I can't say it works on the DrivePool system, as I've not gotten it to work there, but that isn't unique to this combination; it's a finicky system. It is just as well: NetDrive2 and Boxcryptor can be resource hogs. NetDrive2 can use Amazon Cloud Drive (unlimited at $60 per year) and map it as a virtual drive. Boxcryptor then maps onto that virtual drive (another virtual drive), giving you file-name encryption (it looks like Unicode Korean, at least 64 characters) and otherwise transparent, random acces…
  15. I had to go with the final solution to get the DrivePool working again, and it's normal once more. With the drive disconnected from the system, I removed it from the pool, and it all worked again. I have it plugged into another system and am working on getting the data off. I will transfer the directory structure if the drive reads correctly. 1.5 TB, Win 7 Pro, 2 gig Seagate in the black standing box. I've had three of this model fail; not a good design.
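
Re: item 4 above (relocating a folder onto the CloudDrive volume and leaving a link behind) — a minimal Python sketch of the idea. The paths are hypothetical, and it uses a directory symlink rather than a hard link, since NTFS does not allow hard links to directories; the linker tool mentioned in the post presumably does something equivalent with junctions.

```python
import os
import shutil

# Hypothetical example paths -- adjust to your own layout.
OLD_PATH = r"C:\Data\Archive"               # where programs expect the folder
CLOUD_PATH = r"D:\CloudDrive\Data\Archive"  # destination inside the CloudDrive volume

def relocate_with_link(old_path: str, cloud_path: str) -> None:
    """Move a folder onto the cloud drive, then leave a directory link at the
    old location so it still opens normally while the data lives on the cloud drive."""
    os.makedirs(os.path.dirname(cloud_path), exist_ok=True)
    shutil.move(old_path, cloud_path)
    # Directory symlink (needs admin rights or Developer Mode on Windows).
    os.symlink(cloud_path, old_path, target_is_directory=True)

if __name__ == "__main__":
    relocate_with_link(OLD_PATH, CLOUD_PATH)
```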
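
Re: items 8 and 9 above — a small sketch, assuming only what the posts describe (odrive leaves zero-length `.cloud` markers for offlined files and `.cloudf` markers for unexpanded folders), that walks a tree and reports which entries are placeholders and which are still local. The mount path is hypothetical.

```python
import os

def classify_odrive_entries(root: str) -> None:
    """Walk a folder and report which entries are odrive placeholders
    (.cloud = offlined file, .cloudf = unexpanded folder) versus local data."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if name.endswith(".cloudf"):
                print(f"[offline folder]  {full}")  # marker for a folder not yet expanded
            elif name.endswith(".cloud"):
                print(f"[offline file]    {full}")  # zero-length marker left after upload
            else:
                size = os.path.getsize(full)
                print(f"[local, {size} bytes] {full}")

if __name__ == "__main__":
    classify_odrive_entries(r"D:\odrive\amazon-cloud-drive")  # hypothetical mount path
```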
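
Re: items 7 and 11 above — since those posts suggest splitting really big files to stay under the provider's 4 to 6 GB limit, here is a plain Python sketch of a splitter. The 4 GiB part size is just a conservative choice; rejoin later by concatenating the parts in order.

```python
import os

PART_SIZE = 4 * 1024**3    # 4 GiB per piece, under the limits mentioned above
BUFFER = 64 * 1024 * 1024  # read/write in 64 MiB buffers to keep memory use low

def split_file(path: str, part_size: int = PART_SIZE) -> list:
    """Split a large file into numbered .partNNN pieces no bigger than part_size."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            written = 0
            part_path = f"{path}.part{index:03d}"
            with open(part_path, "wb") as dst:
                while written < part_size:
                    chunk = src.read(min(BUFFER, part_size - written))
                    if not chunk:
                        break
                    dst.write(chunk)
                    written += len(chunk)
            if written == 0:           # nothing left to read; discard the empty part
                os.remove(part_path)
                break
            parts.append(part_path)
            index += 1
    return parts
```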