kenwshmt

Members · 34 posts · Days Won: 3
kenwshmt last won the day on September 23, 2017, and had the most liked content!

kenwshmt's Achievements: Advanced Member (3/3) · Reputation: 11
  1. I have been using robocopy with move and overwrite. It 'deletes' the file from the cloud drive: the file is no longer shown, and the 'space' is reported as free. So the mapping is being honored to some degree, even as a read-only path. If there were some way to scan the Google Drive path and 'blank' entries (as in, bad-map them locally in the BAM map) to force a deletion of the relevant files from the Google path, that would do wonders. GlassWire reads that I'm getting about 2 MB/s down; that is nowhere near the maximum, but it works out to about 100 GB a day. (A sketch of the robocopy move is after this list.)
  2. I need some assistance, fast. About a year ago, Google changed their storage policy. I had exceeded what was then the unlimited storage by about 12 TB and immediately started evacuating it, but I had only got it down 10 GB from 16 when Google suspended me into read-only mode: it can delete, but it cannot write. I have continued to download the drive, and it is now down to about 7 TB as far as the pool drive is concerned, but unless DrivePool has a mode to 'bad map' and delete the sector files from the pooled drive, the Google side still sees 12 TB, way over its capacity. I do appreciate that CloudDrive is discontinuing Google Drive support, and I still have some time, but I need to delete as much from the Google path as possible, and I can't do that without being able to write back to it, which I cannot. A 'delete bad/read-only but empty' option would be a big help.
  3. Am I supposed to use that only for offlining? Find the relevant directories in the CloudDrive pool path and assign them offline? That is kind of clumsy.
  4. Can someone explain how this is supposed to work? If the cloud drive uses a real drive, but that drive is assigned to a pool, the pool immediately fills it up and makes the cloud drive slow down. The only thing I can think to do is set its allocation to nothing in DrivePool so it will start to evacuate the drive, but even then I won't be able to write to it.
  5. How can a page dedicated to watching ants carry pieces of plants up and down a tree not have a link to GlassWire? GlassWire: a bandwidth monitor; see what's going out and in, over time. https://www.glasswire.com/
  6. Not intending to use this for games, but for most directories it will relocate your directory path, by hard link, to the virtual one used by CloudDrive. So far it doesn't work with any other active shares of that directory (Dropbox, MEGAsync). I haven't attempted to disable them, run the hard linker first, then run them again to see if they work at that point. I have been actively moving several 'passive' paths (ones not in use) this way; they still answer at the old path through the link and open normally, but the data is going to the CloudDrive path. I DID use it on MEGAsync paths while MEGAsync was running and it broke them; however, if MEGAsync is killed before the process, they may re-establish. https://www.traynier.com/software/steammover (A sketch of the relocate-and-link trick is after this list.)
  7. What are the best practices on a new install? I have a system with a 1 TB work partition; I created a cloud drive with that same 'local cache' size and am currently migrating files to it. The local cache is 1 TB, but the cloud drive is 10 TB. My question is: how can I mark directories 'offline' without physically moving them? Retrieval speed doesn't matter to me for anything I've marked offline, but I don't see that as an option. I'm wanting something like Dropbox's 'offline' setting; directory level would be fine, so long as moving an online file to an offline path with a file move doesn't mean it has to be uploaded again. Suggestions?
  8. The system the drive pool is on is increasingly problematic. I think I can run it long enough for the migration to get off of the internal drives. How can I transfer the whole set to another machine?
  9. REALLY liking odrive. While it's a bit 'all the eggs in one basket', it's great for offlining stuff you need to keep but will probably never retrieve again. It definitely has a file size limit: it seems to have problems with Amazon Cloud at about 6 GB, so if you use it there you might split files to less than 4 GB or so. It's not perfect, and beware updates; they can invalidate your encryption, and that's a messy 'remove / reattach'. It's really messy if you use the 'out of the odrive' settings; not update-friendly at all. So live in the odrive, in as few encrypted paths as possible. Between two Amazon accounts I've both backed up and offlined nearly 50 TB, and still going. If you make system images, use something that can break them into 4 GB chunks (EaseUS or the like); if it's a Windows image, use a file splitter (see the sketch after this list). I haven't attempted anything with a DrivePool drive yet. I would love to disable all the local duplication and have odrive simply copy it to Amazon, but I've not tried this yet. It can do simple mirroring, and it's very fast: its upload can saturate your bandwidth, if the receiver is capable of it. I have a 100 Mbit pipe, and I've done better than 500 GB a day.
  10. I wasn't thinking of using it in the DrivePool path, more as a backup destination. If DrivePool knew what the additional suffixes mean (.cloud for a file, .cloudf for a folder), then the content would be considered backed up and not take any space locally; or more to the point, still be considered backed up once it became zero-length and was renamed. I don't know how things are 'restored' from a duplicate backup path in DrivePool, but this would require launching the retrieval by file association; odrive does allow command-line retrieves. The file is restored, and renamed back.
  11. If there were a tweak you could make to DrivePool to be odrive-aware, that would be wonderful. This is an unmapped directory; double-clicking on it retrieves the next level of the directory structure and any download markers it contains: Spaces.cloudf. And this is an offline file: NetDrive2_Setup_2_6_1_689.exe.cloud. Retrieval only works locally: while you can share a sync path, you can only contribute to it; once it's offlined you will have to retrieve it from a station running odrive. Encryption / passwords are done at the directory level. (A sketch that lists these markers is after this list.)
  12. Yes. Think of it as an un-syncing Dropbox. You install it wherever; all of the stations can contribute to the same path, but if they aren't configured to retrieve anything, all you will see on them is a download marker. These are all using the same account. It does no duplicate removal or compression; for that you need something like Arq. I've managed to clean off nearly 1 TB so far to Amazon Cloud.
  13. I started using this just this week, and it's $100 a year, so there is that. The professional service does a 'delete' after a successful upload: put 10 GB in it and, once it has succeeded, it will delete that 10 GB and replace it with download markers. There does seem to be a 4, 6, or 8 GB file size limit, so be sure to split the really big files. https://www.odrive.com/account/myodrive
  14. Backup is backup; whatever makes it. Carry on with one method, but if another happens to work too, why not? It's another backup, and one isn't necessarily superior, as things like Amazon Cloud are limitless and both can be used concurrently. I will mention whether it is a folly or not; I will have one Tuesday.
  15. Synology NAS: https://www.synology.com/en-us/dsm/5.2 'Also, Cloud Sync now offers one-way sync options, making things easier when backing up data to a public cloud or vice versa, while data encryption guarantees files sent to the public cloud remain for your eyes only.' NAS drives with DSM (DiskStation Manager) 5.2 and later can do one-way transfer to cloud services, with encryption; and if that's not enough, I'm sure something like Boxcryptor would work on the device and substantially improve the encryption. Do any of you use this approach? I am buying a 2 TB NAS to try it out. NetDrive 2 and Boxcryptor have been working wonderfully, but I am finding I need a non-computer approach, and if a NAS can back up and delete, so much the better.
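Re post 1: a minimal sketch, in Python, of the robocopy move described there. The paths are hypothetical; /MOV deletes each source file after a successful copy, which is what frees the space on the read-only cloud path.

```python
import subprocess

# Hypothetical paths: point src at the cloud/pool folder being evacuated.
src = r"G:\CloudDrivePool\Media"
dst = r"D:\LocalLanding\Media"

# /MOV deletes source files after a successful copy; /E recurses into
# subdirectories; /R:2 /W:5 keeps retries short on flaky cloud reads.
result = subprocess.run(["robocopy", src, dst, "/MOV", "/E", "/R:2", "/W:5"])

# robocopy exit codes 0-7 mean success; 8 and above mean failures occurred.
if result.returncode >= 8:
    raise RuntimeError(f"robocopy reported failures (code {result.returncode})")
```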
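Re post 6: a minimal sketch of the relocate-and-link trick Steam Mover performs, with hypothetical paths. One note: for whole directories, Windows uses NTFS junctions rather than per-file hard links, which is what mklink /J creates here.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical paths: a rarely used 'passive' folder and its new home on
# the CloudDrive volume.
old = Path(r"C:\Data\Archive")
new = Path(r"X:\CloudDrive\Archive")

shutil.move(str(old), str(new))  # relocate the real data first

# Leave a junction at the old location so existing paths keep working.
# mklink is a cmd.exe built-in, so it has to be invoked through cmd /c.
subprocess.run(["cmd", "/c", "mklink", "/J", str(old), str(new)], check=True)
```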
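Re posts 9 and 13: a minimal sketch of a file splitter that breaks a large image into 4 GiB parts, to stay under the upload size limits mentioned there. The image path is hypothetical; rejoining is a simple concatenation of the parts in order.

```python
from pathlib import Path

PART_SIZE = 4 * 1024**3   # 4 GiB parts, under the ~6 GB limit seen above
BUF_SIZE = 64 * 1024**2   # copy in 64 MiB reads to keep memory use flat

def split_file(path: str) -> None:
    """Split a large file into numbered PART_SIZE pieces next to the original."""
    src = Path(path)
    with src.open("rb") as f:
        part = 0
        while True:
            written = 0
            out_path = Path(f"{src}.part{part:03d}")
            with out_path.open("wb") as out:
                while written < PART_SIZE:
                    chunk = f.read(min(BUF_SIZE, PART_SIZE - written))
                    if not chunk:
                        break
                    out.write(chunk)
                    written += len(chunk)
            if written == 0:
                out_path.unlink()  # drop the empty trailing part
                break
            part += 1

split_file(r"D:\Images\system-backup.wim")  # hypothetical system image
```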
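Re posts 10 and 11: a minimal sketch that walks a backup path and lists odrive download markers, using the suffixes described there (.cloudf for an offlined folder, .cloud for an offlined file). The root path is hypothetical.

```python
from pathlib import Path

root = Path(r"P:\PoolBackup")  # hypothetical backup destination path

# odrive replaces offlined content with tiny stub files: "Name.cloudf"
# stands in for a folder, "Name.ext.cloud" for a file.
for marker in root.rglob("*"):
    if marker.suffix == ".cloudf":
        print("offlined folder:", marker.with_suffix(""))
    elif marker.suffix == ".cloud":
        print("offlined file:  ", marker.with_suffix(""))
```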