Covecube Inc.

jaynew

Members
  • Content Count

    11
  • Joined

  • Last visited

About jaynew

  • Rank
    Member

Recent Profile Visitors

180 profile views
  1. jaynew

    Longevity Concerns

    Ah! Good to know. I've followed the forums occasionally. Didn't realize they have publicly stated plans "in case". Thanks!
  2. jaynew

    Longevity Concerns

    Hey all. ~1.5 year CloudDrive user here. All the usual praise: greatest program since the original bread-slicing program, Plex, GSuite, etc. My question is, as I am set to embark on a backup to a completely separate secondary GSuite account: what expectations do we all have in case CoveCube closes up shop and then Google changes something API-related? I recall about a year ago when we were all having issues connecting to Google Drive due to numerous API errors. A relatively quick update from the team fixed it and we were back up and running. But since then I've had reservations about having two separate copies, both chunked/encrypted, forcing me to rely on this amazing program. Where does everyone else stand on this topic? FYI: if my data didn't total ~55 TB (or if 10 TB drives weren't so expensive) I would totally have a different medium/local backup. I know - but I'm frugal
  3. @ccoleman I get that often. For whatever reason, my 16TB drive does this frequently and it takes almost 2 hours to finish. Thankfully, every time it does happen, it comes back. Sick of these mini-heart-attacks, though. @Christopher Sad to report that with 10 different GDrive encrypted containers mounted, restarting my server takes forever, and almost always results in the same situation as @JohnKimble. Perhaps CloudDrive can't gracefully unmount 10 drives before Windows forces and completes a restart. Quite unnerving.
  4. Still not anywhere close to what I hoped to get, @Colbalt503. My expectation was that *my* bottlenecks would be my server's hardware & download/upload speeds (200/35). But hey, Google shouldn't have those limitations! Well, I did hit 1.08 Gbps uploading from cache (again, GDrive > GDrive), but that was only while the VM wasn't downloading at the exact same time. The bottleneck here, IMO, may be CloudDrive more than anything: downloading to a cache/prefetcher, then transferring to either the same physical disk (or a separate local disk) into the cache/"To Upload", and then finally uploading. My overnight test resulted in ~78 GB/hour transferred, only slightly faster than my last tests. And that was with completely revamped virtual hardware (1TB HDD cache drive, 6 vCPU, 6GB RAM). Unfortunately, I don't have many other choices. One way or another, 40TB has to be duplicated, so I'll use this method until the credits dry up. -Jason *edit for spelling, punctuation
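The post above quotes ~78 GB/hour and a 40 TB target. A quick back-of-the-envelope sketch of what that rate implies for total duplication time (the steady-rate assumption is mine; real-world throughput will vary, and the commonly cited 750 GB/day per-account upload cap on Google Drive would stretch things further on its own):

```python
# Rough duplication-time estimate, assuming the ~78 GB/hour rate
# from the overnight test holds steady the whole way through.
TOTAL_GB = 40_000          # 40 TB, decimal units
RATE_GB_PER_HOUR = 78

hours = TOTAL_GB / RATE_GB_PER_HOUR
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) at {RATE_GB_PER_HOUR} GB/hour")
```

That works out to roughly three weeks of continuous transfer, which is consistent with planning to run it "until the credits dry up".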
  5. jaynew

    How to transfer?

    It appears to be the best way, from what I've been reading. There's a thread over in the 'Providers' forum where I laid out a lot of the settings I'm using (and currently tweaking) with Google Cloud Platform (their version of a VPS): http://community.covecube.com/index.php?/topic/2904-help-moving-65-tb-from-1-gdrive-to-the-same-gdrive/
  6. (Sorry for the long post, but your situation has been my life for the past 2 days.) I'll chime in, because I'm in the same boat as you. My reasons were a little different. Coming from 10 local HDDs, I had the bright idea to create 10 CloudDrive "drives" on one GSuite account, to match the content that had been stored locally. This was a mistake. Attaching, pinning data 10x, each using 5 threads to upload/download to/from the same account. Oh, and the OCD of not having the drives in order in the CloudDrive UI unless they are attached in said order? Nightmare. I decided to consolidate.

     After a lot of research here and on reddit, the best solutions I've seen are to either rent a VPS (as per CoveCube guru Christopher, among others) or, in line with what you said, Google Cloud Platform. I took the plunge yesterday around 4 PM EST with GCP, and boy am I glad I did. Aside from the $300 credit they offer, it's a helluva learning curve to get everything working just right. With most VPS services I've seen, you pick what you want and sign up. With GCP you can create new instances in real-time to get everything exactly the way you want/need. Six virtual machine instances later, I think I have settings that appear to work well. To be fair, I'm actually still testing settings to see what can work better. ***I'd absolutely love to hear from others who have done this. Maybe I don't have to re-invent the wheel here.***

     Current VM settings (Compute Engine):
       • 2 vCPU cores (they do max out, but 4 didn't = faster uploads, I found)
       • 6 GB RAM (maybe too much; never goes over ~60% usage)
       • 50 GB SSD for Windows Server 2016 (cost is the same for all Windows Server versions; 50 GB is the minimum allowed)
       • 1 x 375 GB "local" SSD scratch disk for the CloudDrive cache

     This configuration costs $196.18/mo, or about $6.50 a day.

     Couple of things: that 375 GB "local" drive is AWFUL. Apparently when you shut down the VM, you lose the drive and can never reload the VM again.
     I read about it, understood it, tested it, and will not shut down again. Tired of reinstalling and reconfiguring Windows. The scratch drive is also SLOW, be it SCSI or NVMe. Google claims it's local to the exact vCPU you're using; I call bull on that, or it's garbage equipment. Transfer speeds from the OS SSD to the NVMe drive are about 25 MB/s. My personal PC transfers from its HDD to its NVMe SSD at a steady 100+ MB/s, and NVMe > NVMe (copy to self) is also pretty awful on GCP.

     That all being said, here's my current dilemma. I ran a transfer overnight, CloudDrive > CloudDrive through the VM. It started at 1:37 AM EST and completed at 3:18 PM EST for 957 GB of data. Windows "copy" speeds go from 50 MB/s on the download to 0, then back. The CPUs max out (not that I think that matters; even with 4 vCPU cores I didn't see a difference in total transfer time while not maxing out the vCPUs). Overall the transfer rate worked out to just under 70 GB/hour. That seems kinda slow to me. I've tested Source CloudDrive > OS SSD "cache" > local "cache" NVMe > Destination CloudDrive, and speeds did not improve. Still tweaking these with every mini-transfer, so please chime in if you have any input on these current settings:
       • Prefetching on the source drive is currently on: 1 MB trigger, 1024 MB forward, 1800 sec.
       • Cache for source is 50 GB FIXED; cache for destination is set at 100 GB FIXED (although source cache is listed as 0 used, all data is in prefetch; destination shows all 100 GB used).
       • 20 download threads for source, 2 upload, with "Background I/O" OFF.
       • 20 upload threads for destination, 5 download, with "Background I/O" ON.
       • No pinning of metadata or directories for SOURCE. I found that it would randomly start pinning data and kill my total overall transfer time.

     Unfortunately, the bottleneck appears to be downloading and uploading at the same time. When I test mini-transfers DOWN to the cache drive, I get very reasonable speeds, averaging 140 GB/hour. When I test those same mini-transfers UP, I can actually hit 1 Gbps. But as a straight cloud > cloud transfer, speeds max out at about 250 Mbps until the cache is full. Once the cache is full, things do pick up a bit.

     Again, sorry for the long post. Please feel free to ask questions. I'll be checking the forums here religiously as I sit here for the next 3 days finishing as much of this transfer as I can (end-of-vacation fun!) -Jason
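The "just under 70 GB/hour" figure in the post above checks out against the quoted start and end times. A quick verification (the calendar date is arbitrary; only the clock times matter):

```python
from datetime import datetime

# Sanity-check the overnight transfer rate: 957 GB moved
# between 1:37 AM and 3:18 PM EST the same day.
start = datetime(2018, 1, 1, 1, 37)   # date is arbitrary
end = datetime(2018, 1, 1, 15, 18)

hours = (end - start).total_seconds() / 3600
rate = 957 / hours
print(f"{rate:.1f} GB/hour over {hours:.2f} hours")
```

That comes to just under 70 GB/hour over roughly 13.7 hours, matching the figure quoted in the post.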
  7. jaynew

    How to transfer?

    @17thspartan - I haven't been able to get Google Drive to allow me to "Make a Copy" of any folder, only files. That's with a Gmail account, a legit paid GSuite account, and a few throwaway "purchased" accounts. In fact, in testing this theory a few days before your post (heavy research included), it seemed this was a serious point of contention people have with Google Drive: the inability to make a "copy" of folders, only files. And using a renamer to get rid of "Copy of " on 511 chunks of one large, 10GB StableBit CloudDrive encrypted file was arduous. Is there a hidden setting, or something I'm missing, here? Your method certainly seems like the best, but when I had this same idea and tried to flesh out the procedure, it turned out to be a dud. I also couldn't transfer ownership of folders between "organizations". Also frustrating. Thanks. -Jason
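The bulk rename described in the post above (stripping the "Copy of " prefix Google Drive adds to copied chunk files) can be scripted rather than done with a GUI renamer. A minimal sketch, assuming the copied chunks have been synced to a local folder; the function name and the skip-if-target-exists safeguard are my own choices, not anything CloudDrive provides:

```python
from pathlib import Path

def strip_copy_prefix(chunk_dir: Path, prefix: str = "Copy of ") -> int:
    """Rename every file in chunk_dir whose name starts with `prefix`,
    removing the prefix. Returns the number of files renamed."""
    renamed = 0
    for f in chunk_dir.iterdir():
        if f.is_file() and f.name.startswith(prefix):
            target = f.with_name(f.name[len(prefix):])
            if not target.exists():   # never clobber an existing chunk
                f.rename(target)
                renamed += 1
    return renamed
```

Usage would be something like `strip_copy_prefix(Path(r"G:\copied-chunks"))`, where the path is whatever local folder the copied chunks landed in.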
  8. jaynew

    access time

    Has anyone found a solution to this issue? I have the exact same problem. However, I don't know that it is tied to the "prefetch time window" option, as folders that I accessed two or three days (and two or three detach/attaches) ago still load up near-instantly. But folders I've never clicked into in Windows Explorer still take 1 - 2 minutes to show contents. This is with both my original Prefetch Time Window setting of 30 seconds and the 180 seconds recommended in this thread.
  9. jaynew

    File Explorer hanging

    @bnbg - ever find a solution to this? I've got the exact same issue on a brand new install of Server 2016 Standard and it's killing me. It makes the entire system unusable to surf around. Did you also find that once you load a folder, the contents then load quickly every time after that initial 1 - 2 minute load time? Thanks -Jason
  10. Gotcha. Thanks for the quick, detailed response. I've read everything you mentioned in 20 places before, but in the semi-annoyed panic of deleting essentially 2 weeks' worth of work, I was reviewing every possible scenario to merely restore the files quickly without having to re-upload. Looks like we're back to the drawing board! Thanks, again!
  11. Hey all. Hoping someone can shed some light on what's going on in my situation. I'm currently evaluating CloudDrive and have been uploading like a madman during my trial period, testing all manner of things to see if this is the product for my needs. Last night I accidentally let a folder synchronization program delete ~4 TB of content from Google Drive. While exceedingly frustrating (for upload time, not data loss; it was all duplicate data), I'm noticing some unusual behavior. After the data was deleted, I noted that neither StableBit CloudDrive *nor* Google Drive reflects the new amount, which should be 1.06 TB used. However, Windows Server 2012 does report that exact amount used, with 8.93 TB free on a 10 TB drive. Contradictory information on the same drive. My questions are:

     1. Does data deleted through an attached CloudDrive drive ever really go away?
     2. Can it be "undeleted" without traditional recovery methods (to another disk, that is)?
     3. Had the deletion been intended rather than accidental, how would one "reclaim" this previously used space?

     While I have not clicked on Drive Options > "Cleanup", I would love to avoid having to re-upload the data, if possible. Some information:
       • The Recycle Bin (which the sync program was told to use) is empty. Obviously 4 TB is a lot for a bin.
       • I allowed the program "Recuva" to search the attached drive, and after 30 minutes it found 391 out of some 3,000 missing files. I did not proceed.
       • After the deletion, I stopped all uploads and detached the drive after some cached data completed uploading.

     Edit: I also looked through my GSuite admin account and tried to "restore" any data from the 30-minute window before the files were deleted. This seemed to have done nothing.