Everything posted by wid_sbdp

  1. Just throwing my two cents in: I think Backblaze should be a priority during/after the ACD work post-public release. If you look at prices, Google Drive is obviously the cheapest at roughly $10-15 for "lifetime" unlimited storage off certain auction websites. ACD is not expensive at $60/yr for unlimited, and I believe Backblaze follows the same pricing scheme at $60/yr. Ideally you'd expect ACD to perform better given their size and number of datacenters, so it's probably the smarter priority, but Backblaze should be up there... especially compared to other services.
  2. Have you given a manual reauthorize a shot?
  3. Just detach the drive in the GUI, shut down the two processes, and reboot.
  4. It happens to me rarely. I usually just close the GUI, restart the CloudDrive process, and give it another shot, and it mounts.
  5. Quick question... with encryption enabled on CloudDrive, is that included in the "types" of programs that can cause issues when "Bypass file system filters" is checked, or is it not considered an issue because the two programs are built to be compatible with each other? Basically: checked or unchecked on a DrivePool that's pooling encrypted CloudDrives?
  6. It may just be a fluke (I only have about 9 hours of runtime proving it so far), but you might want to try adding exceptions to Windows Defender (or any other antivirus/antimalware program you use) for the CloudDrive processes. I noticed the Windows antimalware process was always running pretty high on CPU and figured maybe there was something to it, so I added all of the CloudDrive processes to the exception list, and added all of my "virtual cloud drives" to the folder exceptions, as well as my cache drive (a hedged sketch of scripting these exclusions is after this list). Seeing as my initial downloads hit my main drive and are then moved over to their respective locations, I figured they're already being scanned there, so there's no point in wasting resources scanning a bunch of drives, especially when they're technically network drives. Anyway, after 9 hours it's still chugging along at ~600ms latency (compared to probably 7-8,000ms or more normally at this point) and getting great speeds. Maybe there is some bad interaction between scanning and all of these I/O operations? Unfortunately it's hard to say whether that's what helped, because I also loaded up DrivePool at the same time and it's going through its balancing routine... it'll upload a few GB on one CloudDrive, then stop and upload a few GB on the next CloudDrive. It's also possible the non-excessive, steady transfer to one drive at a time is what's making it play nice.
  7. I've done it three times now; resetting CloudDrive to default settings, re-entering my drive key, etc. fixes it 100% of the time until it "gets messy" again 8-10+ hours down the road. So I know it's not my connection (a straight reboot doesn't always fix the issue, and that clears any dormant connections on the network side). I know it's not Google, because outside of dropping to 0 connections for the 45 seconds it takes to start CloudDrive back up and continue uploading, Google has no idea I reset CloudDrive's settings. It's something to do with CloudDrive. I've let it run at 2 upload/download threads and it still eventually "bogs down" (much like it did on the memory side of things a few versions ago). Not saying it's unusable; even at the slow end I'm still averaging 15-25mbs upload (though it does seem to cause some issues on the download side if someone starts watching something that's not in cache), which when running 24/7 is still plenty of bandwidth to upload a good 400-500GB a day or more. But it's still kind of depressing when, after a settings reset, I'm getting 200-300mbs using only 3 upload threads.
  8. It's not uploading 50TB worth of stuff to make the drive. You'll probably have a little bit of initial upload to set up the drive in the cloud, but that's it.
  9. Server 2016 Datacenter. It seems like any time I reset to default settings, things work great again for a while. Does CloudDrive "go through" StableBit servers at all, or is the response time directly to Google's servers? It's just weird how one minute it's 400ms and a few hours later it's 15,000ms. But if I then reset the settings to default, it generally drops down to a couple thousand ms and transfers work well again.
  10. Generally when these services provide a developer API, it's provided with the intent that it's utilized by the developer. Countless other apps use a single API key and work with the provider to ensure stable operation of their application. That's kind of the whole point; the registration to "develop" a new app on, say, ACD is pretty detailed and not something intended for every user of a particular service to be doing. So while it may work in the short term, and people may see better performance, there's probably a decent chance it would catch the eye of Amazon/Google/etc. and they'd be like "whoa, hold on, why are there 900 CloudDrive projects going on right now?" That could lead to strict sanctions being placed on the application's ability to access the API... because if there's an issue, who does Amazon contact now? You? Me? StableBit? We're all registered as running that app...
  11. I don't know what the issue is, or how to fix it, or if it's fixable. Upgraded to 749, rebooted the server, launched CloudDrive. It was blazing even at 5 threads (200mbs+). After about 16 hours it dropped to 15-30mbs and held there (a majority of the time closer to 15mbs). The ONLY weird thing I've noticed is that when I started with the reboot + upgrade, the response time in Provider I/O was around 400ms. At this stage (it has slowly increased the entire time) it's up to 13,800ms... which is ridiculous. I don't know if maybe I'm just trying to feed it too much data without a break? (Is that even a thing?) I mean, I'm generally never under 40GB "to upload." edit: Yep, just for fun... rebooted my server, launched CloudDrive... 250-350mbs upload to Google and back to 400-500ms latency. What could cause that sort of degradation over a period of time?
  12. 749 seems to have made Google Drive a lot more responsive; not sure if you guys included any new optimizations between 745 and that. I/O doesn't seem to be hanging and I'm getting very consistent speeds.
  13. Any updates on the Google Drive optimization? It's pretty bad right now. I've been keeping an average of my upload speed and it's around 16-20mbs. I've tried 10 threads, 20 threads; they just sit there. Right now I have 10 threads: 3 are uploading and the rest (write threads) are just sitting there with durations over 1 min. Every 5-10 min it'll have a "burst" of speed where all of the chunks in the queue upload at once and I'll hit respectable speeds, but then it goes back to 10-15mbs, uploading one or two chunks. Speed tests to various servers all around are good, so I know it's not my bandwidth.
  14. Looks like 745 fixed it. Been uploading for over 5 minutes now and it hasn't broken 100MB(!!!!) of memory use.
  15. This is what it shows when memory basically gets maxed out. It starts with the upload basically freezing (no new connections, the same connections showing for ~30 seconds, same speed), then the GUI turns to this: http://i.imgur.com/GP7AxO6.png
  16. Spoke too soon. Still having issues. The CloudDrive service will get up to 55GB of memory used (which is >90% overall) and the program basically starts to break down: not uploading, not showing the mounted drive, etc.
  17. The newer builds added a new memory-management feature for caching file_ids. I think maybe the combination of an "old" config without those settings and the new features messes things up. I'm back up to ~50GB on GD, but it's not freezing up or anything; it's dropping stuff from memory constantly. I can see it going down and then back up, unlike before, when it was just a beeline to max and then CloudDrive acting weird.
  18. Resetting to default settings seemed to help. Looking around, I noticed the service config file's options were not the same options that the config.default had (a hedged sketch for comparing the two files is after this list). Reset to default, started back up. Only at a couple GB now after about 5 minutes of uploading. I'll be curious to see whether it still climbs to the system limit and starts to hang, or whether it'll flush some of the memory once the uploading is done. edit: Yeah, I'm going to go ahead and say the reset to the default config worked. Watching it work... it got up to 11GB of memory used, dropped down to 9GB automatically, and is going back and forth around 10-15GB. It's definitely doing some pruning of itself now.
  19. Just some more info: if I disable upload threads so there's no uploading, the memory starts to drop by a few MB a second. So without completely stopping the service and starting fresh (which interrupts any access to the virtual drive), the only way to "work around" it at the moment is to pause all uploading manually.
  20. There definitely seems to be an issue. Starting a fresh instance of the service, uploading at 200-250mbs to GD, memory use starts at basically zero and climbs at about 100-150MB/sec (a simple way to log this over time is sketched after this list). Once the server gets to around 85-90% memory utilization (the CloudDrive service using 45-55GB), it slows to a crawl and the program's GUI starts to act all funky.
  21. I'm on build 742 and it's currently using 55GB of memory.
  22. I don't know what you guys did between 730 and 742.. but I went from ~8-16mbs upload on Google Drive to like 250mbs. Much much much better.
  23. Just want to clarify before I do anything: I need to reformat the drive that my cache currently resides on... what's best practice? Are there any steps that need to be taken in advance to prep the CloudDrive so that there's minimal risk of impact? Will it automatically re-download files to fill the cache when I remount it? Thanks!
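
Regarding post 6: below is a minimal sketch of scripting the Windows Defender exclusions described there, using the Add-MpPreference PowerShell cmdlet from Python. The folder paths and process names are assumptions for illustration, not official StableBit names; substitute whatever your install actually uses, and run from an elevated prompt.

    # Hypothetical sketch, not an official StableBit procedure: add Windows
    # Defender exclusions for the CloudDrive processes and drive folders via
    # the Add-MpPreference cmdlet. All paths/names below are assumptions.
    import subprocess

    EXCLUDED_PATHS = [
        r"D:\CloudDriveCache",     # local cache drive (assumed path)
        r"X:\\",                   # a mounted virtual cloud drive (assumed letter)
    ]
    EXCLUDED_PROCESSES = [
        "CloudDrive.Service.exe",  # assumed service executable name
        "CloudDrive.UI.exe",       # assumed GUI executable name
    ]

    def run_ps(command):
        # Run one PowerShell command; requires an elevated prompt.
        subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

    for path in EXCLUDED_PATHS:
        run_ps(f"Add-MpPreference -ExclusionPath '{path}'")
    for proc in EXCLUDED_PROCESSES:
        run_ps(f"Add-MpPreference -ExclusionProcess '{proc}'")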
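
Regarding post 18: a quick way to see which options differ between the live service config and the shipped defaults is an ordinary text diff. This is a hypothetical sketch; the two file paths are placeholders, not the actual StableBit install locations.

    # Hypothetical sketch: diff the live service settings against the shipped
    # defaults to spot stale or missing options. Both paths are placeholders.
    import difflib
    from pathlib import Path

    live_path = Path(r"C:\path\to\Service.config")              # placeholder
    default_path = Path(r"C:\path\to\Service.config.default")   # placeholder

    diff = difflib.unified_diff(
        default_path.read_text().splitlines(),
        live_path.read_text().splitlines(),
        fromfile=str(default_path),
        tofile=str(live_path),
        lineterm="",
    )
    print("\n".join(diff))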
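
Regarding post 20: a simple way to put numbers on the memory growth is to log the service's resident memory every few seconds. Here is a minimal sketch using the psutil package; the process name is an assumption, so substitute whatever Task Manager shows for the CloudDrive service.

    # Minimal sketch: sample a process's resident memory every 10 seconds so
    # growth (e.g. ~100-150MB/sec) shows up clearly. Process name is assumed.
    import time
    import psutil

    TARGET_NAME = "CloudDrive.Service.exe"  # assumed; adjust to the real name

    target = None
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == TARGET_NAME.lower():
            target = proc
            break
    if target is None:
        raise SystemExit(f"{TARGET_NAME} not found")

    while True:
        rss_gb = target.memory_info().rss / 1024**3
        print(f"{time.strftime('%H:%M:%S')}  {rss_gb:.2f} GiB")
        time.sleep(10)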