Everything posted by eldite

  1. Are you talking about the block index database build? ... "this is a one-time operation..." .... I'm on a very quick connection and have about 20TB on a 64TB drive. I didn't sit and watch it, but I think it was a shade under 2 hours.
  2. Turns out it's because the throttling is now hidden. Enable Information-level logging on the ApiHttp module and you can see that it is the throttling message, but since the throttling message has become 'Information' level and the turtle picture is gone, there is no way to know. I suggest that the throttling message be made a warning again, or that the 'Server is temporarily unavailable...' message also be moved to Information. I'd prefer the former. Personally, I liked the turtle. Who doesn't like turtles?
     21:26:41.0: Information: 0 : [ApiHttp:202] Server is throttling us, waiting 1,792ms and retrying.
     21:26:41.0: Warning: 0 : [ApiHttp:115] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:26:41.0: Information: 0 : [ApiHttp:115] Server is throttling us, waiting 1,137ms and retrying.
     21:26:41.0: Warning: 0 : [ApiHttp:6] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:26:41.0: Information: 0 : [ApiHttp:6] Server is throttling us, waiting 1,162ms and retrying.
     21:26:41.2: Warning: 0 : [ApiHttp:132] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:26:41.2: Information: 0 : [ApiHttp:132] Server is throttling us, waiting 1,716ms and retrying.
     21:27:42.8: Warning: 0 : [ApiHttp:190] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:27:42.8: Information: 0 : [ApiHttp:190] Server is throttling us, waiting 1,477ms and retrying.
     21:27:42.8: Warning: 0 : [ApiHttp:132] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:27:42.8: Information: 0 : [ApiHttp:132] Server is throttling us, waiting 1,912ms and retrying.
     21:28:54.0: Warning: 0 : [ApiHttp:115] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     21:28:54.0: Information: 0 : [ApiHttp:115] Server is throttling us, waiting 1,953ms and retrying.
  3. Unfortunately no, it's not that. It can be replicated easily. It happens every day and drops the drive out pretty quickly if there is any sustained I/O. I just reconnected it 5 minutes ago and it's already happening again. Turning the thread count down makes it less likely to occur.
     22:26:18.0: Warning: 0 : [ApiHttp:61] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     22:26:18.1: Warning: 0 : [ApiHttp:188] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     22:41:54.3: Warning: 0 : [ApiHttp:198] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     22:41:54.3: Warning: 0 : [ApiHttp:41] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     22:44:07.0: Warning: 0 : [IoManager:89] Error performing Read I/O operation on provider. Retrying. The read operation failed, see inner exception.
     22:44:07.1: Warning: 0 : [IoManager:89] Error processing read request. Thread was being aborted.
     22:44:07.1: Warning: 0 : [IoManager:89] Error in read thread. Thread was being aborted.
     I suspect it's a Google limitation, but it would be nice to know what that limitation is.
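     For what it's worth, this is the back-of-the-envelope reasoning behind turning the thread count down (illustrative Python only; the per-request time and the limit are my guesses, not anything published by Google or StableBit):
        # Back-of-the-envelope: aggregate request rate vs. an assumed per-user limit.
        # All numbers here are assumptions for illustration, not documented limits.
        threads = 20                # download/upload threads configured in CloudDrive
        seconds_per_request = 1.5   # guess at one chunk round trip under load
        assumed_limit = 10          # requests per second tolerated per user (pure assumption)

        rate = threads / seconds_per_request
        verdict = "over" if rate > assumed_limit else "under"
        print(f"~{rate:.1f} requests/s with {threads} threads ({verdict} the assumed {assumed_limit}/s limit)")
     Halve the thread count and the same arithmetic puts you under that assumed limit, which lines up with the drive dropping out less often when I turn the threads down.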
  4. Hi, I've noticed frequent drive unmounts recently. The log looks like this - the drive still works, but very poorly, and then eventually it just drops the drive entirely. It seems like some Google-side rate limiting, but I'm interested in whether anyone else is seeing this, or has any ideas for a way to work around it? Cheers
     18:03:34.6: Warning: 0 : [ApiHttp:215] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:03:34.6: Warning: 0 : [ApiHttp:166] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:03:34.6: Warning: 0 : [ApiHttp:220] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:03:34.6: Warning: 0 : [ApiHttp:209] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:209] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:205] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:166] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:212] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:219] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:02.4: Warning: 0 : [ApiHttp:206] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:22.7: Warning: 0 : [ApiHttp:203] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:23.5: Warning: 0 : [ApiHttp:216] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:05:24.3: Warning: 0 : [ApiHttp:42] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:58.9: Warning: 0 : [ApiHttp:215] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:58.9: Warning: 0 : [ApiHttp:218] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:58.9: Warning: 0 : [ApiHttp:208] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:58.9: Warning: 0 : [ApiHttp:216] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:59.0: Warning: 0 : [ApiHttp:203] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:06:59.4: Warning: 0 : [ApiHttp:49] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.2: Warning: 0 : [ApiHttp:219] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.2: Warning: 0 : [ApiHttp:203] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.2: Warning: 0 : [ApiHttp:166] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.2: Warning: 0 : [ApiHttp:213] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.2: Warning: 0 : [ApiHttp:220] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:07:27.6: Warning: 0 : [ApiHttp:215] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:08:02.2: Warning: 0 : [ApiHttp:205] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:08:02.3: Warning: 0 : [ApiHttp:215] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
     18:08:02.3: Warning: 0 : [ApiHttp:208] Server is temporarily unavailable due to either high load or maintenance. HTTP protocol exception (Code=ServiceUnavailable).
  5. Thanks, yep, 45TB. We have gigabit-capable fibre to most urban houses now in NZ. It's fantastic, and it means using my CloudDrive is faster than many of my external hard drives. I did a small test using MultCloud to copy between Google Drive accounts and it seems to work well. I'll try it with my primary cloud disk. It's $7 per month, but I can live with that. I've really started to rely on CloudDrive (and as a result Google Drive) now; I'm storing all our photos, videos, PC backups, a number of virtual machines, database backups, etc., and of course a fair bit of media. So ensuring I have a redundant copy of the data in another account is quite important to me.
  6. Found my answer to #2 here: http://community.covecube.com/index.php?/topic/2891-migrating-data-from-one-gdrive-to-another/?hl=%2Bcopy+%2Bclouddrive Shame you can't mount both drives on the same system. Having the ability to "re-id" a drive would be very nice, but I understand the challenge from a development perspective. I might have to run another copy of CloudDrive in a VM. Any info on the drop-outs would still be appreciated.
  7. Hi, a couple of questions.
     1. Recently I've noticed my CloudDrive has dropped out a lot more than usual. I was able to partially mitigate this by increasing the retry count in the settings to 30. The message says something like "cloud drive was not able to download data from the provider". The log usually has a lot of this in it: "[ApiGoogleDrive:44] Google Drive returned error (userRateLimitExceeded): User Rate Limit Exceeded". It used to be very reliable, but recently much, much less so. It drops out a lot with heavier usage. Has anyone else been experiencing this?
     2. I run two cloud drives on Google, under separate accounts, which synchronize from one to the other for redundancy. When I upgrade CloudDrive, I attach one disk, test it for a week, then attach the other and re-enable the sync. Recently my ISP called up and said I'm running close to the line on their "fair use policy" (45TB last month).
     Q2a) If I do a direct Google Drive to Google Drive copy of the raw files to another account, then attach CloudDrive to that replicated drive, will it work?
     Q2b) Does anyone know of a service or a way to do a Google Drive to Google Drive copy between accounts?
     Cheers
  8. With your connection, set your chunk size to 20MB, set your minimum download size to 20MB, use a 400 to 800MB prefetch and 20 threads, and you should get solid download performance (100Mbps+). It will take longer to load initially on startup, but then it will play smoothly throughout the video. Make sure you've got a solid SSD cache disk. Rough numbers behind that recommendation are sketched below.
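     Just a sketch of the arithmetic - the per-chunk transfer time is an assumption, everything else comes from the settings above:
        # Rough feel for why these settings keep a 100Mbps+ stream fed.
        # seconds_per_chunk is an assumption, not a measured CloudDrive figure.
        chunk_mb = 20
        threads = 20
        prefetch_mb = (400, 800)
        seconds_per_chunk = 8      # assumed time to pull one 20MB chunk on a busy link

        aggregate_mbps = threads * chunk_mb * 8 / seconds_per_chunk
        lookahead = tuple(p // chunk_mb for p in prefetch_mb)
        print(f"~{aggregate_mbps:.0f} Mbit/s aggregate across {threads} threads")
        print(f"prefetch window of {lookahead[0]}-{lookahead[1]} chunks queued ahead of playback")
     Even if individual requests get throttled, 20 chunks in flight leaves plenty of headroom above the 100Mbps target, and a 400-800MB prefetch is 20-40 chunks of lookahead to ride out the slow ones.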
  9. eldite

    File copy very slow

    Is it possible your cache disk has less than 5GB free? When that happens, CloudDrive throttles the copy speed down to the connection speed.
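     If you want a quick way to confirm it, something like this will tell you whether you're under that 5GB line (Python; the drive letter is just a placeholder for whichever disk holds your CloudDrive cache):
        # Quick check of free space on the cache drive.
        import shutil

        cache_drive = "D:\\"   # placeholder - set to your cache disk
        free_gb = shutil.disk_usage(cache_drive).free / 1024**3
        if free_gb < 5:
            print(f"Only {free_gb:.1f} GB free on {cache_drive} - expect throttled copy speeds")
        else:
            print(f"{free_gb:.1f} GB free on {cache_drive} - cache space looks OK")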
  10. eldite

    Memory Leak

    Today I see this comment in the changelog:
     1.0.0.800
     * [D] [issue #27311] Fixed a memory leak in the prefetcher.
     Thank you!!
  11. It's not your settings, it's a problem with the software. All you can do is wait until it's fixed. I think it is a problem with disks created in the latest version, or with large disks. I'm using 1.0.0.784. I have an old virtual disk created with version 1.0.0.4xx and that disk gets 40-50Mbit download. I was not happy with that speed because I've got a 100Mbit connection. I had read that you could get MUCH faster speeds with a 20MB provider chunk size, so I decided to upgrade to the latest version and create a new disk with 20MB chunks. It is super slow (10Mbit) downloading from my new disk. I noticed it was using a lot of CPU (and it's a good CPU), so I tried an unencrypted drive with 20MB chunks too. That's the same speed (10Mbit) and uses a bit less CPU. The virtual disk that was created under the old version still pulls 40-50Mbit on the latest version, but the virtual disks created under the latest version max out at 10Mbit. My conclusion is that disks created with the latest version are not usable. I would give it a few months and try again then.
  12. eldite

    Memory Leak

    More of a bug report than a forum question. There is a memory leak when the drive dismounts (which happens all the damn time, by the way). Attached is PoolMon imagery showing CLFS (Cloud File System Driver) taking up 7GB of my 8GB of RAM. It's hard to get this screenshot because the PC is basically unusable when you have a non-paged pool taking up your entire RAM. This is on the latest version (784 at the time of writing), but I have also experienced exactly the same thing on all other versions. This means that when connectivity fails, it then uses up all your RAM and freezes the PC. I see other people are reporting the same thing. Hopefully it can be prioritized with this photographic evidence. These ongoing issues (retry count exceeded, memory leak, corrupt disks after upgrade) are slowly wearing down my love of this product. Thanks guys
  13. I was on an old version, 1.0.0.4xx I think. I upgraded yesterday (the 24th) and a similar thing happened. All data on the drive became totally corrupt, i.e. none of the files would open successfully. After clearing the cache and then detaching and re-attaching, the drive no longer mounts in Windows. Tried re-authing too. CloudDrive connects, but Windows shows it as a brand new disk and Windows Disk Management asks me if I want to initialize it. Interestingly, I had 2 drives, under separate accounts; one is encrypted, one is not. This was the encrypted one; the other one seems okay. It appears to be toast. Luckily I have it all backed up, but it will be a 25-day upload for me to restore it... not really looking forward to that.
  14. Yes, not having to click "Retry" to remount would be a BIG feature. It could auto-retry after 5, 10, 20, 30 minutes, 1 hour, 2 hours, etc., something like the sketch below.
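     Roughly this escalating schedule is what I mean (illustrative Python only, not anything from the product; try_remount() is a made-up placeholder for whatever re-attaches the drive):
        # Sketch of an escalating auto-retry schedule for remounting a dropped drive.
        import time

        def try_remount():
            """Placeholder: attempt a remount and report success/failure."""
            return False

        intervals_min = [5, 10, 20, 30, 60, 120]
        for wait in intervals_min:
            print(f"Drive is offline - retrying the mount in {wait} minutes")
            time.sleep(wait * 60)
            if try_remount():
                print("Drive remounted")
                break
        else:
            print("Still down - waiting for a human to click Retry")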
  15. As mentioned, after I upgraded to the latest version the "CRC Errors" turned into "I/O Errors". When I checked my event log, I found Windows was reporting timeouts trying to read from the disk. I removed the disk and restarted CloudDrive. I used a new cache drive and all is good, so I assume it was my cache disk (although I have recently done a full read/write of the entire drive with no issues). It has been running fine since changing the cache disk. I ran a chkdsk on the cloud drive because obviously some data that was on the cache disk and could not be uploaded will be missing/invalid.
  16. Hi, go here: http://dl.covecube.com/CloudDriveWindows/beta/download/ Scroll to the bottom and download the latest version (1.0.0.722 at the time of writing). That has a bandwidth limiter in it; it's under "Drive Options" > Performance > I/O Performance. You can safely upgrade without causing any issues, unless you're using Amazon Cloud Drive, in which case ask StableBit first. You can change your cache drive by detaching the drive and then re-attaching it; it will give you the option of picking a new cache drive. You can do this safely as long as you don't have any data queued to upload. You'll lose your cache when you do it, of course. Cheers
  17. Now the other drive is failing too. The error has changed with the upgrade; I get this on both drives now, and nothing uploads at all.
     22:46:12.1: Warning: 0 : [IoManager:14] Error performing I/O operation on provider. Retrying. The request could not be performed because of an I/O device error
     22:46:30.5: Warning: 0 : [IoManager:14] Error performing I/O operation on provider. Retrying. The request could not be performed because of an I/O device error
     I tested downloading something and it works, but it gives a lot of "Rate Limit Exceeded" errors. Not sure if that is relevant or a separate issue, but it could be related. I would really appreciate someone from StableBit stepping in here, because this makes the product unusable. What information do you need from us? Cheers
  18. You just re-attach your existing drive on your server and all your data will be there. Give it 5 minutes to attach the drive and sync the file structure, then it will show up. To move the license, open the software on your desktop, click the cog on the top right, click "manage license", then click deactivate. Then you can re-activate it on your server.
  19. I attached a new drive on the same machine and it's running happily so not a connectivity problem. I can't detach the messed up drive because it still has data to upload that it can't upload. So I'm thinking I'll stop the service, rename the cache folder, restart the service and re-attach. It's 2am now and work tomorrow, so that's a problem for tomorrow.
  20. Hi, not that it helps you, but I have the exact same issue. It's just stuck on the same block, giving a CRC error, continuously for over 24 hours, with 168GB left to upload. I've upgraded to v722, restarted the machine, and cleared the cache. No dice. It downloads data just fine, so I'm thinking it's unlikely to be a connectivity issue. First I'm going to try attaching another drive on the same machine just to confirm the issue is specific to that virtual disk and not connectivity etc. Then I'm thinking I'll detach the drive, delete the hidden cache folder and re-attach. I'll end up with file links to data that doesn't exist because it hasn't been uploaded yet, but I'm hoping a chkdsk will fix that.
     CloudDrive.Service.exe Warning 0 [IoManager:29] Error performing I/O operation on provider. Retrying. Data error (cyclic redundancy check) 2016-09-13 11:27:46Z 908032716
     CloudDrive.Service.exe Warning 0 [IoManager:29] Error performing I/O operation on provider. Retrying. Data error (cyclic redundancy check) 2016-09-13 11:27:57Z 943501727
     CloudDrive.Service.exe Warning 0 [IoManager:29] Error performing I/O operation on provider. Retrying. Data error (cyclic redundancy check) 2016-09-13 11:28:08Z 978982975
     CloudDrive.Service.exe Warning 0 [IoManager:29] Error performing I/O operation on provider. Retrying. Data error (cyclic redundancy check) 2016-09-13 11:28:19Z 1015000538
     CloudDrive.Service.exe Warning 0 [IoManager:29] Error performing I/O operation on provider. Failed. Data error (cyclic redundancy check) 2016-09-13 11:28:30Z 1051238298
     CloudDrive.Service.exe Warning 0 [IoManager:29] [W] Error writing range: Offset=2,889,435,054,080. Length=10,485,760. Data error (cyclic redundancy check). 2016-09-13 11:28:30Z 1051571749
     I'll let you know how it goes.
  21. I'm no expert, but I've experienced similar problems. Open Disk Management in Windows and assign a drive letter to the disk.
  22. eldite

    Checking Cloud Data

    Thank you for your response. Yes, there could have been (and likely were) issues with the cache disk due to the power cut. I ran a chkdsk and it picked up some invalid file links. It wants to unmount the disk to fix them, so I'll do that later, but it looks like that will sort it out. The cache is only 500MB. CloudDrive shows 208GB of local data now (including cache, to-upload and pinned), but the size on disk is 280GB. I'll see what it shows when it's finished uploading. Would you recommend upgrading to the latest version from the wiki? Is it just as stable as any other version? I didn't realize there were so many newer releases available when I started with it. Cheers
  23. eldite

    Checking Cloud Data

    Hi. First of all, what a great product. There is just nothing else out there remotely similar to this. I've purchased 2 licenses now! I do have a couple of questions though.
     1. I'm migrating my data to the cloud; it's going to take several weeks, but that's fine. The other day we had a power cut, the UPS couldn't take it and the server shut down. When it started back up, the CloudDrive wasn't attached any more and I had to re-attach it. It warned me that an existing computer was still using it (which it wasn't, but I can see why it did that). Then when it re-attached, it cleared my cache drive. It appears as though everything that was queued to upload has fully uploaded, but I'm not sure that is actually the case. I think it is possible that the FAT shows the files as there when they're not really there. I've tried a few files to test this theory and they were fine, but there are too many to open every file individually. Is there some way to verify the data that is there? Would StableBit Scanner help with this at all? I was thinking of checking the first 1MB of each file or something (see the sketch at the end of this post) - validating the full file or performing a CRC over all the data is impractical given the volume of data and available bandwidth.
     2. I've got 334GB queued to upload right now according to StableBit CloudDrive (total local usage), but the size on disk of my cache disk for this data is 374GB. Is that normal? It seems unusual. I'm on 1.0.0.463, and my cloud drive is with Google. Cheers
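     For reference, this is roughly the first-1MB spot check I had in mind for question 1 (a sketch only; the drive letter is a placeholder for wherever the cloud drive is mounted, and it only proves the first megabyte of each file comes back, which is the compromise described above):
        # Try to read the first 1MB of every file on the cloud drive and log anything that errors out.
        import os

        root = "G:\\"          # placeholder - the mounted CloudDrive volume
        chunk = 1024 * 1024    # 1MB

        bad = []
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        f.read(chunk)    # forces the drive to actually fetch the data
                except OSError as e:
                    bad.append((path, e))
                    print(f"FAILED: {path}: {e}")

        print(f"Done, {len(bad)} files failed the 1MB read")
     Actually reading the files, rather than just listing them, is the point: the directory entries can show up even when the underlying data was never uploaded.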