
darkly

Members · 86 posts · Days Won: 2
Everything posted by darkly

  1. Is this behavior only on the beta? I was on the main release before, and I remember the drive dismounted entirely the first night I left a transfer running. Also, can someone clarify some information on the ban? I've read that it's based on the number of API calls, but I've also read that it's based on total bandwidth used. Even so, I've heard it kicks in at around 750GB, but I've only managed an average of 500GB a day over the time I've been uploading. Are there two separate bans, one for API calls and one for bandwidth?
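For what it's worth, Google's API documentation recommends exponential backoff with jitter when a request comes back rate-limited (HTTP 403). A minimal sketch of that retry pattern — `RateLimitError` and `upload_with_backoff` are hypothetical names for illustration, not anything from CloudDrive:

```python
import random
import time

class RateLimitError(Exception):
    """Stands in for a 403 rate-limit response from the API."""

def upload_with_backoff(do_upload, max_retries=5):
    """Retry an upload, backing off exponentially on rate limits."""
    for attempt in range(max_retries):
        try:
            return do_upload()
        except RateLimitError:
            # Wait 2^attempt seconds plus random jitter, per the
            # backoff strategy Google's API docs recommend.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("upload failed after retries")
```

The jitter matters when many worker threads hit the limit at once: without it, they all retry in lockstep and trip the limit again.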
  2. So reading and writing to this pooled disk would only trigger one set of cache reads/writes in the CloudDrive service? Interesting. What about in conjunction with the last thing I said? Could I, for example, have two gdrive accounts, set up a 500TB clouddrive on each, configure each clouddrive with 10x50TB partitions, pool each set of 10 partitions into a 500TB pool so that any data written to it is split across all 10 partitions, then pool those two 500TB pools into a 1PB pool that duplicates all data onto both 500TB sides on the two different gdrives? EDIT: It just occurred to me: does creating multiple partitions inside one clouddrive address the original issue of this thread, the time it takes to reindex a large drive because the indexing process starts over every time it's interrupted? Is each partition indexed separately?
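Just to model the arithmetic of the layered setup described above (the `Disk`/`Pool` names are hypothetical illustration, not CloudDrive's API): striping sums member capacities, while mirroring duplicates, so usable space is bounded by the smallest side.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Disk:
    """A single CloudDrive partition (size in TB)."""
    size_tb: int

@dataclass
class Pool:
    """A DrivePool-style pool: 'stripe' spreads data across
    members, 'mirror' duplicates data onto every member."""
    members: List[object]
    mode: str = "stripe"

    def capacity_tb(self) -> int:
        sizes = [m.size_tb if isinstance(m, Disk) else m.capacity_tb()
                 for m in self.members]
        # Striping adds capacities; mirroring is limited by the
        # smallest member, since every byte lands on all of them.
        return sum(sizes) if self.mode == "stripe" else min(sizes)

# One account: ten 50 TB partitions striped together = 500 TB.
def account():
    return Pool([Disk(50) for _ in range(10)])

# Two accounts mirrored: 1 PB raw, but 500 TB usable.
mirror = Pool([account(), account()], mode="mirror")
```

So the top-level "1PB" pool would show 500TB of usable space, with every file stored once on each gdrive.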
  3. I'm not sure it's just this, though, because I've hit the bandwidth cap before and the message is different: it says nothing about the thread being aborted. Also, it happens rapidly, like a dozen times, but then the upload continues. When I hit the bandwidth cap, I get another message entirely, and I keep getting it for the rest of the day until the cap is lifted and I'm able to continue uploading. So, if I'm understanding correctly, the logs include all information about files and their activity, as well as all web traffic? Fair enough, but is there a way (email?) to send this your way without posting it here for all to see? I'm glad I wasn't drinking something. I think this is yours now. EDIT: I'm now getting the very distinct "user rate limit exceeded" error that will continue for the rest of the night until my ban is up, unlike the "thread was being aborted" error, which pops up throughout the day (sometimes in rapid succession) before allowing me to continue uploading.
  4. I think the biggest concern I have is that I'm running Windows Server 2012 (R2 is not compatible with my hardware; last time I had it, it worked for a while until an update made it boot-loop and the OS was unrecoverable), which, if I've understood what I've read correctly, uses an older version of ReFS that was updated in either R2 or 2016. I'm not too filesystem-savvy, so I'm not sure of the technical details, but I'm guessing that if it was updated, there's a good reason for it. It wouldn't be hard to switch to 55TB drives pooled, but I'm concerned about how that might affect my performance with so many drives reading and writing from cache. Here's an option for me, though: what if I created another G Suite account and made another drive of the same size on its gdrive? Could I pool the two together and have it mirror everything between the two drives? Am I correct in my assumption that I'd have to move everything again, since the files are currently on the first drive itself and not the pool? That would at least save me if one of the drives did go bad like in your case, right?
  5. I also forgot to mention: since upgrading to the beta, I've been seeing this error pop up several times throughout the day (usually quite a few times in a row before uploading continues): "Cloud drive <my_drive_name> is having trouble uploading data to Google Drive in a timely manner. Error: Thread was being aborted. This error has occurred n times. This error can be caused by having insufficient bandwidth or bandwidth throttling. Your data will be cached locally if this persists." I never saw this before switching to the beta. Any ideas what causes it?
  6. Can I ask what information is collected in the logs?
  7. I should also point out that I'm in a better position to switch to that structure, as all my files have up until this point been on local storage, and I only started the upload to gdrive a couple of days ago.
  8. Other than this reindexing issue (assuming it'll get fixed eventually) and gdrive's upload caps (I've got plenty of time on my hands), are there any serious downsides to what I'm doing? Basically, if there's no good reason to switch TO that method, are there any good reasons to get OFF OF this method?
  9. Wouldn't this only affect it if I had just started uploading? I'm talking about an upload queue that already has 50+GB of data in it, so it should already have plenty to group together and upload. By itself it'll upload at 600+ Mbps, but if I start copying something else onto the virtual drive, the speed plummets. I'm not ruling this out; I just wanted to see if it was something others had reported before going through a bunch of I/O tests. The exact version is 1.0.2.929.
  10. @srcrist, any updates on your experience since pooling 64TB drives instead of using one large one? I'm currently using a massive ReFS volume and I'm considering switching over to what you did. Worth the time?
  11. Can someone explain why this happens? First, if I understand correctly, this is the process for uploading a file:
      1. I copy a file from a local disk to the virtual disk.
      2. That file is copied to a cache stored on the local disk, which queues it for upload.
      3. CloudDrive uploads whatever is sitting in the upload queue to GDrive (my cloud provider).
      What I'm noticing is that when my upload queue has things in it and I'm not currently copying anything more, I get upload speeds upwards of 600 Mbps, but when I start copying files to the virtual drive, the upload throttles down to 90-120 Mbps. What would cause this? Using the latest beta.
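The pipeline above can be sketched as a simple producer-consumer setup (all names are illustrative, not CloudDrive internals). The thing the sketch makes visible is that the copy step and the upload step both touch the same cache disk, so concurrent copying competes with the uploader for disk I/O — which might explain the slowdown:

```python
import queue
import threading

def copy_to_cache(files, cache):
    """Steps 1-2: copying to the virtual disk lands each chunk
    in a local cache that doubles as the upload queue."""
    for f in files:
        cache.put(f)
    cache.put(None)  # sentinel: nothing more to copy

def uploader(cache, uploaded):
    """Step 3: the service drains the queue to the provider."""
    while True:
        item = cache.get()
        if item is None:
            break
        uploaded.append(item)  # stands in for the GDrive upload

cache = queue.Queue()
uploaded = []
t = threading.Thread(target=uploader, args=(cache, uploaded))
t.start()
copy_to_cache(["chunk1", "chunk2", "chunk3"], cache)
t.join()
```

In the real service both sides issue reads and writes against the same physical cache drive at once, so the uploader's read throughput drops whenever the copy side is writing.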