Everything posted by steffenmand

  1. It came back entirely! It was just a weird thing - maybe related to the Rate Limit Exceeded issues
  2. Came back after a few reboots :-)
  3. The point of CloudDrive is the chunk system. Just mount your drive in CloudDrive and you should be able to download. Everything is encrypted if you selected that during creation
  4. Did you also change the minimum download size? The default is 1 MB
  5. They are transcoding in the cloud and streaming to you, eliminating the need for a local Plex server
  6. Well, Plex Cloud is a direct cloud storage and playback solution where they get people to upload their content to Amazon through the Amazon app. So the only part different from the current StableBit CloudDrive would be an option to read the data for use with their media server back on their servers, instead of playback from the drive. So pretty much just an option in Plex to type in the encryption key, plus integration between CloudDrive and Plex to read into their media server. That way people would have a local cloud drive and also get their stuff playable directly in Plex :-) They also lack the encryption and the speed improvements from chunks, for which you have already laid a good path. It could bring you lots of customers. By partnering early you could get some good stuff going; if not, Plex could themselves move towards a similar solution, making StableBit CloudDrive obsolete for that group of users! Wouldn't it be nicer to have a good potential client base? I know they know of you, and often a good partnership just takes one party to begin the communication - I know this from my own company as well!
  7. Have you guys thought about talking with Plex about a partnership? You've got some proprietary software which could really improve their service, and I'm almost certain that the new Plex Cloud is going to cost subscription money even for Plex Pass holders, so I smell a really good potential revenue-share model you could achieve here... More money for both sides :-)
  8. It's not easy to reproduce completely, but it happens after several drive dismounts due to I/O errors. Let's say I have two drives, #1 named "Drive 1" and #2 named "Drive 2". They both dismount and I press retry on both within a few seconds. Then the weird part happens: after the remount, both drives are named "Drive 2", and every time I copy items to #2, the "To Upload" is shown on #1, as if #1 thought it was #2. The folders, however, are fine on each drive. Detach and reattach fixes the issue, but the saved name may change permanently, so I have to remember to rename when attaching the drive again. I have now disabled the auto-dismount-on-I/O-errors feature in my settings to avoid the issue, as I sometimes woke up to both my drives being dismounted due to "unable to download from" etc. Regarding Rate Limit Exceeded: I get it all the time with 20 MB chunks and 20 MB partial reads. My line is simply too fast (1 Gbit) and I would love larger chunks to solve the issue (maybe 50 MB or up to 100 MB). The issue comes because my requests finish so fast that I make too many API calls; instead I get the error and have to retry tons of threads. About 50% of the threads are orange (the retry color) if I look in the technical window. I have uploaded several logs to you over the last few months - they should all show the same
  9. Happened again. It happens if both drives dismounted and you hit retry on both at the same time. Then both get registered as the same drive
  10. Well, I had lots pinned when upgrading from .721 to .722. Then after a reboot, whoop, the pinned data was gone, and now I can only get 420 KB pinned (it simply doesn't pin the folders and metadata; pinning finishes in 15 seconds)
  11. Amazon is not ready for public use yet - it's in developer mode awaiting approval from Amazon. Use Google Drive Unlimited instead and get 20 threads
  12. And what is the minimum download size? Also 10? I just checked Amazon myself: I can get max 200 Mbit with 20 threads and 20 MB chunks. In general I can't get above 10 Mbit per file on Amazon - maybe because it is being run in developer mode. Google gives way better speeds, but unfortunately we are limited on the chunk size on that provider, which leads to Rate Limit Exceeded errors
  13. I've started getting only 420 KB pinned again in .722 - did something change?
  14. Have you increased the minimum chunk size? The delay is mostly HTTP latency for me, so I use 20 MB pieces, giving me around 400 Mbit upload
  15. I have two drives for two different Google accounts, set up as two separate drives. Here comes the weird part: after a dismount (due to rate limit - please raise the chunk size!), everything copied to drive #2 suddenly appears as "To Upload" on drive #1. I'm kind of worried that the files won't be placed in the right place, as I use the drives individually for different items. EDIT #1: I noticed that both drives suddenly have the same drive name, even though the names are different inside the StableBit overview. Did it somehow think it is the same drive twice? Entering the drives, they have the correct files on each, but the names are just the same... What to do? The logs don't show anything different from the normal upload stuff, with lots of "Rate Limit Exceeded", "User Rate Limit Exceeded" and "Internal Error". EDIT #2: After having detached both drives, they now show up as the same drive when I want to attach. It would seem that somehow stuff from the first drive got copied to the second drive, so they think they are the same drive. Hopefully I didn't lose everything on the drive. EDIT #3: After mounting both again, they are fine. Really weird issue!
  16. Hi Christoffer, I know I have asked lots of times, but what are the options for larger chunks on Google? I'm getting a lot of Rate Limit Exceeded errors, and larger chunks would help slow down the API calls as well as increase general upload/download speed, since response time is the real speed killer at the moment. You had it working on Amazon before the new API - shouldn't it be easy to migrate that to the Google provider? I know you most likely won't notice a difference, but those of us with really high speeds really do, and 20 MB is still too small, which results in the rate exceeded errors due to excessive API calls
  17. Try increasing the cache in Plex! I would also change the prefetch amount to one that fits your number of threads, i.e. threads × minimal download size. I use 400 MB (20 × 20 MB) and a very large cache in Plex, which runs OK
  18. Requiring proper passwords is, in my world, just great. It can't be that hard to remember a password with a number and a special character
  19. Auto scan runs horribly for me, so I switched to manual - but try it out and see :-)
  20. He means the importing of titles into the library. This will get better when the advanced pinning engine is done, but for now the indexing is really slow, so remember to turn off auto-update in Plex and disable prefetch while indexing if you want to save bandwidth
  21. Mine are still bad: 2,000-12,000 ms. Will try Fiddler soon.
  22. I think it means you are reaching the max API calls per second and that the thread got cancelled. It will just be retried
  23. It's only prefetch that will activate more threads. If you have set the prefetch amount very low, it might be able to grab it all in a single thread. I prefetch 400 MB (20 × 20 MB) at a 160-second timeout. It will start fetching with 20 threads and then drop to 1-2 as it grabs ahead while you read
  24. Using the version listed in the OP I got around 5 ms from Google. Over time the application just got a slower and slower response time, and it's not due to the network - it is most definitely something in CloudDrive causing the extra time. When I first tried Google I got 400 ms; now it sometimes peaks above 12,000 ms. Using .670 the response times are perfect - most likely because some code is disabled there - and I went from 300 Mbit download to 700-800 Mbit on the same chunk size. Too bad that release was flawed :-D
  25. A quick note: some of the new features will require you to create a new drive if you want to use them - for instance the larger minimal download size, among others. Whether you need them depends on your connection, but I recommend creating your drive with the changes before storing too much
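The chunk-size complaint in posts 8 and 16 comes down to simple arithmetic: a saturated line issues one API request per chunk, so bigger chunks mean fewer requests per second. A back-of-the-envelope sketch, using the line speed and chunk sizes from the posts (the one-request-per-chunk model is an assumption, not anything from CloudDrive itself):

```python
# Rough arithmetic: how many API requests per second a saturated line
# generates at a given chunk size. One request per chunk is assumed.

def requests_per_second(line_mbit: float, chunk_mb: float) -> float:
    """Requests/sec needed to keep the line saturated."""
    line_mb_per_sec = line_mbit / 8   # Mbit/s -> MB/s
    return line_mb_per_sec / chunk_mb  # one request per chunk

print(requests_per_second(1000, 20))   # 1 Gbit line, 20 MB chunks -> 6.25
print(requests_per_second(1000, 100))  # same line, 100 MB chunks -> 1.25
```

So raising the chunk size from 20 MB to 100 MB would cut the request rate to a fifth on the same line, which is the poster's argument for larger chunks.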
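The prefetch sizing rule of thumb in posts 17 and 23 - set the prefetch amount to threads × minimal download size, so every download thread has a full-size read when a prefetch triggers - is a one-line calculation. A minimal sketch with the poster's own numbers:

```python
# Prefetch sizing rule of thumb from the posts:
# prefetch amount (MB) = download threads x minimal download size (MB).

def prefetch_amount_mb(threads: int, min_download_mb: int) -> int:
    return threads * min_download_mb

# The poster's settings: 20 threads x 20 MB chunks.
print(prefetch_amount_mb(20, 20))  # 400
```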
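Post 22 describes rate-limited threads simply being cancelled and retried. The usual way clients handle this class of error is retry with exponential backoff, sketched below. This is a generic illustration of the pattern, not CloudDrive's actual code; the exception class and function names are hypothetical:

```python
import time

class RateLimitExceeded(Exception):
    """Stand-in for a provider's rate-limit error (hypothetical)."""

def with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Call fn, doubling the wait after each rate-limit failure."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitExceeded:
            if attempt == max_attempts - 1:
                raise  # out of attempts, propagate the error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Hypothetical flaky download: fails twice, then succeeds.
calls = {"n": 0}
def flaky_download():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitExceeded()
    return "chunk data"

print(with_backoff(flaky_download, base_delay=0.01))  # prints: chunk data
```

The growing delays drop the API call rate below the quota instead of hammering the provider with immediate retries, which is why "it will just be retried" normally resolves itself.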