
steffenmand

Members · Posts: 418 · Days Won: 20

Posts posted by steffenmand

  1. Have uploaded logs to that Dropbox link; I wasn't able to reference this thread when uploading, but hopefully they end up where they need to go.

     

     I watched the prefetcher pull roughly 50 MB of data in consecutive 1 MB reads; ideally that should have been consolidated into about five 10 MB reads. The drive was created with default settings (10 MB chunk size).

     

     Did you also change the minimum download size? The default is 1 MB.
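An editorial aside on the consolidation being asked for in post 1: merging runs of consecutive small reads into larger ranged requests is simple to sketch. The function below is a minimal illustration only, not CloudDrive's actual prefetcher (whose internals are not public); `max_request` stands in for the cap the chunk size would impose.

```python
def coalesce_reads(offsets, read_size, max_request):
    """Merge runs of consecutive fixed-size reads into larger ranged requests."""
    requests = []
    start = end = None
    for off in sorted(offsets):
        if start is None:
            start, end = off, off + read_size
        elif off == end and (end + read_size) - start <= max_request:
            end += read_size  # extend the current ranged request
        else:
            requests.append((start, end - start))  # flush as (offset, length)
            start, end = off, off + read_size
    if start is not None:
        requests.append((start, end - start))
    return requests

MB = 1024 * 1024
# Fifty consecutive 1 MB reads, capped at 10 MB per request.
reqs = coalesce_reads([i * MB for i in range(50)], MB, 10 * MB)
```

With these inputs the fifty 1 MB reads collapse into five 10 MB requests, which is the behaviour the post expects from the prefetcher.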

  2. Is Plex (well, Amazon) transcoding in the cloud and then streaming to the device, or is it downloading to the Plex server, being transcoded and then sent to the device?

     

     From what I have read it's not clear to me, but I can only see the marketing bumf.

     They are transcoding in the cloud and streaming to you. They are eliminating the need for a local Plex server.

  3. Well, Plex Cloud is a direct cloud storage and playback solution where they get people to upload their content to Amazon through the Amazon app. So the only part different from the current StableBit CloudDrive would be an option to read data for use with their media server back on their own servers, instead of playback from the drive.

     So pretty much just an option in Plex to enter the encryption key, plus integration between CloudDrive and Plex to read the data into their media server.

     

     That way people would have a local cloud drive and also get their content playable directly in Plex :-)

     

     They also lack the encryption and the speed improvements from chunking, which you have already set up a good path for.

     

     Could bring you lots of customers :)

     

     By partnering early you could get something good going; if not, Plex could themselves move towards a similar solution, making StableBit CloudDrive obsolete for that group of users! Wouldn't it be nicer to have a good potential client base? :) I know they know of you, and often a good partnership just takes one party to start the conversation :) I know this from my own company as well!

  4. Have you guys thought about talking to Plex about a partnership? You have some proprietary software that could really improve their service, and I'm almost certain that the new Plex Cloud is going to cost subscription money even for Plex Pass holders, so I smell a really good potential revenue-share model you could achieve here...

     

    More money for both sides :-)

  5. It's not easy to reproduce completely, but it happens after several drives dismount due to I/O errors.

     

     Let's say I have two drives: #1 named "Drive 1" and #2 named "Drive 2".

     

     They both dismount and I press Retry on both within a few seconds.

     

    Then the weird part happened. 

     

     After the remount, both drives were named "Drive 2", and every time I copied items to #2, all the "To Upload" data was shown on #1, as if #1 thought it was #2. The folders, however, were fine on each drive. Detaching and reattaching fixes the issue, but the saved name may change permanently, so I have to remember to rename the drive when attaching it again.

     

     I have now disabled the auto-dismount on I/O errors feature in my settings to avoid the issue, as I sometimes woke up to see both my drives dismounted due to "unable to download from" bla bla.

     

     Regarding Rate Limit Exceeded: I get it all the time with 20 MB chunks and 20 MB partial reads. My line is simply too fast (1 Gbit), and I would love larger chunks to solve the issue (maybe 50 MB or up to 100 MB).

     The issue arises because my transfers finish so fast that I make too many API calls. So instead I get the error and have to retry tons of threads :) About 50% of the threads are orange (the retry colour) if I look in the technical window.

     

     I have uploaded several logs to you over the last few months; they should all show the same :)

  6. From what build to 722?

     

    And the change log, while not super detailed, does include references to ALL changes. 

     

     Well, I had lots of data pinned when installing from 721 to 722. Then after a reboot, whoop, the pinned data was gone, and now I can only get 420 KB pinned (it simply doesn't pin the folders and metadata; pinning finishes in 15 seconds).

  7. And what is the minimum download size? Also 10?

     

     Just checked Amazon myself. I can get a maximum of 200 Mbit with 20 threads and 20 MB chunks. In general I can't get above 10 Mbit per file on Amazon; maybe it's because it is being run in developer mode.

     

     Google gives way better speeds, but unfortunately we are limited on the chunk size with that provider, which leads to Rate Limit Exceeded errors.
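An editorial aside on why per-file speed stalls in post 7 even though the line itself is fast: if every chunk request pays one round trip to the provider, per-thread throughput is capped by latency, not bandwidth. A back-of-the-envelope model with hypothetical numbers (this is not measured CloudDrive behaviour):

```python
def effective_mbit(chunk_mb, rtt_s, line_mbit):
    """Per-thread throughput when every chunk request pays one round trip."""
    transfer_s = (chunk_mb * 8) / line_mbit   # seconds spent on the wire
    return (chunk_mb * 8) / (rtt_s + transfer_s)

# 20 MB chunks on a 1 Gbit line: roughly 286 Mbit/s per thread at a
# 0.4 s round trip, but only about 13 Mbit/s if the provider's
# response time climbs to 12 s (figures from post 12).
```

Under this model, 20 threads saturate the line in the first case; in the second, 20 threads top out around 260 Mbit, which is in the ballpark of the speeds reported in these posts.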

  8. I have two drives for two different Google accounts, so I have two separate drives set up.

     

    Here comes the weird part.

     

     After a dismount (due to rate limiting (please raise the chunk size!)), everything copied to drive #2 suddenly appears as "To Upload" on drive #1. I'm worried that the files won't be placed in the right location, as I use the drives individually for different things.

     

     EDIT #1: I noticed that both drives suddenly have the same drive name, even though the names are different inside the StableBit overview. Did it somehow think it is the same drive twice? Entering the drives, they have the correct files on each, but the names are just the same...

     

     What to do? The logs don't show anything beyond the normal upload output, with lots of "Rate Limit Exceeded", "User Rate Limit Exceeded" and "Internal Error".

     

     Edit #2: After detaching both drives, they now show up as the same drive when I want to attach :( It would seem that somehow data from the first drive got copied to the second, so they think they are the same drive. Hopefully I didn't lose everything on the drive :(

     

     Edit #3: After mounting both again, they are now fine. Really weird issue!

  9. Hi Christoffer,

     

     I know I have asked lots of times, but what are the options for larger chunks on Google? I'm getting a lot of Rate Limit Exceeded errors, and larger chunks would help slow down the API calls as well as increase general upload/download speed, as response time is the real speed killer at the moment.

     

     You had it working on Amazon before the new API; shouldn't it be easy to migrate that to the Google provider? I know you most likely won't notice a difference, but those of us with really high speeds do, and 20 MB is still too small at the moment, which results in Rate Limit Exceeded errors due to excessive API calls.
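An editorial aside on the "Rate Limit Exceeded" errors recurring through posts 5, 8 and 9: larger chunks directly cut the request count (a 1 GB file is about 50 requests at 20 MB chunks but only about 10 at 100 MB), and the standard client-side remedy for the 403s that remain is retrying with exponential backoff plus jitter. A minimal sketch, assuming a hypothetical `RateLimitError` wrapper around the provider's response; this is not how CloudDrive's retry logic is actually implemented:

```python
import random
import time

class RateLimitError(Exception):
    """Hypothetical stand-in for a provider's 'Rate Limit Exceeded' response."""

def with_backoff(request, max_tries=6, sleep=time.sleep):
    """Retry a rate-limited call, doubling the wait each attempt plus jitter."""
    for attempt in range(max_tries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise  # give up after the final attempt
            sleep(min(2 ** attempt + random.random(), 32))  # capped backoff
```

Each retry roughly doubles the wait (1 s, 2 s, 4 s, ...), so a burst of threads that all hit the limit spreads itself out instead of hammering the API again in lockstep.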

  10. Now I understand, I'll disable prefetch when I do my initial index. If I turn it back on after the initial library load, will it work okay to auto-scan for new titles?

     

     Auto scan runs horribly for me, so I switched to manual. But try it out and see :-)

  11. What do you mean by importing?

     

     He means the importing of titles into the library.

     

     This will get better when they get an advanced pinning engine done, but for now the indexing is really slow, so remember to turn off auto-update in Plex and disable prefetch while indexing if you want to save bandwidth.

  12. Using the version listed in the OP I got something like 5 ms from Google. Over time the application just got a slower and slower response time, and it's not due to anything on the network.

     

     It is most definitely something in CloudDrive causing the extra time.

     

     When I first tried Google I got 400 ms; now it sometimes peaks above 12,000 ms.

     

     Using .670, response times are perfect, most likely because some code is disabled there; I went from 300 Mbit download to 700-800 Mbit on the same chunk size. Too bad that release was flawed :-D
