Covecube Inc.
HotDogSandwich

Plex Recommended Settings

Question

Seems like this topic is discussed a lot, so I'd like to see if we can agree on some recommended settings for Plex and post them here.

 

So the question is... assuming a 1000/1000 gigabit connection, what CloudDrive settings are ideal to achieve reliable library scanning, as well as the ability to stream/transcode multiple (5-10) simultaneous streams.

 

I'll start with the settings I'm currently using, but please chime in as necessary and I can update the thread as we go. Thanks for your input!

 

Create Drive Settings:

 

[Screenshot: Create Drive settings]

 

I/O Performance Settings

 

[Screenshot: I/O Performance settings]


13 answers to this question

Recommended Posts


To be honest, those settings look good, for the most part.

 

I'd reduce the threads to 10/10, so you're not overusing the API rates. And I'd set "Prefetch forward" to 100MB.

 

Otherwise, I'd say use it and see how it goes. 


Thanks - I'll try those modifications and see if there is any improvement.

 

 

Side question - currently I'm using a dedicated 240GB SSD as my cache drive, thinking that an SSD will perform better for my use case. Would I be better off having a larger non-SSD drive, like say a 4TB spinning drive, or is the smaller SSD route sufficient?


Welcome! 

 

 

As for the cache drive, that depends. 

 

Do you have higher than 150MB/s download speed from your ISP, and from the provider you're using?

If so, then an SSD may be of benefit (though not by a huge amount, for now). If you're able to hit those speeds, the SSD's ability to handle random I/O better than HDDs may help with the read cache.

 

In either case, though, the SSD will absolutely benefit writes to the drive.


It might be necessary to adjust the prefetching depending on whether you are importing or watching content. From what I understand of prefetching, any time more than 1MB (the prefetch trigger) is downloaded from a file, CloudDrive will continue to download another 100MB (the prefetch forward) from that same file, on the assumption that you will need that part too. This is a good assumption when you are playing the whole file, and it certainly seemed to smooth out many of the playback issues I encountered when I first started using CloudDrive with Plex. However, if you are importing a ton of media to your Plex library, it might be overkill to download that extra data (especially set at 100MB forward), and it will probably end up slowing down the import process quite a bit. At the moment, I am experimenting with reducing or disabling prefetch during import and turning it back up when I'm done.
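That trigger/forward rule can be sketched in a few lines. To be clear, this is my own toy model of the behavior described above, not CloudDrive's actual code; the constant names and the exact comparison are assumptions:

```python
# Toy model of the prefetch rule described above; constant names and the
# exact comparison are my assumptions, not CloudDrive's real implementation.
MB = 1024 * 1024
PREFETCH_TRIGGER = 1 * MB     # "more than 1MB downloaded from a file"
PREFETCH_FORWARD = 100 * MB   # "...continue to download another 100MB"

def prefetch_bytes(bytes_read_from_file: int) -> int:
    """Return how much extra data to speculatively download from a file
    after an application has already read bytes_read_from_file of it."""
    if bytes_read_from_file > PREFETCH_TRIGGER:
        return PREFETCH_FORWARD
    return 0  # small reads stay below the trigger and prefetch nothing
```

Under this model, sequential playback quickly crosses the trigger and keeps 100MB queued ahead of the player, while lowering the forward value (or disabling prefetch entirely) during an import cuts that speculative download per file.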


What do you mean by importing?


He means the importing of titles to the library.

 

This will get better when they get an advanced pinning engine done, but for now the indexing is really slow. So remember to turn off auto update in Plex, and disable prefetch while indexing if you want to save bandwidth.


Now I understand; I'll disable prefetch when I do my initial index. If I turn it back on after the initial library load, will it work okay to auto-scan for new titles?


Auto scan runs horribly for me, so I switched to manual. But try it out and see :-)


However, if you are importing a ton of media to your Plex library, it might be overkill to download that extra data (especially set at 100MB forward), and it will probably end up slowing down the import process quite a bit.

 

True, but also remember that Plex (and other solutions) may read the file header as well (using FFmpeg or similar) to grab the codec and other information. So reading ahead, at least a bit, may help with performance while scanning the library.

 

 

But definitely let us know what you find to be optimal.


You can get around this by importing the content locally, and then moving it to your Clouddrive drive.

 

For example, I have 2 folders in my 'TV' library for Plex: X:\Video\TV and E:\Video\TV. X: is local; E: is CloudDrive.

 

All content is originally added to X:, where it is indexed. Then, once I've watched a whole season or want to 'archive' a show, I move it to E:.

 

Plex will not re-index since it notices that it is just the same file that has moved, updates the pointer to the file instantly, and everything works perfectly.


Hi. I have 3 different G Suite accounts with unlimited space. I try to use the space for my Plex collection (I use Plex mainly for my family's collection of movies and photos, not a cinema movie collection).

So I have 3 virtual StableBit drives on 3 accounts. Everything is encrypted.

After a few days, all 3 accounts stop accepting StableBit connections and I get errors. If I shut down StableBit CloudDrive for 48 hours or so, everything is back to normal for 2-3 days...

What am I doing wrong? The uploaded data is not much (2-3 GB/day, rarely more than 10). To me it looks like some kind of API ban from Google Drive...

Also, I have the scanner disabled in Plex.

Here are the errors:

2:20:42.5: Warning: 0 : [IoManager:58] Error performing read-modify-write I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
2:20:42.5: Warning: 0 : [IoManager:129] Error performing read-modify-write I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
2:20:42.5: Warning: 0 : [ChecksumBlocksChunkIoImplementation:59] Expected checksum mismatch for chunk 137, ChunkOffset=0x00C00300, ExpectedChecksum=0x87e9be2e581ad821, ComputedChecksum=0x41aff241ae641c63.
2:20:42.7: Warning: 0 : [IoManager:83] Error performing read-modify-write I/O operation on provider. Retrying. Checksum mismatch. Data read from the provider has been corrupted.
2:20:43.2: Warning: 0 : [ReadModifyWritePartialChunkRecoveryImplementation:59] Error reading chunk 137 for write (Offset=0x0000000000000000, Length=20971520). Retrying with duplicate chunk 439804651247 (DuplicatePart=2).

Attached data: 1.png, 2.png, New Text Document (2).txt


It probably would have been better to submit a new thread for this question, as it only tangentially relates to the topic of the thread, and the Plex information in this particular thread is a bit outdated. So for anyone stumbling on this thread for up-to-date Plex information, you'll likely want to continue your search.

Now, that being said, the errors that you're seeing are data integrity check errors. CloudDrive is telling you that the data it's getting from Google does not match the data it expected to receive. That leaves us with three possibilities: 1. CloudDrive has a bug and is, for some reason, expecting the wrong data; 2. Google is returning the wrong data when CloudDrive requests it; or 3. the data on Google's servers is genuinely corrupt. My intuition says that 2 is the most likely, both because a bug like that should be more widely reported, and because genuine data corruption shouldn't intermittently resolve itself. But determining which of them is actually the problem will require that you submit a ticket to the official support channel at https://stablebit.com/Contact so that they can troubleshoot with you.
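For anyone curious what an integrity check like that amounts to, here's an illustrative sketch using SHA-256. CloudDrive's actual checksum is its own 64-bit format (as the ExpectedChecksum values in the log suggest), so this only shows the general idea:

```python
import hashlib

def verify_chunk(downloaded: bytes, expected_digest: str) -> bool:
    """Hash the chunk returned by the provider and compare it with the
    checksum recorded when the chunk was uploaded. A mismatch means the
    provider returned different bytes than were stored."""
    return hashlib.sha256(downloaded).hexdigest() == expected_digest

# Recorded at upload time:
stored_digest = hashlib.sha256(b"original chunk bytes").hexdigest()
```

A failing comparison here is the analogue of the "Checksum mismatch" warnings above, which is why those errors point at the provider returning bad data rather than at an API rate limit.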

Those are not, notably, API ban errors. And CloudDrive is effectively immune from the API bans that tools like rClone get simply by virtue of its method of operation anyway. So whatever is going on here is likely unrelated to the use of the API, unless Google has changed something without warning (again).

