  • 0

Optimal settings for Plex


Zanena

Question

I bought the full StableBit suite a couple of weeks ago and I must admit my money was well spent. I'm now running a Plex media server with these specs:

Xeon E3-1231v3

16GB ddr3

2x4TB drives

1000/1000 Mbps dedicated connection

Everything is running smoothly, but after reading some threads on the forums I started to doubt that my settings are ideal. Since I haven't uploaded much data yet, I can still create a new drive and upload everything again easily. My main concern is that when I set up the drive I chose a 10MB chunk size instead of the 20MB that many seem to be using, and I wonder whether there are any major differences between the two.

Also, I'd like to know which settings you're using to get the best out of CloudDrive and Plex.



Recommended Posts

  • 0
On 6/6/2020 at 9:29 AM, otravers said:

Thanks! Sounds like this is handled without needing a reupload of these files, right?

I'm still on the fence because I don't know that I like the idea of losing control of the volume on which specific files/folders end up being stored.

Correct. Nothing needs to be reuploaded to just move the data around on the same volume. Note that you can still control the placement of files and folders with the DrivePool balancing settings, though not quite as granularly as you could with a single volume. 


  • 0
On 1/22/2019 at 4:35 PM, srcrist said:

So let's say you have an existing CloudDrive volume at E:.

  • First you'll use DrivePool to create a new pool, D:, and add E:
  • Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
  • Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
  • Then you go back to DrivePool, add F: to your D: pool. The pool now contains both E: and F:
  • Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool.
  • Then just navigate to D: and all of your content will be there, as well as plenty of room for more.
  • You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier. 
  • NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. 
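The critical move step above (everything on E: into the hidden PoolPart folder) can be simulated on a scratch directory tree. This is a Python sketch with placeholder paths and a made-up GUID, not the real drives:

```python
# Simulated "move everything into the PoolPart folder" step.
# In reality the source is your CloudDrive volume (e.g. E:\) and the
# destination is the hidden PoolPart.<guid> folder that DrivePool creates.
# All paths and the GUID here are placeholders for illustration only.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())          # stands in for E:\
pool = root / "PoolPart.4a5d6340-demo"   # stands in for the hidden pool folder
pool.mkdir()
(root / "Media").mkdir()
(root / "Media" / "film.mkv").write_text("movie")

# Move every top-level item except the PoolPart folder itself:
for item in root.iterdir():
    if item.name.startswith("PoolPart."):
        continue
    shutil.move(str(item), str(pool / item.name))

print((pool / "Media" / "film.mkv").exists())  # True
```

On Windows you would of course do this move in Explorer or with robocopy; the point is that it stays on the same volume, which is why it requires no reupload.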

For the step where you're moving the data from the hidden PoolPart folder on your E: drive, are you moving the data to the new D: drive pool, or to the F: drive you created from the new 55TB of space? I have a slow upload speed, so I really don't want to wait months to reupload 7TB of data.


And what is the purpose of creating the F: drive? Why create a new drive rather than just expand the size of the E: drive and add it to the D: pool all on its own? Is there an advantage to having 2 partitions rather than just 1?


  • 0
12 hours ago, Niloc said:

For the step where you're moving the data from the hidden PoolPart folder on your E: drive, are you moving the data to the new D: drive pool, or to the F: drive you created from the new 55TB of space? I have a slow upload speed, so I really don't want to wait months to reupload 7TB of data.

And what is the purpose of creating the F: drive? Why create a new drive rather than just expand the size of the E: drive and add it to the D: pool all on its own? Is there an advantage to having 2 partitions rather than just 1?

If you're following the instructions correctly, you are simply reshuffling data around on the same drives. They are file system level changes, and will not require any data to be reuploaded. It should complete in a matter of seconds.

And the purpose of the multiple partitions is covered above: 1) to keep each volume below the 60TB limit for VSS and Chkdsk, and, 2) to avoid multiple caches thrashing the same drive and to make bandwidth management easier. If you have multiple cache drives then you can, of course, use multiple CloudDrive drives instead of volumes on the same drive, but make sure that you can actually support that overhead. I'd suggest an SSD cache drive per CloudDrive drive--the cache can be surprisingly resource heavy, depending on your usage. In any case, though, there isn't really a compelling need to use multiple CloudDrive drives over volumes--especially if the drives will all be stored on the same provider space. There just isn't really any advantage to doing so.
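As a back-of-the-envelope check on the 60TB ceiling mentioned above, you can work out how many volumes a given total capacity needs. The 60TB figure comes from the post; the helper function and the capacities below are illustrative assumptions:

```python
# Back-of-the-envelope volume planning around the ~60 TB VSS/chkdsk
# ceiling discussed above. The 60 TB figure comes from the post; the
# helper and example capacities are illustrative assumptions.
import math

VOLUME_LIMIT_TB = 60

def volumes_needed(total_tb: float, limit_tb: float = VOLUME_LIMIT_TB) -> int:
    """Smallest number of volumes, each at or under the limit."""
    return math.ceil(total_tb / limit_tb)

print(volumes_needed(55))   # 1  (a 55 TB expansion fits in one volume)
print(volumes_needed(120))  # 2  (a 120 TB drive needs at least two volumes)
```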


  • 0
8 hours ago, srcrist said:

If you're following the instructions correctly, you are simply reshuffling data around on the same drives. They are file system level changes, and will not require any data to be reuploaded. It should complete in a matter of seconds.

Thanks for explaining. Does this include moving the .covefs folder?


  • 0

What would be the optimal settings for Plex if I want to buffer more locally before Plex starts playback? I am quite often getting choppy video and messages that my connection is too slow for these settings. I have a 1000/1000 connection and gigabit Ethernet in the house. I am using Google Drive.

