  • 0

Optimal settings for Plex


Zanena

Question

I bought the full StableBit suite a couple of weeks ago and I must admit my money was well spent. Now I'm running a Plex media server with these specs:

  • Xeon E3-1231v3
  • 16GB DDR3
  • 2x 4TB drives
  • 1000/1000 Mbps dedicated connection

Everything is running smoothly, but after reading some threads on the forums I started to doubt that my settings are ideal, and since I still haven't uploaded much data, I can still create a new drive and upload everything again easily. My main concern is that when I set up the drive I chose a 10MB chunk size instead of the 20MB that I see many are using, and I wonder whether there are any major differences between the two.

I'd also like to know which settings you're using to get the best out of CloudDrive and Plex.

 


Recommended Posts

  • 2

If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex (some quick arithmetic on these numbers follows the list):

  • 20MB chunk size
  • 50+ GB Expandable cache
  • 10 download threads
  • 5 upload threads, turn off background i/o
  • upload threshold 1MB or 5 minutes
  • minimum download size 20MB
  • 20MB Prefetch trigger
  • 175MB Prefetch forward
  • 10 second Prefetch time window
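To put rough numbers on those settings, here is a quick back-of-the-envelope sketch in Python. The link speed and stream bitrates are assumptions for illustration (based on the 1gbps line mentioned above), not anything CloudDrive itself reports.

```python
# Quick arithmetic on the recommended settings above. Purely illustrative:
# link speed and stream bitrates are assumptions, not CloudDrive internals.

LINK_MBPS = 1000           # OP's 1gbps dedicated connection (assumed saturated)
CHUNK_MB = 20              # recommended chunk size
PREFETCH_FORWARD_MB = 175  # recommended prefetch forward

# Time to pull one full chunk at line rate (ignores latency and API overhead).
chunk_seconds = CHUNK_MB * 8 / LINK_MBPS
print(f"One {CHUNK_MB}MB chunk at {LINK_MBPS}mbps: ~{chunk_seconds:.2f}s")

# How much playback the prefetch buffer covers at a few assumed stream bitrates.
for bitrate_mbps in (8, 16, 40):  # e.g. 720p, 1080p, 4K-ish encodes (assumed)
    seconds_covered = PREFETCH_FORWARD_MB * 8 / bitrate_mbps
    print(f"{PREFETCH_FORWARD_MB}MB prefetch at {bitrate_mbps}mbps: ~{seconds_covered:.0f}s of video buffered")
```

The point being that a 20MB chunk is a fraction of a second of download on this connection, and 175MB of prefetch is well over a minute of buffer for a typical 1080p stream.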

  • 1

Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives. 

So let's say you have an existing CloudDrive volume at E:.

  • First you'll use DrivePool to create a new pool, D:, and add E:
  • Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
  • Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
  • Then you go back to DrivePool, add F: to your D: pool. The pool now contains both E: and F:
  • Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool (a small script sketch of this move step follows the list).
  • Then just navigate to D: and all of your content will be there, as well as plenty of room for more.
  • You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier. 
  • NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. 
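To make the move step above concrete, here is a minimal Python sketch of it. The drive letter is an assumption and the PoolPart folder name will differ on your system; plain cut-and-paste in Windows Explorer works just as well. The only thing that matters is that it is a move on the same volume, so nothing gets re-uploaded.

```python
# Sketch of the "move everything into the hidden PoolPart folder" step.
# Assumptions: the CloudDrive volume is E:\ and DrivePool has already created
# a PoolPart.* folder there. Moves within one volume are rename operations.
import shutil
from pathlib import Path

VOLUME = Path("E:/")                        # existing CloudDrive volume (assumed)
poolpart = next(VOLUME.glob("PoolPart.*"))  # hidden folder created by DrivePool

for entry in VOLUME.iterdir():
    # Skip the PoolPart folder itself and Windows metadata folders.
    if entry.name.startswith("PoolPart.") or entry.name in ("System Volume Information", "$RECYCLE.BIN"):
        continue
    print(f"Moving {entry} -> {poolpart / entry.name}")
    shutil.move(str(entry), str(poolpart / entry.name))
```

Double-check the printed source/destination pairs before trusting it with real data; once the move completes, the content shows up in the pool immediately.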

My drive layout looks like this:

 

DriveLayout.png


  • 0
10 hours ago, srcrist said:

If you haven't uploaded much, go ahead and change the chunk size to 20MB. You'll want the larger chunk size both for throughput and capacity. Go with these settings for Plex:

  • 20MB chunk size
  • 50+ GB Expandable cache
  • 10 download threads
  • 5 upload threads, turn off background i/o
  • upload threshold 1MB or 5 minutes
  • minimum download size 20MB
  • 20MB Prefetch trigger
  • 175MB Prefetch forward
  • 10 second Prefetch time window

Thanks for the suggestion. I thought 20MB chunks would be worse when indexing or retrieving metadata for large libraries (because it wouldn't download just the file's header, but also a small part of the actual video).

Also what about the sector size and chunk cache size options?


  • 0
On 1/12/2019 at 6:12 AM, Zanena said:

Thanks for the suggestion. I thought 20MB chunks would be worse when indexing or retrieving metadata for large libraries (because it wouldn't download just the file's header, but also a small part of the actual video).

Also what about the sector size and chunk cache size options?

The difference between fetching a single 10MB chunk and a single 20MB chunk on a 1gbps connection is a matter of fractions of a second. Not worth worrying about. CloudDrive is also capable of accessing partial chunks. It doesn't necessarily need to download the entire thing at a time. That is a separate setting. 

You can set your chunk cache size to as large as your system's memory can handle, and the cluster size is really only important with respect to the maximum size of the drive. You'll just need to do the math and decide what cluster size works for your intended purpose. 16KB clusters allow volumes of up to 64TB, which is also the limit for chkdsk and shadow copy, so you want to keep each volume no larger than that. So I might go with that, if you're going to have an NTFS drive.
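For anyone who wants to do that math themselves, the ceiling comes from NTFS addressing at most 2^32 clusters per volume, so the maximum volume size is roughly the cluster size times 2^32. A quick sketch (the 2^32 figure is the standard NTFS limit; the table it prints is just that multiplication):

```python
# Why cluster size caps NTFS volume size: NTFS addresses at most 2**32 clusters
# per volume, so max volume size ~= cluster_size * 2**32.
MAX_CLUSTERS = 2**32

for cluster_kb in (4, 8, 16, 32, 64):
    max_tb = cluster_kb * 1024 * MAX_CLUSTERS / 1024**4  # bytes -> TiB
    print(f"{cluster_kb:>2}KB clusters -> ~{max_tb:.0f} TB max NTFS volume")
```

That gives roughly 16TB at 4KB clusters, 64TB at 16KB, and 256TB at 64KB.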


  • 0
On 1/12/2019 at 9:19 PM, srcrist said:

The difference between fetching a single 10MB chunk and a single 20MB chunk on a 1gbps connection is a matter of fractions of a second. Not worth worrying about. CloudDrive is also capable of accessing partial chunks. It doesn't necessarily need to download the entire thing at a time. That is a separate setting. 

You can set your chunk cache size to as large as your system's memory can handle, and the cluster size is really only important with respect to the maximum size of the drive. You'll just need to do the math and decide what cluster size works for your intended purpose. 16KB clusters allow volumes of up to 64TB, which is also the limit for chkdsk and shadow copy, so you want to keep each volume no larger than that. So I might go with that, if you're going to have an NTFS drive.

Before I proceed and re-upload everything, I still have a couple of questions if you don't mind. Looking through some threads related to my issue, I've read that you once lost your data to corruption and had to re-upload everything; do you know why that happened, and do you have any suggestions on how to avoid it? Do you think full drive encryption is needed for my use case, or does it add unnecessary overhead? Is it possible to store the caches of two different drives on the same disk? I ask because when I make a new drive there is an option to format the disk where I want to store the cache.


  • 0
12 hours ago, Zanena said:

Before I proceed and re-upload everything, I still have a couple of questions if you don't mind. Looking through some threads related to my issue, I've read that you once lost your data to corruption and had to re-upload everything; do you know why that happened, and do you have any suggestions on how to avoid it? Do you think full drive encryption is needed for my use case, or does it add unnecessary overhead? Is it possible to store the caches of two different drives on the same disk? I ask because when I make a new drive there is an option to format the disk where I want to store the cache.

You've got a lot packed in there so I'll bullet them out one at a time:

 

  • I lost a ReFS drive. It wasn't because of CloudDrive; it was because of the infamous instability of ReFS in Windows 10 (the ability to create ReFS volumes has since been removed from most Windows 10 editions as of the Fall Creators Update). The filesystem was corrupted by a Windows update that tried to convert ReFS to a newer revision, and the drive became RAW. To mitigate this, simply do not use ReFS. It isn't necessary for a Plex setup, and there are ways to accomplish most of what it offers with NTFS and some skillful configuration. ReFS simply is not ready for prime time, and Microsoft, acknowledging as much, has pulled ReFS creation from most editions of Windows 10.
  • Is full drive encryption *necessary*? Probably not. But I suggest you use it anyway. It adds effectively no overhead, as modern processors have hardware support for AES encryption and decryption. The additional peace of mind is worth it, even if it isn't strictly necessary. I use it in my setup, in any case.
  • It is entirely possible to store two caches for different CloudDrives on the same disk, but it's probably unnecessary to actually use two CloudDrives in the longer term. I use one drive divided up into multiple volumes. That is: one CloudDrive drive partitioned into multiple 55TB volumes that are then placed into a DrivePool pool to be used as one large file system. I suggest you do the same. This way you can throttle your upload to comply with Google's 750GB/day limit (70-80mbps; I use 70 to leave myself some buffer if I need it), and manage your caching and overall throughput more easily (quick math on that limit follows this list). I actually had multiple drives and spent about six months migrating all the data to one large drive just because management was easier.
  • If you just need a drive to place a cache to copy from one CloudDrive to another, temporarily, one drive will be just fine. You can also use a relatively small cache for the older, source drive too. You don't need a large cache to just read data and copy it to another drive. The destination drive will, of course, need plenty of space to buffer your upload. 
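Since the 750GB/day limit comes up a lot, here is the unit conversion behind the ~70mbps figure. The 750GB/day number is Google's per-user upload cap; the rest is simple arithmetic (decimal gigabytes assumed):

```python
# Rough math behind the upload throttle figure mentioned above.
DAILY_CAP_GB = 750              # Google's per-user daily upload cap
SECONDS_PER_DAY = 24 * 60 * 60

sustained_mbps = DAILY_CAP_GB * 8 * 1000 / SECONDS_PER_DAY  # GB/day -> megabits/second
print(f"{DAILY_CAP_GB} GB/day spread evenly over 24h = ~{sustained_mbps:.1f} mbps")  # ~69.4 mbps
```

Which is why a throttle of about 70mbps lines a continuously uploading drive up with the daily cap.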

  • 0
24 minutes ago, srcrist said:

You've got a lot packed in there so I'll bullet them out one at a time:

 

  • I lost a ReFS drive. It wasn't because of CloudDrive; it was because of the infamous instability of ReFS in Windows 10 (the ability to create ReFS volumes has since been removed from most Windows 10 editions as of the Fall Creators Update). The filesystem was corrupted by a Windows update that tried to convert ReFS to a newer revision, and the drive became RAW. To mitigate this, simply do not use ReFS. It isn't necessary for a Plex setup, and there are ways to accomplish most of what it offers with NTFS and some skillful configuration. ReFS simply is not ready for prime time, and Microsoft, acknowledging as much, has pulled ReFS creation from most editions of Windows 10.
  • Is full drive encryption *necessary*? Probably not. But I suggest you use it anyway. It adds effectively no overhead, as modern processors have hardware support for AES encryption and decryption. The additional peace of mind is worth it, even if it isn't strictly necessary. I use it in my setup, in any case.
  • It is entirely possible to store two caches for different CloudDrives on the same disk, but it's probably unnecessary to actually use two CloudDrives in the longer term. I use one drive divided up into multiple volumes. That is: one CloudDrive drive partitioned into multiple 55TB volumes that are then placed into a DrivePool pool to be used as one large file system. I suggest you do the same. This way you can throttle your upload to comply with Google's 750GB/day limit (70-80mbps; I use 70 to leave myself some buffer if I need it), and manage your caching and overall throughput more easily. I actually had multiple drives and spent about six months migrating all the data to one large drive just because management was easier.
  • If you just need a drive to place a cache to copy from one CloudDrive to another, temporarily, one drive will be just fine. You can also use a relatively small cache for the older, source drive too. You don't need a large cache to just read data and copy it to another drive. The destination drive will, of course, need plenty of space to buffer your upload. 

Thanks for the in-depth answers. I was worried about the cache thing because I don't remember how I formatted the cache drive when I first created it (probably 4KB), and I was worried that reformatting it with a different cluster size would mess up the old cloud drive. For now I'm considering your third suggestion, but I'm not sure about using 16KB clusters, because in the CloudDrive manual the devs have stated that NTFS drives should be smaller than 20TB for optimal performance.

 

EDIT: By the way, I tried creating a new drive and for some reason CloudDrive only created the virtual disk without initializing it (I had to do that on my own). Any idea why that happened?


  • 0
1 minute ago, Zanena said:

Thanks for the in-depth answers. I was worried about the cache thing because I don't remember how I formatted the cache drive when I first created it (probably 4KB), and I was worried that reformatting it with a different cluster size would mess up the old cloud drive. For now I'm considering your third suggestion, but I'm not sure about using 16KB clusters, because in the CloudDrive manual the devs have stated that NTFS drives should be smaller than 20TB for optimal performance.

For the record, your cache size and drive can be changed by simply detaching and reattaching the drive. It isn't set in stone. You are given the option to configure it every time you attach the drive. 

As far as the drive size goes, you don't need optimal performance. Remember that CloudDrive can be configured for many different purposes. For some, a smaller chunk size might be better, for some no prefetcher might be better, for some a longer delay before uploading modified data might be better. It just depends on use case. Your use case is going to be very large files with a priority on read speed over seek time and write performance. 

That being said, I just looked, because I never remembered seeing any such disclaimer, and I don't see the section of the manual you're referring to. There is a disclaimer that clusters over 4KB can lead to certain application incompatibilities in Windows, and that Windows itself may be less efficient, but those are not important for you. Plex has no such issues, and, from experience, a larger cluster size is plenty efficient for this purpose. Remember that you're creating what is predominantly an archival drive, where data will be accessed relatively infrequently and in long, contiguous chunks. I can tell you from two years in an active production environment that larger clusters have caused no issues for me, and I haven't found a single application that hasn't worked perfectly with them. But it's ultimately your call.
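As a sanity check on the efficiency point, the main cost of larger clusters is slack space (on average, about half a cluster wasted per file), and for a media library that is negligible. A rough sketch, where the file count is an assumption meant to resemble a Plex library rather than a measurement:

```python
# Rough estimate of slack-space overhead from larger clusters: each file wastes
# about half a cluster on average. File count below is an assumption.
def slack_gb(file_count: int, cluster_kb: int) -> float:
    return file_count * (cluster_kb / 2) / 1024**2  # wasted KB -> GB

LIBRARY_FILES = 20_000  # assumed: movies, episodes, subtitles, artwork, etc.
for cluster_kb in (4, 16, 64):
    print(f"{cluster_kb:>2}KB clusters, ~{LIBRARY_FILES:,} files: ~{slack_gb(LIBRARY_FILES, cluster_kb):.2f} GB of slack")
```

Even at 64KB clusters that is well under 1GB of waste on a library measured in terabytes.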


  • 0
14 minutes ago, srcrist said:

For the record, your cache size and drive can be changed by simply detaching and reattaching the drive. It isn't set in stone. You are given the option to configure it every time you attach the drive. 

As far as the drive size goes, you don't need optimal performance. Remember that CloudDrive can be configured for many different purposes. For some, a smaller chunk size might be better, for some no prefetcher might be better, for some a longer delay before uploading modified data might be better. It just depends on use case. Your use case is going to be very large files with a priority on read speed over seek time and write performance. 

That being said, I just looked, because I never remembered seeing any such disclaimer, and I don't see the section of the manual you're referring to. There is a disclaimer that clusters over 4KB can lead to certain application incompatibilities in Windows, and that Windows itself may be less efficient, but those are not important for you. Plex has no such issues, and, from experience, a larger cluster size is plenty efficient for this purpose. Remember that you're creating what is predominantly an archival drive, where data will be accessed relatively infrequently and in long, contiguous chunks. I can tell you from two years in an active production environment that larger clusters have caused no issues for me, and I haven't found a single application that hasn't worked perfectly with them. But it's ultimately your call.

This is what I was talking about: "In NTFS, cluster sizes larger than 4 KB may trigger inefficient I/O patterns in Windows. Therefore, creating drives larger than 16 TB with NTFS may not be optimal."

I knew about the possibility of changing the cache and drive size, but is the same true for the cluster size?


  • 0

No. You can't change the cluster size after you've formatted the drive. But, again, I mentioned that disclaimer: it won't affect you. You want the optimal settings for Plex, not the theoretical minimum and maximum for the drive. The only salient question for you is "will a larger cluster size negatively impact Plex's ability to serve numerous large video files on the fly, or the ability of other applications to manage a media library?" And the answer is no. It may have some theoretical inefficiencies for some purposes, but you don't care about those. They won't affect your use case in the least.


  • 0
5 minutes ago, srcrist said:

No. You can't change the cluster size after you've formatted the drive. But, again, I mentioned that disclaimer: it won't affect you. You want the optimal settings for Plex, not the theoretical minimum and maximum for the drive. The only salient question for you is "will a larger cluster size negatively impact Plex's ability to serve numerous large video files on the fly, or the ability of other applications to manage a media library?" And the answer is no. It may have some theoretical inefficiencies for some purposes, but you don't care about those. They won't affect your use case in the least.

That makes sense, but now my issue is that the disk I want to use for the cache has already been formatted with 4KB clusters. Would reformatting it to 64KB clusters create issues with the old CloudDrive?


  • 0
18 hours ago, srcrist said:

Oh, the format of the cache drive is irrelevant. Doesn't matter. It can cache any format of cloud drive. 

An NTFS drive with 4KB clusters can cache a CloudDrive with 16KB clusters just fine. All of my physical drives use 4KB clusters. 

Thanks, you've been very helpful. The only issue I have now is that for some reason CloudDrive won't mount the virtual disk after setup and I have to do it manually. I'm waiting for support to get back to me on this, as I don't want to risk uploading everything for nothing.


  • 0
4 hours ago, Zanena said:

Thanks, you've been very helpful. The only issue I have now is that for some reason CloudDrive won't mount the virtual disk after setup and I have to do it manually. I'm waiting for support to get back to me on this, as I don't want to risk uploading everything for nothing.

Are you using Windows Server by any chance? I think that's a Windows thing, if you are. I don't believe Windows Server initializes new drives automatically. You can do it in a matter of seconds from the Disk Management control panel. Once the disk is initialized it will show up as normal from that point on. 

I'm sure Christopher and Alex can get you sorted out via support, in any case. 


  • 0

Hello srcrist,

I saw you mentioned in this thread that it is optimal to create a large drive partitioned into multiple 55TB volumes, which are then pooled together using DrivePool for this use case. I have a lot of uploaded data now and am running out of space on my CloudDrive, so I am looking to figure out a proper long-term solution without having to re-upload all my data. How did you set this up? My current allocation size is 32KB (for a max drive of 128TB), and I do have DrivePool. Would I have to reformat my drive and delete all the data to properly configure this?

 


  • 0
1 hour ago, srcrist said:

Nope. No need to change anything at all. Just use DrivePool to create a pool using your existing CloudDrive drive, expand your CloudDrive using the CloudDrive UI, format the new volume with Windows Disk Management, and add the new volume to the pool. You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. You will also want to disable most if not all of DrivePool's balancers because a) they don't matter, and b) you don't want DrivePool wasting bandwidth downloading and moving data around between the drives. 

So let's say you have an existing CloudDrive volume at E:.

  • First you'll use DrivePool to create a new pool, D:, and add E:
  • Then you'll use the CloudDrive UI to expand the CloudDrive by 55TB. This will create 55TB of unmounted free space.
  • Then you'll use Disk Management to create a new 55TB volume, F:, from the free space on your CloudDrive.
  • Then you go back to DrivePool, add F: to your D: pool. The pool now contains both E: and F:
  • Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool.
  • Then just navigate to D: and all of your content will be there, as well as plenty of room for more.
  • You can now point Plex and any other application at D: just like E: and it will work as normal. You could also replace the drive letter for the pool with whatever you used to use for your CloudDrive drive to make things easier. 
  • NOTE: Once your CloudDrive volumes are pooled, they do NOT need drive letters. You're free to remove them to clean things up, and you don't need to create volume labels for any future volumes you format either. 

Thanks! This worked perfectly. I was wondering if there was any performance hit from doing this? 


  • 0

Thanks for your help. It seems to be working really well so far. 

To quickly contribute to the topic as well, I use the settings below on a 1000/40 connection:

  • Download Threads: 10 (not sure here)
  • Upload Threads: 5  (not sure here)
  • No background I/O
  • No Download Throttling
  • Upload Throttling: 30mbps 
  • Upload Threshold: 1 MB or 5 Minutes
  • Minimum Download Size: 50 MB (not sure about this)
  • Prefetch Trigger: 1 MB (not sure here, have some small files like books/comics, so I thought I should put a smaller value here. Also, maybe the video would start sooner)
  • Prefetch Amount: 300 MB (not sure here)
  • Prefetch Time: 10s (not sure here)

Basically, I am not sure if the prefetch is optimally configured.


  • 0

I wouldn't make your minimum download size any larger than a single chunk. There really isn't any point in this use-case, as we can use smart prefetching to grab larger chunks of data when needed. 

Your prefetcher probably needs some adjustment too. A 1MB trigger in 10 seconds means that it will grab 300MB of data every time an application requests 1MB or more within 10 seconds, which is basically all the time, and is already covered by a 20MB minimum download with a 20MB chunk size. Instead, change the trigger to 20MB and leave the window at 10 seconds. That is about 16mbps, or the rate of a moderate-quality 1080p MKV encode. This will engage the prefetcher for video streaming but let it rest if you're just looking at smaller files like EPUBs or PDFs. We really only need to engage the prefetcher for higher-quality streams here, since a 1gbps connection can grab smaller files in seconds regardless.

A prefetch amount of 300MB is fine with a 1000mbps connection. You could drop it, if you wanted to be more efficient, but there probably isn't any need with a 1gbps connection. 
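If it helps to see the trigger as a bandwidth threshold, here is the arithmetic behind that 16mbps figure. The values are the ones discussed in this thread, not defaults pulled from CloudDrive:

```python
# A prefetch trigger of N MB over a window of W seconds engages the prefetcher
# for any read pattern faster than roughly N*8/W mbps.
def trigger_rate_mbps(trigger_mb: float, window_s: float) -> float:
    return trigger_mb * 8 / window_s

for trigger_mb in (1, 20):
    print(f"{trigger_mb:>2}MB trigger / 10s window = reads above ~{trigger_rate_mbps(trigger_mb, 10):.1f} mbps")
# 1MB/10s  -> ~0.8 mbps: almost any access trips it (metadata, books, thumbnails)
# 20MB/10s -> ~16 mbps: roughly a moderate-bitrate 1080p stream
```

So a 20MB trigger lets small-file reads pass through untouched while still kicking in for sustained video playback.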


  • 0
On 11/28/2019 at 6:19 PM, red said:

Just out of interest, why do you split the Cloud drive into multiple volumes?

The main reason is that Windows cannot run various fixes and maintenance tools (e.g. chkdsk) on volumes larger than 64TB. It cannot mount a partition larger than 128TB either. Some people are also concerned that, historically, various partitions got corrupted due to outages at Google, so by limiting the size of any individual partition you also limit the potential losses.


  • 0
On 1/22/2019 at 6:35 PM, srcrist said:

"You'll want to MOVE (not copy) all of the data that exists on your CloudDrive to the hidden directory that DrivePool creates ON THE SAME DRIVE, and that will make the content immediately available within the pool. [...] So let's say you have an existing CloudDrive volume at E [...] Now you'll want to navigate to E:, find the hidden directory that DrivePool has created for the pool (ex: PoolPart.4a5d6340-XXXX-XXXX-XXXX-cf8aa3944dd6), and move ALL of the existing data on E: to that directory. This will place all of your existing data in the pool."

 

@srcrist Thanks for your very detailed post. I'm reaching the limit of my first 16TB (Google Drive) volume, and your setup looks more streamlined than mounting more and more separate CloudDrive volumes. I'm uncertain about the part I quoted above and am very afraid of making a mistake that might nuke my existing CloudDrive volume. I have slow upload speed; it took me months to upload 16TB.

How do you accomplish the above? Can I move the CloudPart directory of my initial CloudDrive volume into the DrivePool folder?


  • 0
On 6/4/2020 at 10:03 AM, otravers said:

@srcrist Thanks for your very detailed post. I'm reaching the limit of my first 16TB (Google Drive) volume, and your setup looks more streamlined than mounting more and more separate CloudDrive volumes. I'm uncertain about the part I quoted above and am very afraid of making a mistake that might nuke my existing CloudDrive volume. I have slow upload speed; it took me months to upload 16TB.

How do you accomplish the above? Can I move the CloudPart directory of my initial CloudDrive volume into the DrivePool folder?

You're just moving data at the file system level into the PoolPart folder on that volume. Do not touch anything in the CloudPart folders on your cloud storage. Everything you need to move can be moved with Windows Explorer or any other file manager. Once you create a pool, DrivePool will create a PoolPart folder on that volume, and you just move the data from that volume into that folder.


  • 0
8 hours ago, srcrist said:

You're just moving data at the file system level into the PoolPart folder on that volume.

 

Thanks! It sounds like this is handled without needing to re-upload these files, right?

I'm still on the fence because I don't know that I like the idea of losing control of the volume on which specific files/folders end up being stored.

