Slow write speed to drive.


Shidapu

Question

8 answers to this question


DrivePool just forwards the drive requests to the underlying drives, so slow performance while writing to the pool is likely just caused by poor underlying drive performance. But there's honestly no way to tell what might be causing the issue just from this information. I think the first step might be to provide all of your settings for both DrivePool and CloudDrive, as well as an account of the hardware you're using for the cache and the underlying pool. 
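(If it helps with gathering that, here's a quick sketch, assuming Python with the third-party psutil package installed, that lists each volume's filesystem and capacity for a report like this:)

import psutil

# List every mounted volume with its filesystem and capacity.
for part in psutil.disk_partitions():
    try:
        usage = psutil.disk_usage(part.mountpoint)
    except PermissionError:
        continue  # skip inaccessible devices, e.g. empty card readers
    print(f"{part.device} {part.fstype:6} "
          f"{usage.total / 1e12:6.2f} TB total, "
          f"{usage.free / 1e12:6.2f} TB free")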


24 minutes ago, srcrist said:

DrivePool just forwards the drive requests to the underlying drives, so slow performance while writing to the pool is likely just caused by poor underlying drive performance. …

Okay.

8 local disks, 10TB each, formatted with a 64KB cluster size.
1 SSD, 120GB, connected via USB, acting as a cache disk with the SSD Optimizer plugin.
1 CloudDrive at 256TB on a Google Drive Business plan (unlimited).

In DrivePool I've added the CloudDrive to my pool.

[Screenshots of the DrivePool and CloudDrive settings.]



Well, the first problem is that you allow only the CloudDrive volume to hold duplicated data, while the other volumes may hold only unduplicated data. That prevents your setup from duplicating anything to the CloudDrive volume at all (none of the other drives can hold data that is duplicated on the CloudDrive volume). So definitely make sure that all of the volumes you want to duplicate can hold duplicated data.

Second, before you even start this process, I would caution you against using a single 256TB NTFS volume for your CloudDrive: any volume over 60TB exceeds the size limit for Volume Shadow Copy and, therefore, chkdsk as well. That is, a volume that large cannot be repaired by chkdsk in the event of file system corruption, and will effectively accumulate corruption over time. So you might consider a CloudDrive with multiple partitions of less than 60TB apiece.
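(For example, 256TB split evenly five ways is 256 / 5 = 51.2TB per volume, which keeps each one comfortably under the 60TB ceiling.)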

That being said, NEITHER of these things should have any impact on the write speed to the drive. The pool should effectively be ignoring the CloudDrive altogether, since it cannot duplicate data to the CloudDrive, and only the other drives can accept new, unduplicated data. The SSD balancer means that all NEW data should be written to the SSD cache drive first, so I would look at the performance of that underlying drive. Maybe even try disabling the SSD balancer temporarily and see how performance is if that drive is bypassed, and, if it's better, start looking at why that drive might be causing a bottleneck.
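(For a quick sanity check of raw sequential write speed outside of DrivePool, a minimal sketch along these lines would do; the target path is hypothetical, so point it at whichever underlying drive you want to test:)

import os
import time

TARGET = r"E:\bench.tmp"   # hypothetical: a file on the drive under test
BLOCK = 64 * 1024 * 1024   # 64 MiB per write
COUNT = 16                 # ~1 GiB written in total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(TARGET, "wb", buffering=0) as f:
    for _ in range(COUNT):
        f.write(buf)
    os.fsync(f.fileno())   # force the data to disk so the timing is honest
elapsed = time.perf_counter() - start
os.remove(TARGET)
print(f"{BLOCK * COUNT / elapsed / 1e6:.0f} MB/s sequential write")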

What sort of drive is your CloudDrive cache drive, and how is that cache configured? Also, what are the CloudDrive settings? Chunk size? Cluster size? Encryption? Etc.


32 minutes ago, srcrist said:

Well, the first problem is that you allow only the CloudDrive volume to hold duplicated data, while the other volumes may hold only unduplicated data. …

Thanks for the quick and informative response!

I guess I'm about to remove that 256TB disk then, because of the issues with it; I had no idea.

The performance of the SSD is great until it fills up and has to wait for my slow transfers to the archive to finish. :D
Yes, I have tried removing the SSD to see if the speed differs, and no, the write speed doesn't change; it's still "capped" at my upload speed. :(

Just noticed that running an SSD over a USB cable is not that great...

[Screenshot of the USB SSD's benchmark results.]

I'm not duplicating a whole volume; I'm duplicating folders.

[Screenshot of the per-folder duplication settings.]

 

CloudDrive is configured with encryption, 64KB cluster size, and a 10MB chunk size, I guess; that's the standard, I didn't change it.

 

[Screenshot of the CloudDrive configuration.]

 

So you mean this shouldn't be possible? What is it duplicating? XD I'm not so sure anymore.



I'm on my way to removing the Gdrive and creating a smaller one for the essential stuff only.
Gonna keep my business plan, just in case.

Is there any other software that can do what I want: back up directories without sacrificing write performance?

Too bad I can't duplicate the way I wanted to; I don't really feel like creating 2 pools. I need 1 big pool for my stuff on Emby.

Thanks for helping out.



Removed everything, but I still have like 4TB of duplicated files locally on my drives.

How can I delete the duplicated stuff that's just bogging up storage space?
Edit: Short answer -> Re-measure.

[Screenshots of the pool's measurements.]

Created a new DrivePool with 2 CloudDrives at 50TB each: data duplication 1x, encrypted, 16KB cluster size, 20MB chunk size.

[Screenshot of the new pool's configuration.]

 

So now I need software that can auto-upload stuff from a directory on D: to my new pool on a schedule.

I'm guessing DrivePool can't do that?
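(For what it's worth, a one-way mirror on a schedule is easy enough to script yourself; here's a minimal sketch, assuming Python, with hypothetical source and pool paths, that could be run nightly via Windows Task Scheduler:)

import shutil
from pathlib import Path

SRC = Path(r"D:\Media")        # hypothetical source directory
DST = Path(r"P:\MediaBackup")  # hypothetical destination inside the pool

def mirror(src: Path, dst: Path) -> None:
    # Copy anything that is new or has changed since the last run.
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif (not target.exists()
              or item.stat().st_mtime > target.stat().st_mtime):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)  # copy data and timestamps

if __name__ == "__main__":
    mirror(SRC, DST)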


4 hours ago, Shidapu said:

So now I need software that can auto-upload stuff from a directory on D: to my new pool on a schedule. …

In theory, the balancer settings can facilitate that, actually.

https://stablebit.com/Support/DrivePool/2.X/Manual?Section=Balancing%20Settings


16 hours ago, Christopher (Drashna) said:

In theory, the balancer settings can facilitate that, actually. …

Sweet, I'd like to know more about how to do that. From what I've read on the forums, combining 2 pools, local and CloudDrive, plus some additional configuration, should make it possible to achieve what I want.

Gonna have to dive deeper into this.

Edit: Went with other software to do my backup to Google.
But a way to do it more efficiently with CloudDrive would be greatly appreciated.

