Search the Community
Showing results for tags 'metadata'.
Hi there, I've been using DrivePool for a couple of years now and it's been really stable and generally great. However, over the past few months I have noticed an issue related to file watches, specifically with the Subsonic server running on my machine. The scenario is as follows: I have 7 data disks in my pool, and most of my music collection is placed only on disk 1, as per a file-placement rule (see attached screenshot). Subsonic (which is a Java application) has code that keeps a watch on the filesystem so that when new music files are added into my music…
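To illustrate the kind of file watching described above: Subsonic itself uses Java facilities internally, but the general idea can be sketched as a snapshot-and-compare poll. This is a minimal, hypothetical sketch (the function names `snapshot` and `detect_new_files` are mine, not Subsonic's), not Subsonic's actual implementation:

```python
import os

def snapshot(root):
    """Map each file path under root to its (size, mtime) pair."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished between listing and stat
            state[path] = (st.st_size, st.st_mtime)
    return state

def detect_new_files(before, after):
    """Return paths present in the `after` snapshot but not in `before`."""
    return sorted(set(after) - set(before))
```

A watcher would take a snapshot, sleep, take another, and feed `detect_new_files` results to the media scanner. Real watchers (Java's `WatchService`, inotify) get change events from the kernel instead of polling, which is exactly where a pooling filesystem layer can behave differently from a plain disk.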
I have been troubleshooting an issue with erratic timestamps when attempting to back up my pool using Bvckup2. Because timestamps differ between the backup source and the backup destination, files are needlessly copied even though they are actually identical. Only a small percentage of files in my pool are affected, but some are huge (>50 GB), so this eats up backup disk space quickly. For some of these files the timestamps are incorrect but consistent; for others, the timestamps change almost every time the file is accessed or queried. The Bvckup2 thread below contains all of my troubleshooting…
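The needless copies follow from how incremental backup tools typically decide what changed: compare size and modification time, and copy on any mismatch. A hedged sketch of that check (the helper name `needs_copy` and the tolerance value are my assumptions, not Bvckup2's actual logic) shows why a drifting mtime alone is enough to retrigger a >50 GB copy:

```python
import os

MTIME_TOLERANCE = 2.0  # seconds; absorbs coarse filesystem timestamp granularity

def needs_copy(src, dst, tolerance=MTIME_TOLERANCE):
    """Mimic a backup tool's change check: copy if the destination is
    missing, differs in size, or its mtime drifts beyond the tolerance."""
    if not os.path.exists(dst):
        return True
    s, d = os.stat(src), os.stat(dst)
    if s.st_size != d.st_size:
        return True
    return abs(s.st_mtime - d.st_mtime) > tolerance
```

If the source's reported mtime changes on every query, no tolerance short of ignoring timestamps entirely will stop the recopies, which is why the erratic values matter more than the merely-wrong-but-stable ones.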
I'm currently planning to have 2 pools with a few drives each and no duplication on either. I'm wondering how backup and recovery can be accomplished with this setup. Specifically, I'd like to answer 2 questions: 1. How can I recover only the missing files (with the correct folder structure) from a backup in the event of a drive failure? (Programs, suggestions, and/or anecdotes welcome; I'm trying to find the optimal solution.) 2. How long can a secondary pool made for offline backup stay without being updated/connected? Will future updates make older pools unreadable, and if so…
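For question 1, the "restore only what's missing, keep the folder structure" step can be sketched in a few lines, independent of which backup tool produced the copy. This is a minimal sketch under the assumption that the backup mirrors the pool's layout; the function name `restore_missing` is hypothetical:

```python
import os
import shutil

def restore_missing(backup_root, pool_root):
    """Copy any file present in the backup but absent from the pool,
    recreating the relative folder structure. Existing pool files are
    left untouched."""
    restored = []
    for dirpath, _dirnames, filenames in os.walk(backup_root):
        rel = os.path.relpath(dirpath, backup_root)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(pool_root, rel, name)
            if not os.path.exists(dst):
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                restored.append(dst)
    return restored
```

Sync tools like robocopy or rsync can do the same thing with the right flags, but the point is that restoring into the pool's virtual drive lets DrivePool place the files on the surviving disks for you.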
I have a 10TB expandable drive hosted on Google Drive with a 2TB cache. Whenever I add a large amount of data to the drive (for example, I added 1TB yesterday), CloudDrive will occasionally stop all read and write operations to do "metadata pinning". This prevents Plex from doing anything while it does its thing, and it took over 3 hours yesterday. I don't want to disable metadata pinning, but I would like to be able to schedule it, if necessary, for 3AM or something like that. In the meantime, is there a drawback to having such a large local cache? Would it improve operations like metadata…
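As far as the post indicates, CloudDrive does not expose a scheduling option for metadata pinning, so anything like the "3AM window" idea would have to live in an external script. Purely as a hypothetical sketch of the timing half of such a scheduler (the helper `seconds_until` is mine; there is no CloudDrive API being called here):

```python
from datetime import datetime, timedelta

def seconds_until(hour, now=None):
    """Seconds from `now` until the next occurrence of `hour`:00 local time."""
    now = now or datetime.now()
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # today's window already passed
    return (target - now).total_seconds()
```

A wrapper could sleep for `seconds_until(3)` and then trigger whatever maintenance action is available; without a hook into CloudDrive itself, though, this only schedules the window, not the pinning.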