
Desani

Members
  • Posts: 16
  • Joined
  • Last visited
  • Days Won: 4

Desani last won the day on May 3, 2017

Desani had the most liked content!

Desani's Achievements: Member (2/3)

Reputation: 12

  1. I just ended up downgrading from 1.1.2.1178 back to the Beta 1145 due to persistent issues where sometimes it would upload fine, and other times it would show this error and sit in limbo, not doing anything.
  2. If you would like to keep them separated (and I don't really see a benefit to adding the CloudDrive to the pool), then I would recommend a really nice, lightweight program called SyncFolders. It will do everything you'd like and gives you a few options, whether you want some kind of backup or just a 1-to-1 sync. The good news is that it is completely free, and I have been using it for years. For a scripted alternative, see the sketch below. http://www.syncfolders.elementfx.com/ Thanks, Desani
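For anyone who prefers a script to a GUI tool, here is a minimal sketch in Python of the one-way 1-to-1 sync that SyncFolders performs; the source and destination paths are hypothetical placeholders:

```python
import filecmp
import shutil
from pathlib import Path

SRC = Path(r"D:\Media")             # hypothetical source folder
DST = Path(r"E:\CloudDrive\Media")  # hypothetical destination folder

def mirror(src: Path, dst: Path) -> None:
    """One-way 1-to-1 sync: make dst an exact copy of src."""
    dst.mkdir(parents=True, exist_ok=True)
    src_names = {p.name for p in src.iterdir()}
    # Delete anything in dst that no longer exists in src.
    for p in dst.iterdir():
        if p.name not in src_names:
            if p.is_dir():
                shutil.rmtree(p)
            else:
                p.unlink()
    # Copy new or changed entries from src.
    for p in src.iterdir():
        target = dst / p.name
        if p.is_dir():
            mirror(p, target)
        elif not target.exists() or not filecmp.cmp(p, target, shallow=True):
            shutil.copy2(p, target)

mirror(SRC, DST)
```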
  3. I would do the following in this case:
     1. Add both drives to a new drive pool with x2 duplication on the entire pool. The drive that already has the 1 TB of data on it will show up as having 1 TB of unduplicated data in the pool.
     2. In Windows, turn on hidden folders and move the 1 TB of data you already have on the drive into the new hidden PoolPart folder, with whatever new structure you want. (A scripted sketch of this step follows below.)
     3. DrivePool should now recognize that there is data on one volume but not the other that needs to be duplicated, and you can "Re-measure Duplication". DrivePool will then start copying the 1 TB of existing data over to the second CloudDrive. During this process all of the data will remain accessible through the DrivePool drive, even though it is currently on only one drive.
     4. Point Plex at the new library location and let it scan for the media on the DrivePool drive.
     One thing to note: when I set up a new drive like this and add it to a pool with existing data, I like to set the cache size to a fixed amount (non-expanding) so that DrivePool does not attempt to fill my entire drive when copying over the 1 TB of data.
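A hedged sketch of step 2 in Python, assuming DrivePool's usual hidden PoolPart.<GUID> folder at the drive root; the drive letter and folder names are placeholders for this example:

```python
import shutil
from pathlib import Path

DRIVE_ROOT = Path("F:/")         # hypothetical: the drive holding the 1 TB
DATA_DIR = DRIVE_ROOT / "Media"  # hypothetical: the existing data folder

# DrivePool stores pooled files in a hidden folder named PoolPart.<GUID>
# at the root of each pooled drive; locate it instead of hard-coding it.
pool_part = next(DRIVE_ROOT.glob("PoolPart.*"), None)
if pool_part is None:
    raise SystemExit("No PoolPart folder found; is the drive in a pool?")

# Move the existing data inside the PoolPart folder, then use
# "Re-measure Duplication" in the DrivePool UI (step 3 above).
shutil.move(str(DATA_DIR), str(pool_part / DATA_DIR.name))
```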
  4. Browsing social media and other sources, I am seeing more and more a use case of StableBit CloudDrive in conjunction with Plex as one of the recommended ways to have unlimited storage as a back end for media libraries. I personally have been using it for about 5 months, after testing for 1 month, and have loved the results. You have to sacrifice some upfront load time and seek time, but the result is unlimited storage.
     What I am requesting, because I see this coming up more and more, is a setting for CloudDrive, and possibly even DrivePool, that would automatically fine-tune the performance of the drive/pool for use with Plex. I see many different recommended setup settings being thrown around by users who don't fully understand what the settings actually do for performance. As the creators of the software, you have more insight into which settings to actually use, for each provider, to maximize performance for the Plex use case, instead of power users hunting down and tweaking their drives using information on the internet that could negatively impact them.
     What I propose is a checkbox on the I/O Performance window that says "Use recommended settings for Plex", which would automatically fill in the boxes with the settings that work best for a cloud drive with large media libraries. That would take a lot of the guesswork out for new users looking to set up a cloud drive for Plex, ensure they are not negatively affecting performance in their initial testing phase of the product, and could convert more demo users to sales. This checkbox could also change settings that the user doesn't traditionally have access to, if needed, so there is even more incentive to use the "recommended settings".
     You could also take this into consideration in the wizard for creating a new drive. I would have loved a checkbox when I initially created my drives that said "This drive will be used with Plex", which would automatically set the chunk size for the provider to maximum and set the drive size to the largest sector size for easy drive expansion in the future, without having to re-create the drive. Please let me know if you have any questions. Thanks, Desani
     EDIT: You could also continually optimize these settings in future releases as different options or better settings become available as the product matures, ensuring that the settings I have on the drive today don't go stale as the back end of the program changes.
  5. This is exactly how you should be handling redundancy if you would like it. If you copy the data yourself to a secondary account, it is an entirely manual process, and CloudDrive will not auto-mount the drive in the new location if your main one fails. Instead, attach multiple drives using CloudDrive to the different accounts/providers and create one drive using DrivePool. Use the duplication options in DrivePool to turn on duplication for the entire volume so the data is always stored in both locations. This way, if one drive fails or is having technical issues, your users on Plex won't even be able to tell there is an issue, because you will still be able to read from the DrivePool drive until the second drive comes online.
     Personally, I have the following set up for redundancy: 3 cloud drives (2x Google Cloud unlimited drives, 1x Amazon Cloud Drive), each one mounted as a 100 TB volume and added to one DrivePool drive. I have the pooled drive set at x3 duplication on the entire volume.
     You just have to remember: the more cloud drives you have, the larger the upload pipe you need, because when you add a file to the pool it will begin uploading to all three drives. You need to make sure you still have upload headroom left for anyone watching Plex; the numbers below illustrate the point. Thanks, Desani
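To put rough numbers on the upload-pipe point, a back-of-the-envelope calculation; all figures here are hypothetical placeholders:

```python
# Upload budget for an x3 duplicated pool: every new byte written to the
# pool is uploaded once per cloud drive, on top of Plex remote streams.
duplication = 3       # copies per file (one upload per cloud drive)
ingest_mbps = 20      # hypothetical rate new data is added, in Mbit/s
plex_streams = 2      # hypothetical concurrent remote Plex viewers
stream_mbps = 8       # hypothetical bitrate per remote stream, in Mbit/s

needed = duplication * ingest_mbps + plex_streams * stream_mbps
print(f"Sustained upload needed: {needed} Mbit/s")  # 3*20 + 2*8 = 76
```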
  6. Is the approval still pending with Amazon, or is there no open request at this point, with approval something you will try to get at some time in the future?
  7. Yup, it will work fine. DrivePool won't care whether they are encrypted or not; it will just pool them. Use CloudDrive to decide whether you would like them encrypted, and it will all happen in the background before the data is uploaded. The OS and DrivePool will just see a normal drive.
  8. Hi Christopher, When you create the cloud drive you are asked if you would like to make it an encrypted volume. Choose this and you will not have to worry about encryption after the fact; it all happens behind the scenes. Also, when you create a pool, I believe it is automatically set up as a local drive. So your steps would be: Create (x4) Google cloud drives with encryption. Create (x1) drive pool with all of the cloud drives and set the duplication on the pool to x4. After that you will have a pooled drive, and any file you put on there will be duplicated across all of the Google drives.
  9. Hi Christopher, I believe that while I was testing different versions of the BETA, it is possible the Google drive had to run a recovery. I know there was an issue a few times where it prevented the server from rebooting and I had to manually shut it down, causing a recovery to run on the drive after the server was powered back on. The modified date had stayed consistent, and that is why it did not flag a CRC check on the file. I did a scan of all files modified in the relevant time frame (a sketch of such a scan follows below) and found a few more files that had an issue, which leads me to believe this was all related to a single event last year in which a drive recovery may have run. Thanks for the update. I will restore the files from the working drive and carry on as normal. Is there currently a way of running a CRC check on the files on the cloud drive without having to re-download the files to the local cache? I would like to run a check on all the files, just to make sure there isn't anything I am missing.
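For reference, the kind of modified-time scan described above can be scripted along these lines; the pool drive letter and date window are hypothetical placeholders:

```python
from datetime import datetime
from pathlib import Path

ROOT = Path("P:/")                        # hypothetical pool drive letter
START = datetime(2016, 5, 1).timestamp()  # hypothetical suspect window
END = datetime(2016, 5, 8).timestamp()

# Print every file whose last-modified time falls inside the window,
# so each can be spot-checked against the known-good duplicate.
for path in ROOT.rglob("*"):
    if path.is_file() and START <= path.stat().st_mtime <= END:
        print(path)
```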
  10. I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives, x3 duplication for redundancy. Drive 1: 100 TB Amazon Cloud Drive; Drive 2: 100 TB GDrive; Drive 3: 100 TB GDrive.
     I went to access a media file on the pooled drive and was unable to play it. The file appeared corrupt. At first I thought it might have gotten corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show; all TV episodes for the one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud. I manually went into each drive in the pool, found the file, and downloaded it. What I found was that the file was corrupt and would not play on both GDrive volumes, but worked properly off of the ACD volume. I believe when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the error in the file, thus duplicating the error to the second GDrive.
     My question is: how is the file in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes on both GDrives, and it proceeded to mirror the file; it is now working as expected and plays without issue. I would like to be able to tell if there are more shows like this and correct the issue before it becomes unrecoverable (a sketch of such a check follows below). DrivePool should be able to see that files are in an inconsistent state and perhaps even prompt me for which version I would like to keep and mirror to the other drives. I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.
     OS: Windows Server 2012 R2 x64; CloudDrive version: 10.0.0.842; DrivePool version: 2.2.0.740
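Until DrivePool can flag inconsistent duplicates itself, a consistency check can be sketched by hashing the same relative path inside each pooled drive's hidden PoolPart folder and reporting mismatches. This is only a sketch: the drive letters are placeholders, and it assumes the standard PoolPart.<GUID> layout at each drive root.

```python
import hashlib
from pathlib import Path

# Hidden PoolPart.<GUID> folders on the three pooled drives
# (drive letters are hypothetical; the GUID suffix varies per drive).
ROOTS = [next(Path(d).glob("PoolPart.*")) for d in ("D:/", "E:/", "F:/")]

def sha256(path: Path) -> str:
    """Hash a file in 1 MiB chunks to keep memory use flat."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Walk one drive's copy and compare each file against the other drives.
base = ROOTS[0]
for path in base.rglob("*"):
    if not path.is_file():
        continue
    rel = path.relative_to(base)
    copies = [root / rel for root in ROOTS if (root / rel).is_file()]
    if len({sha256(c) for c in copies}) > 1:
        print(f"MISMATCH: {rel}")
```

Note that hashing reads every byte, so on a cloud drive the data is pulled through the local cache; this makes it practical for spot checks rather than a full 100 TB sweep.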
  11. I have 3 cloud drives that are all pooled together and set at x3 duplication for the entire pool, so each drive has the same data. Drive 1: 100 TB ACD; Drive 2: 100 TB GDrive; Drive 3: 100 TB GDrive. What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads go to the two Google drives, as they have much better access times and speed, and avoid using the ACD, which is there just as another level of redundancy. Ideally this would not be needed if DrivePool were able to read from all the drives at the same time for the file being accessed. Please let me know if this is a possibility. Thanks
  12. The problem is still occurring. I will download and apply the latest beta build now and let you know if it makes a difference. Edit 1: The updated beta version has been applied. It is currently working on connecting the Amazon cloud drive, and when that is done I assume it will do the same for the two GDrive accounts. I will update you when it has completed. Edit 2: All drives have mounted successfully, the issue with the 3rd drive appears to be gone, and it is now successfully uploading the changes it was waiting on earlier. I will keep an eye on it and start adding some more data to the pool to make sure all is well. I will add another post to this thread if I run into any more of the same issues, but right now I would consider it solved.
  13. I recently upgraded to the latest beta, 10.0.0.834, and I am now having issues uploading to one of the Google-attached drives on my system. I keep getting an error: "Object reference not set to an instance of an object." I have a total of three drives on this system: 1 ACD and 2 GDrive. The Amazon drive and the other Google drive don't appear to have any issues; it is only the 3rd drive, which I believe was created with an earlier version. I have attempted to re-authorize the drive and I am still having the same issues.
  14. Upgraded and it is now cleanly shutting down and restarting in about 10 minutes. Thanks for the advice!