Desani

Members
  • Posts

    16
  • Joined

  • Last visited

  • Days Won

    4

Reputation Activity

  1. Like
    Desani got a reaction from bequbed in Cloud Drive Redundancy   
    I would do the following in this case:
     
    1. Add both drives to a new drive pool with x2 duplication on the entire pool. The drive that already has the 1 TB of data on it will show up as having 1 TB of unduplicated data in the pool.
    2. In Windows, turn on hidden folders and move the 1 TB of data you already have on the drive into the new hidden PoolPart folder, using whatever new structure you want.
    3. DrivePool should now recognize that there is data on one volume and not the other that needs to be duplicated, and you can "Re-measure Duplication". DrivePool will then start copying the 1 TB of existing data over to the second CloudDrive. During this process all of the data will be accessible through the DrivePool drive, even though it is currently only on one drive.
    4. Point Plex at the new library location and let it scan for the media on the DrivePool Drive
     
    One thing to note: when I set up a new drive like this and add it to a pool with existing data, I like to set the cache size to a fixed amount (non-expanding) so that DrivePool does not attempt to fill my entire drive when copying over the 1 TB of data.
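    Step 2 above can be sketched as a small script. The paths here are hypothetical placeholders (the real hidden folder DrivePool creates has a unique name like "PoolPart.<guid>", so check your own volume), and a plain drag-and-drop in Explorer accomplishes the same thing:

```python
import shutil
from pathlib import Path

# Hypothetical paths for illustration only -- substitute your own volume
# and the actual hidden PoolPart folder name DrivePool created.
SOURCE = Path(r"D:\OldMedia")
POOLPART = Path(r"D:\PoolPart.example")

def move_into_pool(source: Path, poolpart: Path) -> int:
    """Move every top-level item from the old layout into the PoolPart
    folder, skipping anything already there. Returns the number moved."""
    moved = 0
    for item in source.iterdir():
        dest = poolpart / item.name
        if dest.exists():
            continue  # already in the pool; leave it for manual review
        shutil.move(str(item), str(dest))
        moved += 1
    return moved
```

    Because the move happens on the same volume it is effectively a rename, so it is fast even for 1 TB of data; the actual duplication to the second drive only starts after the re-measure.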
  2. Like
    Desani got a reaction from danjames9222 in [Feature Request] Recommended settings for use with Plex   
    Browsing social media and other sources, I am seeing more and more a use case of StableBit Cloud Drive in conjunction with Plex as one of the recommended ways to have unlimited storage as a back end for media libraries.
     
    I personally have been using it for about 5 months after testing for 1 month and have loved the results. You have to sacrifice some upfront load time and seek time but the result is you have unlimited storage.
     
    What I am requesting, because I see this coming up more and more, is a setting for CloudDrive, and possibly even DrivePool, that would automatically fine-tune the performance of the drive/pool for use with Plex. I see many different recommended setup settings being thrown around by users that don't fully understand what the settings actually do for performance. As the creators of the software, you have more insight on which settings to actually use, for each provider, to maximize performance for the Plex use case, instead of power users hunting down and tweaking their drives using information from the internet that could negatively impact them.
     
    What I propose is a checkbox on the I/O Performance window that says "Use recommended settings for Plex", which would automatically fill in the boxes with the settings that work best for a cloud drive with large media libraries. That would take a lot of the guesswork out for new users looking to set up a cloud drive for Plex, ensure they are not negatively affecting performance in their initial testing phase of the product, and could convert more demo users to sales. This checkbox could also change settings that the user doesn't traditionally have access to, if needed, so there is even more incentive to use the "recommended settings".
     
    You could also take this into consideration in the wizard for creating a new drive. I would have loved a checkbox when I initially created my drives that said "This drive will be used with Plex", which would automatically set the chunk size for the provider to maximum and set the drive size to the largest sector size for easy drive expansion in the future, without having to re-create the drive.
     
    Please let me know if you have any questions.
     
    Thanks,
    Desani
     
    EDIT: You could also continually optimize these settings in future releases as different options or better settings become available as the product matures, ensuring that the settings I have on the drive today don't go stale as the back end of the program changes.
  3. Like
    Desani got a reaction from bequbed in Cloud Drive Redundancy   
    This is exactly how you should be handling redundancy if you would like it. If you copy the data yourself to a secondary account, it is an entirely manual process, and CloudDrive will not auto-mount the drive in the new location if your main one fails.
     
    Instead, attach multiple drives using CloudDrive to the different accounts/providers and create one drive using DrivePool. Use the duplication options in DrivePool to turn on duplication on the entire volume so the data is always stored in both locations. This way, if one drive fails or is having technical issues, your users on Plex won't even be able to tell there is an issue, because you will still be able to read from the DrivePool drive until the second drive comes back online.
     
    Personally I have the following set up for redundancy:
    3 cloud drives, as follows:
    2x Google Drive unlimited drives
    1x Amazon Cloud Drive
     
    Each one is mounted as a 100 TB volume and added to one DrivePool drive. I have the pooled drive set at x3 duplication on the entire volume. You just have to remember: the more cloud drives you have, the larger the upload pipe you need, because when you add a file to the pool, it will begin uploading to all three drives. You need to make sure you still have upload bandwidth left for anyone watching Plex.
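    The upload-pipe point can be made concrete with a little arithmetic (the numbers below are assumptions for illustration, not measurements):

```python
# Rough upload-bandwidth check for an Nx-duplicated pool. Every megabit
# written to the pool is uploaded once per cloud drive, and remote Plex
# viewers consume upload on top of that.

def required_upload_mbps(write_mbps: float, copies: int,
                         stream_mbps: float, streams: int) -> float:
    """Upload needed: each write goes out `copies` times, plus the
    bandwidth for however many remote streams are playing."""
    return write_mbps * copies + stream_mbps * streams

# Example: adding media at 20 Mbps to a 3x pool while two remote viewers
# each watch an 8 Mbps stream.
print(required_upload_mbps(write_mbps=20, copies=3, stream_mbps=8, streams=2))
# 76.0 Mbps
```

    The write traffic triples because of duplication, so a connection that comfortably handles one copy can saturate once the second and third drives start uploading.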
     
    Thanks,
    Desani
  4. Like
    Desani got a reaction from Christopher (Drashna) in DrivePool & CloudDrive combo   
    Yup, it will work fine.
     
    DrivePool won't care if they are encrypted or not; it will just pool them.
     
    Use CloudDrive to decide if you would like it encrypted or not, and it will all happen in the background before the data is uploaded. The OS and DrivePool will just see a normal drive.
  5. Like
    Desani got a reaction from Christopher (Drashna) in DrivePool & CloudDrive combo   
    Hi Christopher,
     
    When you create the cloud drive you are asked if you would like to make it an encrypted volume. Choose this and you will not have to worry about encryption after the fact; it will all happen behind the scenes.
     
    Also, when you create a pool, I believe it is automatically set up as a local drive.
     
    So your steps would be:
    Create (x4) Google cloud drives with encryption.
    Create (x1) drive pool with all of the cloud drives, and set the duplication on the pool to x4.
     
    After that you will have a pooled drive and any file you put on there will be duplicated across all of the google drives.
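    As a quick sanity check on capacity (illustrative numbers only): with x4 duplication every file lands on all four drives, so the usable space is the raw total divided by the duplication count:

```python
# Illustrative capacity math for pool-wide duplication (assumed sizes).

def usable_tb(drive_tb: float, drives: int, duplication: int) -> float:
    """Usable space when every file is stored `duplication` times."""
    return drive_tb * drives / duplication

# Four 100 TB cloud drives at x4 duplication hold 100 TB of unique files.
print(usable_tb(drive_tb=100, drives=4, duplication=4))  # 100.0
```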
  6. Like
    Desani reacted to Christopher (Drashna) in Error - Files not constistant across pooled drives   
    Did StableBit CloudDrive indicate that it was running recovery on your drives? 

    If so, there was an issue where the data was getting corrupted by Google (it's a long explanation).  The newest version attempts to automatically repair that data, but it may not be 100%. 
     
    In this case, if your drives did get repaired, it *could* cause this. 
     
    That said, StableBit DrivePool does write identical files, and tries to make sure of this.
    If/when there is an issue, it will be flagged in the UI.  However, this relies on accessing the file, in most cases.  It will check the file modify date, and if that doesn't match, it will run a CRC check on the files.
     
    However, given the above issue, the dates may have been fine, so it didn't flag the files for a CRC check.
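    For illustration only, the check described above (modify date first, contents second) might look roughly like this from the outside. This is a sketch of the logic, not StableBit's actual code, and it also shows why matching dates can let a corrupt copy slip through unflagged:

```python
import os
import zlib

def crc32_of(path: str, chunk: int = 1 << 20) -> int:
    """Stream a file through CRC-32 (a stand-in for whatever checksum
    the real consistency check uses)."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc

def duplicates_consistent(path_a: str, path_b: str) -> bool:
    """Mimic the described order: compare modify times first, and only
    fall back to a CRC of the contents when the times disagree."""
    if os.path.getmtime(path_a) == os.path.getmtime(path_b):
        return True  # matching dates: assumed consistent, CRC skipped
    return crc32_of(path_a) == crc32_of(path_b)
```

    The early return on matching dates is the gap: if corruption happened without touching the timestamp, the CRC never runs, which matches the behavior described above.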
     
    This should not occur in the future, but if it does, then please let me know, right away. 
  7. Like
    Desani got a reaction from KiaraEvirm in Error - Files not constistant across pooled drives   
    I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives for 3x duplication for redundancy.
     
    Drive 1: 100 TB Amazon Cloud Drive
    Drive 2: 100 TB Gdrive
    Drive 3: 100 TB Gdrive
     
    I went to access a media file on the pooled drive and I was unable to play it. The file appeared corrupt. At first I thought that it may have gotten corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show; all TV episodes for that one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud.
     
    I manually went into each drive in the pool, found the file, and downloaded it. What I found was that the file was corrupt and would not play on both GDrive volumes, but the file worked properly off of the ACD volume.
     
    I believe when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the error with the file, thus duplicating the error to the second GDrive.
     
    My question is, how is the file in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes on both GDrives and it proceeded to mirror the file, and it is now working as expected and plays without issue. I would like to be able to tell if there are more shows like this and correct the issue before it becomes unrecoverable. DrivePool should be able to see if the files are in an inconsistent state and perhaps even prompt me for which version I would like to keep and mirror to the other drives.
     
    I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.
     
    OS: Windows Server 2012 R2 x64
    CloudDrive Version: 10.0.0.842
    DrivePool Version: 2.2.0.740
  8. Like
    Desani got a reaction from KiaraEvirm in Feature Request - Assign drive priority for dupicated data   
    I have 3 cloud drives that are all pooled together and are set at duplication x3 for the entire pool so each drive has the same data.
     
    Drive 1: 100 TB ACD
    Drive 2: 100 TB GDrive
    Drive 3: 100 TB GDrive
     
    What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads go to the two Google Drives, as they have much better access times and speeds, and avoid using the ACD, as it is there just as another level of redundancy.
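    As a sketch of what such a priority setting might do (the folder names and weights below are hypothetical; nothing here is a real DrivePool option):

```python
import os

def pick_read_source(rel_path, weighted_roots):
    """Return the copy of `rel_path` on the lowest-weight (most
    preferred) drive that actually holds it; None if no copy exists."""
    for _weight, root in sorted(weighted_roots, key=lambda wr: wr[0]):
        candidate = os.path.join(root, rel_path)
        if os.path.exists(candidate):
            return candidate
    return None

# Hypothetical pool layout: prefer the two GDrive volumes, keep ACD last.
ROOTS = [
    (9, r"A:\PoolPart.acd"),
    (1, r"G:\PoolPart.gdrive1"),
    (1, r"H:\PoolPart.gdrive2"),
]
```

    A fallthrough like this would serve reads from the fast drives whenever they hold a good copy, while still letting the slow drive satisfy the request if the others are unavailable.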
     
    Ideally this would not be needed if DrivePool were able to read from all the drives at the same time for the file being accessed.
     
    Please let me know if this is a possibility.
     
    Thanks
  9. Like
    Desani got a reaction from Ginoliggime in Error - Files not constistant across pooled drives   
    I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives for 3x duplication for redundancy.
     
    Drive 1: 100 TB Amazon Cloud Drive
    Drive 2: 100 TB Gdrive
    Drive 3: 100 TB Gdrive
     
    I went to access a media file on the pooled drive and I was unable to play it. The file appeared corrupt. At first I thought that it may have gotten corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show; all TV episodes for that one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud.
     
    I manually went into each drive in the pool, found the file, and downloaded it. What I found was that the file was corrupt and would not play on both GDrive volumes, but the file worked properly off of the ACD volume.
     
    I believe when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the error with the file, thus duplicating the error to the second GDrive.
     
    My question is, how is the file in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes on both GDrives and it proceeded to mirror the file, and it is now working as expected and plays without issue. I would like to be able to tell if there are more shows like this and correct the issue before it becomes unrecoverable. DrivePool should be able to see if the files are in an inconsistent state and perhaps even prompt me for which version I would like to keep and mirror to the other drives.
     
    I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.
     
    OS: Windows Server 2012 R2 x64
    CloudDrive Version: 10.0.0.842
    DrivePool Version: 2.2.0.740
  10. Like
    Desani got a reaction from Ginoliggime in Feature Request - Assign drive priority for dupicated data   
    I have 3 cloud drives that are all pooled together and are set at duplication x3 for the entire pool so each drive has the same data.
     
    Drive 1: 100 TB ACD
    Drive 2: 100 TB GDrive
    Drive 3: 100 TB GDrive
     
    What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads go to the two Google Drives, as they have much better access times and speeds, and avoid using the ACD, as it is there just as another level of redundancy.
     
    Ideally this would not be needed if DrivePool were able to read from all the drives at the same time for the file being accessed.
     
    Please let me know if this is a possibility.
     
    Thanks
  11. Like
    Desani got a reaction from MikrotikOl in Google Drive Upload Error: Object Reference is not set to an Instance of an object   
    I recently upgraded to the latest beta, 10.0.0.834, and I am now having issues uploading to one of the attached Google Drives on my system.
     
    I keep getting an error: "Object reference not set to an instance of an object."
     
    I have a total of three drives on this system: 1 ACD and 2 GDrive. The Amazon drive and the other Google drive don't appear to have any issues; it is only the 3rd drive, which I believe was created with an earlier version.
     
    I have attempted to re-authorize the drive and I am still having the same issues.
     
     
  12. Like
    Desani got a reaction from Ginoliggime in Google Drive Upload Error: Object Reference is not set to an Instance of an object   
    I recently upgraded to the latest beta, 10.0.0.834, and I am now having issues uploading to one of the attached Google Drives on my system.
     
    I keep getting an error: "Object reference not set to an instance of an object."
     
    I have a total of three drives on this system: 1 ACD and 2 GDrive. The Amazon drive and the other Google drive don't appear to have any issues; it is only the 3rd drive, which I believe was created with an earlier version.
     
    I have attempted to re-authorize the drive and I am still having the same issues.
     
     
  13. Like
    Desani reacted to Christopher (Drashna) in Cloud Drive preventing Windows Server 2012 from shutting down   
    This may be normal....
     
    Specifically, StableBit CloudDrive halts the shutdown process until the data is fully flushed to the disk.  Otherwise, data corruption can occur.  
     
     
    However, if you're using the 1.0.0.463 build, I'd recommend upgrading, as there are a number of fixes that may help:
    http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.0.0.631_x64_BETA.exe
  14. Like
    Desani got a reaction from Christopher (Drashna) in Cloud Drive preventing Windows Server 2012 from shutting down   
    Upgraded, and it is now cleanly shutting down and restarting in about 10 minutes.
     
    Thanks for the advice!