
Christopher (Drashna)

Administrators
  • Posts: 11573
  • Joined
  • Last visited
  • Days Won: 366

Reputation Activity

  1. Like
    Christopher (Drashna) got a reaction from Jaga in Recovery from File System Damage affecting multiple drives   
    If this is for StableBit Scanner, then try installing this version: 
    http://dl.covecube.com/ScannerWindows/beta/download/StableBit.Scanner_2.5.2.3162_BETA.exe
     
    IIRC, there was an issue with false positives, and the newer version should fix this.
  2. Like
    Christopher (Drashna) got a reaction from Jaga in Damaged Flash Drive - CHKDSK finds nothing   
    Just keep in mind that you may want to leave this running for a few hours, at minimum. 
     
    Also, USB drives are notoriously finicky.
  3. Thanks
    Christopher (Drashna) got a reaction from MangoSalsa in Pool share permissions wiped after reboot   
    That's unusual. That is, it really shouldn't do that.
     
    In fact, the permissions are stored on the underlying disks, and read from there. So they shouldn't be changing with a reboot, unless something is messing with them. In this case, it may be worth running a CHKDSK pass on all of the pooled drives, just to make sure. And then do this:
    http://wiki.covecube.com/StableBit_DrivePool_Q5510455
    This only applies to storing the database on the pool. This is because Plex uses hardlinks for its data rather than a database, and hardlinks are not supported on the pool.
    By default, new files are placed on the disk with the most available free space (absolute, not percentage).  So if one of the drives is much larger, or already has more data, the pool will avoid the fuller drives until all three have about the same amount of free space.
     
    There are some balancer plugins that do change this behavior though. 
    Well, the pool is everything "relative to" the PoolPart folders. So the contents of the pooled drives should mirror the pool, in part, under this folder.
    Though, all you'd really need to do is install StableBit DrivePool on the new system, and it will automatically rebuild the pool.
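The most-free-space rule described above can be sketched in a few lines. This is a hypothetical illustration (DrivePool's real placement logic lives inside its driver); the helper names are made up:

```python
import shutil


def free_space_map(drives):
    """Map each mount path (e.g. 'D:\\\\') to its absolute free bytes."""
    return {d: shutil.disk_usage(d).free for d in drives}


def pick_target_drive(free_bytes):
    """Return the drive with the most free space -- absolute bytes,
    not percentage, matching the default placement rule above."""
    return max(free_bytes, key=free_bytes.get)
```

Because the comparison is absolute, a half-full 10 TB drive wins over a nearly empty 1 TB drive, which is why a much larger disk absorbs new files until free space evens out.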
     
  4. Like
    Christopher (Drashna) reacted to Jaga in Upgrade Pool / Separating Pool from Workstation   
    As far as heat goes, I find watercooling to be an effective solution... but not like most people use it.  I don't move it from the components into water, then directly into the air with radiators.  Instead I have a ~30 gallon denatured water reservoir (basically an insulated camping cooler) that I use two submerged pumps with, one moves water through the CPU/GPU/Chipset coolers on the workstation, the other cycles water through a Hydroponics chiller which vents the heat through heating/cooling duct pipes to push it straight out the window at night.
    The practical effect of this is that very few BTUs of heat make it into the room during the day in the summer.  In the winter I disconnect the ducting from the window and have it dump into the room, providing extra and practical heating to help save on power use.
    The only drawback is the initial purchase cost - the 1/2 HP Hydroponics Chiller and Industrial fan to move air were around $700, and the watercooling elements around another $300.  But I've been using the same setup for around 4 years now, with only water changes for maintenance, and it's delightfully cool in the summer and warm in the winter.  I don't have to run A/C or Heating in that room if I don't want to, and it's one of the most exposed in the house here.
    Interestingly, your GTX puts out the most heat, on average.  I can run my CPU with Prime95 for days on end (24/7) and not begin to approach the amount of heat my 980ti or 1060 dumps into the water in even one day.  The drives, by comparison (unless fully loaded 100% of the time), don't use that much power, even compared to your GTX at idle.  A dozen spinners at ~4 watts each (averaging idle vs. loaded) barely take 50 watts.  The spec for your video card is 71 watts idle, 288 under full load.
    To sum up:  consider controlling the heat by using watercooling to keep it in a large reservoir during the day, then push it out at night using either a chiller, or radiators.  Simply using air cooling means no matter what mechanism you use to remove the heat, you're just dumping it back into the room immediately.
  5. Like
    Christopher (Drashna) reacted to Julius in Balancing as a Backup   
    Umfriend and amateurphotographer, I don't know what your ages are, but I'm 52 and have been using computers to store data for about 36 years now, so let me just throw this in here:
    The times when I 'accidentally' deleted a file or files and lost them that way can be counted on one hand. Especially since we got the Recycle Bin or Trash can, that problem has disappeared for me, even if I ever do accidentally hit delete (in TotalCommander or DxO PhotoLab or some other program) and confirm its deletion, because the confirmation is still not automatic for me. So, I don't know about you guys, but that barely ever happens, if at all.
    What *did* happen a lot more in the past 36 years is storage media suddenly deciding to crap out on me: floppies or drives becoming entirely or partly inaccessible. I even lost 16 GB of stored data when an (at the time) very expensive Intel SLC SSD decided to stop working a few years back; it went from 1 to 0. No software out there could get it back, not even SpinRite; probably a charged spark borked the controller chip. Luckily it was mostly an OS boot drive, but even so, it warned me not to trust the high MTBF of SSDs too much.
    What makes backup policies suck the most for me is the fact that they're almost always dated or too old, not current enough. HDDs or SSDs breaking down almost always happens at moments you least hoped they would, and in ways you never expect. This, for me, is where DrivePool comes in. It has already saved me from losing data several times by having a direct dual or triple copy of what I was working on. These days I have big data passing through my storage media, and it's really comforting to know I can access the functional left-over storage whenever one or more drives stop working. RAID storage methods for redundancy are horribly overrated, in my experience. Oh, the times I've tried to restore data from broken RAID arrays probably outnumber those for just broken disks, and the time it took, good grief, is incomparable to having StableBit DrivePool create standalone copies.
    So, backup or not, for me backup policies only exist in order to have data off-site and/or offline.
  6. Thanks
    Christopher (Drashna) reacted to Umfriend in Balancing as a Backup   
    One difference is that if you accidentally delete a file, it will be gone. DP is _not_ a backup and you're setting yourself up for trouble when trying to use it as such.
  7. Like
    Christopher (Drashna) reacted to Dave Hobson in Micro-management Of Drivepool - Solved by using junctions.   
    Even more useful now.
    I have all my data more or less duplicated on gdrive. As my penchant for higher quality files grows, local space becomes more of an issue (but not in any way terminal).
    As a result, some shows that friends/family have asked for and that aren't to my personal taste reside solely on gdrive (which has a separate Plex server attached to it). I tried NetDrive and Drive File Stream for a while, pointing my local server to those. This led to one of two evils: all the shows I have locally would show as duplicates, OR in Plex I would have to point to the individual show directories for stuff that is only on Google (far too lazy to separate the shows on Google itself).
    On a whim I just decided to see what would happen if I selected some show folders on Drive File Stream and made directory junctions for them. I really didn't expect it to work, and yet it did. I went to my Google API screen to monitor the hits while Plex scanned the files in, and everything was fine.
    So now I'm gonna set up a separate folder consisting only of these junction points and point Plex to that. If I do ever get around to watching these shows, they are there without switching to the Gdrive-mounted server or having to download terabytes of data locally for shows I may never bother with.
     
    I have no idea about Directory Junctions' inner workings... but wow, this is game changing for me.
     
  8. Like
    Christopher (Drashna) reacted to Dave Hobson in Micro-management Of Drivepool - Solved by using junctions.   
    Yep. I'm not really a command line type of guy.
    But http://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html this really helped.
  9. Like
    Christopher (Drashna) reacted to B00ze in Micro-management Of Drivepool - Solved by using junctions.   
    What's fantastic is that DrivePool supports them!
  10. Like
    Christopher (Drashna) reacted to Chris Downs in Trying to understand Drivepool and SSD cache...   
    Is this for video editing? If so, you might want to consider larger SSDs for the cache, or using a non-pooled SSD for editing purposes and moving the files when you're done (the file you are creating - the source material will be fine left on the pool as you're only reading).
     
  11. Like
    Christopher (Drashna) reacted to Scott in What does "unusable for duplication" mean?   
    Good points, I think I have my answers now.  I did some more reading as well, and I have enabled file integrity on each of the drives within my pool. Any drives I add from here on out, I will format with file integrity enabled from the start.
     
    Thanks for the walk through!!
     
    Best wishes,
    Scott
  12. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Warning: 0 : [PinDiskMetadata] Cloud drive is not found in VDS' basic disks collection.   
    If that's the case, then yeah, it definitely sounds like it was a timing issue. 
  13. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Warning: 0 : [PinDiskMetadata] Cloud drive is not found in VDS' basic disks collection.   
    You're very welcome, and I'll let you know as soon as I have feedback from Alex. 
  14. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Warning: 0 : [PinDiskMetadata] Cloud drive is not found in VDS' basic disks collection.   
    Wow, that issue sure shows up a lot in the service log. 
    I've flagged the logs for Alex (the Developer), so he can check.
    https://stablebit.com/Admin/IssueAnalysis/27781
  15. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Warning: 0 : [PinDiskMetadata] Cloud drive is not found in VDS' basic disks collection.   
    It's probably safe to ignore.
    Just in case, could you run the StableBit Troubleshooter on the system in question?

    http://wiki.covecube.com/StableBit_Troubleshooter
    Use "3525" as the Contact ID in the Troubleshooter. 
  16. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Maximum CloudDrive size for Google Drive   
    Larger chunk sizes are a good idea.  But aside from that, the defaults should be pretty good, actually.
    And the 63TB thing ... the drive can be larger than that. But if it is, you'd want to partition the drive so that no single partition is larger than that. 
     
  17. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Maximum CloudDrive size for Google Drive   
    The box for the capacity is a text box. You can type in any value you want (that is supported, at least, so up to 1 PB), as long as your provider has the free space for it.
     
    Just remember that the largest cluster (allocation unit) size is 64 KB on most versions of Windows, which means a 256 TB volume, max.
    Also, note that you may not want partitions/volumes larger than 63 TB, as Windows has issues with disk checking if an NTFS volume is 64 TB or larger. You can partition the drive and pool the partitions to avoid the issue, if needed.
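The cluster-size ceiling works out like this: NTFS addresses at most 2^32 − 1 clusters per volume, so the maximum volume size scales with the allocation unit. A quick sanity check:

```python
def max_ntfs_volume_bytes(cluster_bytes):
    # NTFS uses 32-bit cluster numbers, so a volume can hold at most
    # 2**32 - 1 clusters; larger clusters => larger maximum volume.
    return cluster_bytes * (2**32 - 1)


TIB = 1024**4
print(round(max_ntfs_volume_bytes(64 * 1024) / TIB))  # 256 TB with 64 KB clusters
print(round(max_ntfs_volume_bytes(4 * 1024) / TIB))   # 16 TB with the default 4 KB
```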
  18. Thanks
    Christopher (Drashna) got a reaction from No0bstacles in Maximum CloudDrive size for Google Drive   
    You're very welcome!
     
    And yeah, the larger chunk size means potentially lower overhead, so it's not a bad idea, if this is for multimedia storage. 
  19. Like
    Christopher (Drashna) got a reaction from browned in Hyper-V Replication   
    I should do this with my VM lab...
    But no, I've never had the opportunity to set this up. 
  20. Like
    Christopher (Drashna) reacted to qwiX in Using Clouddrive on Windows Server Datacenter 2016 - Never Mounts   
    I'm using Windows Server 2016 Datacenter (GUI Version - newest updates) on a dual socket system in combination with CloudDrive (newest version).
    The only problem I had was connecting to the cloud service with Internet Explorer. Using a 3rd-party browser solved this.
    But I'm always using ReFS instead of NTFS...
  21. Like
    Christopher (Drashna) reacted to nauip in Defragging: Individual Drives or DrivePool?   
    This answered my question, too. Though I suspected I already knew the answer, it's helpful to be sure.
  22. Like
    Christopher (Drashna) reacted to B00ze in File placement based on folder   
    Good day.
    Yup, the "Balance on the fly" would be slow, since you'd have to check ALL drives for the folder. For large folders, you do NOT keep it together; what you do is place any new file into the volume where the folder already has the most data (comparing all the volumes where the folder exists) AND where there is space (and not limited by some rule). So not only do you have to check all drives, you have to calculate the amount of space the folder takes on every drive, so you can tell which drive to pick, and that means scanning all the files inside. It would make everything slow. One way around this is to just place the file anywhere there is space, and balance later. As far as depth, it does not matter: you treat all folders as singles, i.e. subfolders NEVER count, they are just another folder to keep together; you just try to keep the FILES together, not the subfolders.
    All of these problems are with nothing else running. What happens when you throw in Placement Rules, duplication and other balancers? Gets crazy pretty fast.
    It's probably doable, and you can cut corners (place any new file anywhere and balance later). But there sure is a lot to think about.
    Regards,
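The placement rule described in the post above (put the new file on the volume where the folder already holds the most data, among volumes with room, with subfolders deliberately not counting) could be sketched like this. The helper names are hypothetical, not anything from DrivePool itself:

```python
import os


def direct_folder_bytes(path):
    """Bytes of files directly inside `path` (0 if the folder is absent).
    Subfolders never count; each subfolder is its own folder to keep together."""
    if not os.path.isdir(path):
        return 0
    return sum(e.stat().st_size for e in os.scandir(path) if e.is_file())


def pick_volume(volumes, rel_folder, new_file_bytes, free_bytes):
    """Among volumes with room for the new file, prefer the one where
    `rel_folder` already holds the most directly-contained data."""
    candidates = [v for v in volumes if free_bytes[v] >= new_file_bytes]
    return max(
        candidates,
        key=lambda v: direct_folder_bytes(os.path.join(v, rel_folder)),
        default=None,
    )
```

Note that `direct_folder_bytes` has to stat every file in the folder on every candidate volume, which is exactly the scanning cost the post says would make on-the-fly balancing slow.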
  23. Like
    Christopher (Drashna) reacted to Renstur in not authorized, must reauthorize   
    I just want to say: it's been 12 hours thus far since updating to the recent build, and it hasn't yelled at me to reauthorize GDrive. 
     
    I'll let you know if anything changes.
    Thanks Drashna!
  24. Like
    Christopher (Drashna) reacted to B00ze in Prefer existing folder option   
    Indeed - Have a look at the General forum; I just added a reply to a long thread about this, where I give examples of complications.
    Also, DrivePool will sometimes create a folder TEMPORARILY when you place a file in it, and then move the file off to a different disk. The folder stays behind, so you could potentially end up with your single season folder on multiple disks, where it is empty on all but one disk. So DrivePool cannot simply check for the existence of the folder; it would also have to see if it was empty or not (or else change the code a little to delete empty folders when it balances the contents somewhere else, which it does not do right now).
    Anyway, it's pretty complicated.
    Something that could help is to run Quinn's excellent Drive Pool Generate CSV (see here). Just schedule it in Task Scheduler and it will regularly create a CSV file of where all your files are. This way you can quickly find out what you have lost should a disk go bad (you'll need a spreadsheet program).
    Regards,
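A minimal stand-in for the CSV idea above (this is not Quinn's actual script, just a sketch of the same approach: walk each pooled drive and record every file, so you know what was lost if a disk dies):

```python
import csv
import os


def dump_pool_csv(drive_roots, out_path):
    """Write one CSV row per file found under each drive root."""
    with open(out_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["drive", "relative_path", "size_bytes"])
        for root in drive_roots:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    full = os.path.join(dirpath, name)
                    writer.writerow(
                        [root, os.path.relpath(full, root), os.path.getsize(full)]
                    )
```

Scheduled regularly (e.g. via Task Scheduler, as suggested), the latest CSV is a per-disk inventory you can filter in any spreadsheet program after a failure.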
  25. Thanks
    Christopher (Drashna) reacted to baldursgate in 2.2 release appreciation   
    It's a release version! Thank you!!