
Christopher (Drashna)

Reputation Activity

  1. Haha
    Christopher (Drashna) reacted to Bertilsson in Prefer existing folder option   
    Ok, in that case, let me give you a really long and angry rant about that...
    Just kidding, I still like DrivePool a lot! 
    Thanks for replying!
  2. Like
    Christopher (Drashna) reacted to Viktor in Constant "Name of drive" is not authorized with Google Drive   
    I installed the latest version 3 days ago and haven’t seen this notification anymore so far.
  3. Like
    Christopher (Drashna) reacted to Jose M Filion in Best Settings for Cloud Drive with Google Drive ?   
My settings. I've been using this for 6 months, with the help of many here and my own testing. This has been perfect for me. I recommend bringing the download threads down to 10 and the upload threads down to 5.

  4. Like
    Christopher (Drashna) reacted to Burken in CloudDrive+Pool+2 GDrive Accounts = Double Upload/API?   
I run one setup with 10 Google accounts to get around the daily limits.
    So yes it works!
  5. Thanks
    Christopher (Drashna) reacted to Kraevin in Stablebit Cloud Drive Support   
Yeah, I agree, their support is awesome. Chris is always quick to respond and helpful. I have always been amazed at how good their support is, especially with such a small team.
  6. Like
    Christopher (Drashna) reacted to B00ze in Folder Placement Rules - Who would've thought!   
    Thought I'd post this here rather than a new topic, since it has to do with placement rules.
I have now learned how DrivePool handles some file moves, and it's pretty nice. I have 3 drives. Most of my data goes into drives 1 and 2, duplicated; there are file-placement rules to make sure this is what happens. My "Downloads" folder, however, has no file-placement rule and no duplication. Since free space is higher on drive 3, when I download something it goes onto drive 3. Once it's downloaded, I usually move it to one of the folders that are controlled by file-placement rules, which means the file should go onto drives 1 and 2. But DrivePool doesn't turn the file move into a copy. What it does instead is create the directory structure necessary so the file can be moved to the correct folder ON DRIVE 3! This ensures fast move operations. Only LATER, during balancing, will it remove the file from drive 3 and duplicate it onto drives 1 and 2. I think this is clever.
    Best Regards,
  7. Like
    Christopher (Drashna) reacted to B00ze in [HOWTO] File Location Catalog   
    Good day everyone.
I had two issues with DrivePool-Generate-CSV-Log-V1.51, which I have corrected (I will try to attach the update to this reply):
  1. Get-PhysicalDisk is not supported on Windows 7, so I changed it to Get-CimInstance -ClassName CIM_DiskDrive. Under Win7 the script wouldn't fail, but the disk model and serial number columns were just blank.
  2. Unicode CSV files do not load correctly when double-clicked, at least not with Excel 2016; they load as a single column. It turns out, however, that Excel 2016 supports TAB-delimited Unicode CSV just fine, so I changed the format from comma-delimited to tab-delimited. Works fine. The updated script should be attached...
    DrivePool-Generate-CSV-Log-V1.60.zip
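The CSV fix above can be sketched in Python as well. This is a hypothetical parallel to the script's change, not the script itself; the column names and disk rows are made-up placeholders. The point is that a TAB-delimited file saved as UTF-16 ("Unicode") opens into proper columns when double-clicked in Excel 2016, where a comma-delimited UTF-16 file loads as a single column.

```python
import csv

# Hypothetical sketch of the same fix in Python: write the log as
# TAB-delimited UTF-16 text so Excel opens it into columns on double-click.
# The rows below are illustrative placeholders, not real disk data.
rows = [
    ["Disk", "Model", "SerialNumber"],
    ["Disk 1", "ExampleModel", "SERIAL0001"],
]

with open("drivepool-log.csv", "w", newline="", encoding="utf-16") as f:
    csv.writer(f, delimiter="\t").writerows(rows)
```

The `encoding="utf-16"` is what makes Excel treat it as a Unicode file (the codec writes the BOM automatically), and the tab delimiter is what keeps Excel from dumping everything into one column.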
  8. Like
    Christopher (Drashna) reacted to B00ze in Folder Placement Rules - Who would've thought!   
    Hey Christopher.
Yeah, the "sturdiness" of DrivePool is really, really nice; it has agreed to everything I have thrown at it so far, including Multi-Boot!
As for the performance of read striping, I have less good news: today I tested copying from one of those disks, but outside of the pool, to the SSD, and got ~200MB/s, so read striping has only increased performance 12%. This is a far cry from the Intel RAID, which simply doubled performance (Crystal Disk reported ~400MB/s sequential reads, but the RAID required Intel's own write-back cache or writes would fall to 80MB/s). Of course you cannot match RAID, but 12% is a bit low; I was hoping for something like 50% faster. What do YOU get in terms of performance improvement for 2x duplicated files on your server?
PS: I see now how you manage reparse points. There is 1 file per point in the covefs folder, which contains the type, source, and target, plus an ID tagged to the reparse point as an alternate data stream. The reparse point is just an empty folder (I haven't yet tried SymLinks on files, but presumably they will be empty files). It might slow down if someone has thousands of reparse points, but I won't reach that many. What's nice is that Robocopy is able to copy SymLinks as SymLinks TO AND FROM the pool! Woohoo! AND if I FSUTIL the reparse point to delete it, DrivePool does its job and the reparse point becomes a normal folder, exactly the way it works on a normal disk. This is awesome!
    Best Regards,
  9. Like
    Christopher (Drashna) got a reaction from mcrommert in Confused about Duplication   
    Okay, so, you do want to do what I've posted above.
Actually, the RC version of StableBit DrivePool will automatically prefer local disks (and "local" pools) over "remote" drives for reads, so there's nothing you need to do here.
    As for writes, if real time duplication is enabled, there isn't anything we can really do here.  In this case, both copies are written out at the same time.  So, to the local pool and the CloudDrive disk, in this case.   
    But the writes happen to the cache, and then are uploaded.  There are some optimizations there to help prevent inefficient uploads. 
    No, this is to make sure that unduplicated data is kept local instead of remote.
As for the drive size, it will ALWAYS report the raw capacity, no matter the duplication settings. So this pool WILL report 80TB. We don't do any processing of the drive size, because there is no good way to do this. Since each folder can have a different duplication level (off, x2, x3, x5, x10, etc.), and the contents may vary drastically in size, there is no good way to compute this other than showing the raw capacity.
    There isn't a (good) way to fix this.  
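A small sketch of why that is. The folder names and duplication levels below are invented for illustration: the same 80TB of raw capacity yields a different usable size depending on which folder the data lands in, so there is no single number the UI could honestly report.

```python
# Hypothetical pool: 80 TB raw, with different per-folder duplication levels.
raw_tb = 80
duplication = {"Backups": 3, "Media": 2, "Scratch": 1}  # invented folders

# If ALL future data landed in a single folder, usable capacity would be:
usable = {folder: raw_tb / level for folder, level in duplication.items()}

# -> anywhere from ~26.7 TB (x3 duplication) up to the full 80 TB (none),
# which is why the UI just reports the raw capacity instead of guessing.
print(usable)
```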
You could turn off real-time duplication, which would do this. But it means that new data wouldn't be protected for up to 24 hours, or more.
    Also, files that are in use cannot be duplicated, in this config.   
    So, it leaves your data more vulnerable, which is why we recommend against it. 
     
The other option is to add a couple of small drives and use the "SSD Optimizer" balancer plugin. You would need a number of drives equal to the highest duplication level, and the drives don't need to be SSDs. New files would be written to the drives marked as "SSD", and then moved to the other drives later (depending on the balancing settings for that pool).
  10. Like
    Christopher (Drashna) got a reaction from Varroa in drives not balancing   
    Honestly, this sounds "fine".  It shouldn't rebalance here, since it's not outside of any settings. 
    However, if you want, you can download and install the "Disk Usage Equalizer" balancer, as this will cause the pool to rebalance so it's using roughly equal space on each disk. 
    https://stablebit.com/DrivePool/Plugins
  11. Like
    Christopher (Drashna) reacted to banderon in A question on DrivePool duplication   
    Yup, the ease of duplication was one of the main reasons I was attracted to DrivePool. Also, thank you; that clears things up: the "older parts" that will be deleted are the remaining parts of the damaged files, so that the intact copies can be re-replicated.
    Edit: I was going to mark your reply as the "best answer", but due to the architecture of this forum, that takes it out of the flow of the thread to be displayed at the top which I imagine would make it difficult to follow the conversation for anyone else. So I'll leave things as they are. Thanks!
  12. Confused
    Christopher (Drashna) got a reaction from Odeen in Privileging performance over available space   
    From the UI, it looks like the file is being moved from the N:\ drive to another location on the N:\ drive, but isn't using "smart move".  
In this case, this would be more normal, as the file system is both reading from and writing to the drive. That means the I/O is split between the two. In a best-case scenario, this would halve the speeds, meaning you're really seeing closer to 70MB/s here. But in reality, you may see worse than half because of overhead (such as head actuation). So you may have been getting 80-90MB/s or faster from the drive normally.
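As a rough worked example of that arithmetic (the numbers here are assumptions for illustration, not measurements from this user's system):

```python
# Assumed: the copy dialog reports ~35 MB/s for a same-disk move that
# falls back to a physical copy.
observed_copy_mb_s = 35

# The drive services both the read and the write, so it is really moving
# about twice that much data per second:
effective_io = observed_copy_mb_s * 2            # ~70 MB/s of actual I/O

# Head actuation and other overhead eat a further share, so the drive's
# normal one-way sequential speed is higher still:
overhead_factor = 0.8                            # hypothetical penalty
normal_one_way = effective_io / overhead_factor  # ~87.5 MB/s one-way
```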
     
    As for this sort of prioritization, it's very hard to do, as determining this "on the fly" is complex and ... "expensive" (in terms of system resources). 

    However, we do have the SSD Optimizer, which would help with this immensely.  It's designed to create a write cache for new files.  New files (like this) would be written to the "SSD" drive first, and then later balanced off to the other disks.  This would split up the I/O load, and put the new files on the much faster drive, potentially boosting speeds. 
     
  13. Thanks
    Christopher (Drashna) reacted to mushm0uth in Stablebit Cloud Drive Support   
    I just wanted to drop a note here on the community forum for folks who may be considering StableBit Cloud Drive and are researching the support forum prior to making a purchasing decision.  I've had questions on two occasions since I've been a license holder that led me to open support cases for assistance.  Both times I received prompt, personal and highly knowledgeable engagement from support.  For such a cutting edge (in my mind) product, that is so affordable (in my opinion), I couldn't ask for a better support system.  The forums here are great too.  
    My two cents -- thanks team for a great product backed by a great staff.  
  14. Haha
Christopher (Drashna) reacted to RFOneWatt in Drive Pool Compatibility with Seagate Drives   
    Throw them in the gutter.
    Better yet, toss them through the front window of Seagate HQ.  
    ST3000DM001 Excessive Failures
    Because of these drives (and a few prior issues with Seagate drives) I will never purchase a Seagate product again... 
No reason to when I can get HGSTs for the same money... I haven't had a bad HGST drive yet! (Out of about 30)
    ~RF
     
  15. Like
    Christopher (Drashna) got a reaction from Scuro in New drive not balancing   
    Wow, I'm sorry to hear about the PSU.  That ... can be pretty scary, actually! 
     
    As for the balancing, yeah, I've seen that myself and there is a ticket (or two) open about the issue already. 
  16. Like
    Christopher (Drashna) reacted to jmone in REFS in pool   
...also, there is a good table here that shows which version of ReFS is supported by which Windows version:
    https://en.wikipedia.org/wiki/ReFS
     
  17. Like
    Christopher (Drashna) reacted to B00ze in Dual-Boot (continued)   
    Good day Christopher.
Well, good news - I can redirect/share the ENTIRE ProgramData folder, at least with the evaluation license. It did not complain about anything at all when I switched to Win10 and installed the second copy (after having junction'ed the ProgramData StableBit folder). It's a lot of fun; I can do something in Win7, which generates a notification (top-left button in the UI), then boot Win10 and the notification is there, telling me I changed duplication 10 minutes ago (on another OS!). Now the real test comes: will it work with a real license? Stay tuned. PS: Just in case, I gave both OSes the same PCNAME. If this works, add some comments in the encryption code, something like "remember not to use the PC SID or something similar to encrypt, because some people dual-boot" ;-)
Oh, and now that I've used the UI, I find the duplication pop-up pretty annoying. It's fine to have a kind of warning pop-up when you enable full-pool duplication, but when you are playing with folder-level duplication, having an "Are you sure you want to duplicate" window pop up every time you change a folder is pretty annoying. I understand it is also asking if I want to duplicate to more than 1 drive, and I'm not quite sure how to handle this better, but there is probably a better way! The UI is nice. I would prefer an MMC snap-in to a custom UI, but it's not bad. I think you should make the "Manage Pool" words bigger, since this is where all the action is. Overall, I do like the simplicity of it.
    Thank you.
  18. Like
    Christopher (Drashna) got a reaction from Varroa in Weird performace after re-install   
Nope, Terry nuked the site. And for a while, he configured things in a way that nuked cached copies as well.
    But here you go:
    https://web.archive.org/web/20150508105738/http://forum.wegotserved.com:80/index.php/topic/8335-before-you-post-media-stuttering-playback-issues-performance-irregularities/
    But here are the highlights: 
     
     
  19. Like
    Christopher (Drashna) got a reaction from Varroa in lost entire OS but drivepool stil intact   
    Well, I'm sorry to hear about the system drive, as that is never a pleasant experience.
    As for the pooled drives, yes, the software should automatically re-pool them on the new installation.
    If you're using WHS2011 still, then you'll want to grab the WSS Troubleshooter:
    http://wiki.covecube.com/StableBit_DrivePool_Utilities
    Run the "Restore DrivePool Shares" on the new system (after StableBit DrivePool is installed) and it will restore the shares.
     
    Otherwise yes, just reinstall the OS, connect the drives, and reinstall the software.  The pool automatically (re)builds a pool from the available disks.  No need to mess with anything. 
  20. Like
    Christopher (Drashna) got a reaction from denywinarto in NTFS mount point question   
Ah. Mount points don't really matter - to DrivePool, or to Windows, really. 
    So unless you're intending to mount the drive on the pool's path, there should be no issue. 
  21. Like
    Christopher (Drashna) reacted to HellDiverUK in Build Advice Needed   
    Ah yes, I meant to mention BlueIris.
I run it at my mother-in-law's house on an old Dell T20 that I upgraded from its G3220 to an E3-1275v3. It's running a basic install of Windows 10 Pro. I'm using QuickSync to decode the video coming from my 3 HikVision cameras. Before I used QS, it was sitting at about 60% CPU use. With QS, I'm seeing 16% CPU at the moment, and also a 10% saving on power consumption.
    I have 3 HikVision cameras, two are 4MP and one is 5MP, and are all running at their maximum resolution.  I record 24/7 on to an 8TB WD Purple drive, with events turned on.  QuickSync also seems to be used for transcoding video that's accessed by the BlueIris app (can highly recommend the app, it's basically the only way we access the system apart from some admin on the server's console). 
Considering QuickSync has improved greatly in recent CPUs (basically Skylake or newer), you should have no problems with an i7-8700K. I get great performance from a creaky old Haswell.
  22. Like
    Christopher (Drashna) reacted to JazJon in You require permission from NETWORK SERVICE, Can't delete P:\5cc6ba767698f5504f4a776ed0   
    I received a response from CrashPlan:
     
    "Hello Jon,
    Thank you for contacting our Champion Support Team.
While it looks like the CrashPlan app is restarting your backup unexpectedly, it is actually running a file verification scan. The scan puts ALL of your data in the To Do list. Then, as the backup after the scan runs, it analyzes your files to determine if each has been backed up before and can be skipped, or if it's new/changed and needs to be backed up. The percentage complete that you see is CrashPlan's progress through the current To Do list, not the overall backup.
    The scan is an important and normal part of CrashPlan's scheduled activities, but it can also be triggered by several events. To read more, click on the link below:
http://support.code42.com/CrashPlan/Latest/Troubleshooting/Is_My_Backup_Starting_Over
The explanation for the lengthy time estimate relates to how CrashPlan prioritizes files for backup. CrashPlan backs up your most recent changes first. When the scan finds new files for backup, these files go straight to the top of CrashPlan's "to do" list. This impacts the estimated time to complete the backup, because the estimate is based on the type of files the scan is reviewing (i.e., new or existing) and your current network speed.
    If you have any further CrashPlan questions or concerns, or need clarification please do not hesitate to reach out.
    Best Regards,
    Champion Support Specialist"
     
This seems to be the case. The number of files sent matches up, but the total percentage is still off, even after a FULL manual file scan (which normally only happens every 30 days).
    Things should sort themselves out over time after the complete verification "To Do" list is done.
  23. Like
    Christopher (Drashna) reacted to TonyP in Cannot Install Disk Equalizer Addon   
    Thanks for the great support.
  24. Sad
    Christopher (Drashna) got a reaction from skapy in copying cloud 1:1   
    No.  It uses different formats. 
    You can download the CloudPart data from Amazon, convert it to the local disk provider format and then mount it.
    But you'd need the space locally to do this.
  25. Thanks
    Christopher (Drashna) got a reaction from thepregnantgod in Can't add same disk to pool error?   
    Nope, that's fine, actually.
    Just make sure that you remeasure the pool, as that will update the statistics and recheck duplication. 
Though installing the RC or resetting the settings should fix this issue in the future. 