Covecube Inc.

Thronic

Members
  • Content Count

    50
  • Joined

  • Last visited

  • Days Won

    1

Thronic last won the day on September 8 2018

Thronic had the most liked content!

About Thronic

  • Rank
    Advanced Member

Recent Profile Visitors

419 profile views
  1. Good point. But doing individual disks means more follow-up maintenance... I think I'll just move the script into a simple multithreaded app instead, so I can loop and monitor in a relaxed manner (don't wanna hammer the disks with SMART requests) and kill a separate sync thread on demand if needed (a rough batch approximation of that idea is sketched after this list). If I'm not mistaken, rclone won't start deleting files until all the uploads are complete (gonna check that again), so that creates a small margin of error and/or delay. Thanks for checking the other tool.
  2. Ended up writing a batch script for now. It just needs a copy of smartctl.exe in the same directory, and a sync or script command to be run depending on the result. It checks the number of drives found as well as overall health, and writes a couple of log files based on the last run. Commented in Norwegian, but easy enough to understand and adapt to whatever if anyone wants something similar (a trimmed-down sketch of the same idea is included after this list):
     @echo off
     chcp 65001 >nul 2>&1
     cd %~dp0
     set smartdataloggfil=SMART_DATA.LOG
     set sistestatusloggfil=SISTE_STATUS.LOG
     set antalldisker=2
     echo Sjekker generell smart helse for alle tilkoblede disker...
  3. I'm planning to run a scheduled rclone sync script from my pool to the cloud, but it's critical that it doesn't run if files are missing due to a missing drive, because sync matches the destination to the source - effectively deleting files from the cloud. I don't wanna use copy instead of sync, as that would bring back files I don't want recovered when I run the opposite copy script for disaster recovery in the future, creating an unwanted mess. So, I was wondering if there's any CLI tool I can use to check if the pool is OK (no missing drives), so I can use it as a basis for running the script (a gating sketch is included after this list)...
  4. Bumping this a little instead of starting a new one... I'm still running GSuite Business with the same unlimited storage as before. I've heard some people have been urged to upgrade by Google, but I haven't gotten any mails or anything. I wonder if I'm grandfathered in, perhaps. Actually, they didn't: if they had, I'd just upgrade to have everything proper. I have 2 users now, as there were rumors in late 2020 that Google was gonna soon clean up single users who were "abusing" their limits (by a lot) but leave those with 2 or more alone. I guess that's a possible reason I ma...
  5. What I meant with that option is this: since I haven't activated duplication yet, the files are unduplicated, and since the cloud pool is set to duplicated files only, they should land on the local pool first. When I activate duplication, the cloud pool will accept the duplicates. It's working as intended so far, but I'm still hesitant about this whole setup - like I'm missing something I'm not realizing that will bite me in the ass later.
  6. Sure is quiet around here lately... *cricket sound*
  7. DrivePool doesn't care about assigned letters; it uses hardware identifiers. I've personally used only folder mounts since 2014 via diskmgmt.msc, across most modern Windows systems without issues (a command-line equivalent using mountvol is sketched after this list). Drives "going offline for unknown reason" must be related to something else in your environment. Drive letters are inherently not reliable. The MountedDevices registry key under HKLM that's responsible for keeping track of them can become confused by random drive-related events, even to the point where you can't boot the system properly. You don't need StableBit software for this to happen...
  8. Seems tickets are on a backlog, so I'll try the forums as well. I'm setting up a new 24-drive server and looking first at StableBit as a solution to pool my storage, since I already have multiple licenses and some experience with it, just not with CloudDrive. I've done the following. Pool A: physical drives only. Pool B: cloud drives only. Pool C: A and B with 2x duplication. Drive Usage set so that B only holds duplicated files. This lets me hold off turning on 2x duplication until I have prepared local data (I have everything in the cloud right now), so the server doe...
  9. A small 40mm fan works wonders in desktop builds or "calmed down" rack chassis. Brought my Dell H200 and LSI 9211-8i cards from being too hot to touch down to barely noticeable heat. Just connect it to any random case fan pinout. The model in the picture is a Noctua NF-A4x10 FLX 40mm. I'm also an RC hobbyist, so finding small screws that fit ... anything .. is never a huge problem...
  10. If you're going to use a cloud that has rollback of data, then using a solution that splits up your files is straight up a recipe for COMPLETE disaster. This is nonsensical to both argue and defend. In that scenario, uploading entire files is safer, no matter the software. I ignore nothing; I simply stay on topic. It doesn't matter how volatile you claim cloud storage is in general; this was about Google Drive specifically. And there is no argument available that can claim CloudDrive was safer, or equally safe, as fully uploaded files in a situation where the cloud rolled back your data.
  11. Nonsense. Any rollback would roll back entire files, not fragments, while Rclone-uploaded data would just be older versions of complete files, still entirely available and not blocking access to any other data. I'm not saying Rclone is universally better, but in this case it definitely was. Saying Rclone data is stagnant is nonsense too; it entirely depends on the usage. There are absolutely no valid arguments available to make Rclone look worse in the scenario that happened. I'll look into the pinned data changes.
  12. Seems upload verification (or any action on the client side, for that matter) won't help much if the risk is Google doing rollbacks. CloudDrive splitting files up into blocks of data will make everything fragmented and disastrous if some of those blocks get mixed with rollback versions. This explains everything. So it will work, as long as Google doesn't mix old and new data blocks. All this makes rclone FAR safer to use; any rollback will not be fragmented, but complete files. But thanks for the explanation.
  13. Hi. It's been a while - I stepped away from StableBit software for a bit and have been using 100% cloud via rclone instead. I was wondering how CloudDrive is doing, since I'm considering a hierarchy pool with drivepool:clouddrive duplication. It felt very volatile the last time I used it. Now I see this sticky: https://community.covecube.com/index.php?/topic/4530-use-caution-with-google-drive/ Is this a problem similar to write-hole behavior on RAID, where blocks of data were being written via CloudDrive and considered uploaded, but interrupted and then corrupted? If so, it s...
  14. Just did some spring cleaning in my movie folder and removed over 200 movies. Folders with the movie file and srt files. When I marked all of the folders and pressed delete, about 20 of them came back really fast. Then I marked those for deletion, pressed delete, and about 5 bounced back. And 2. And after deleting those 2 no more folders "bounced" back. I've probably had these files for 2-3 years, across a couple iterations of DP. Perhaps outdated ADS properties causing it? Not a huge deal, more of an awkward observation, didn't hurt my data but required a watchful eye to avoid leaving a mess.
  15. Thronic

    Big Veeam files

    Yeah. It seemed to accumulate a little more data than it should over time, but not much, and it worked without errors and did test recovery OK too. I ran it for a few months. EDIT: oops, I thought this was my old DP post. I've never done it with CD. I ended up with rclone and raw gdrive@gsuite instead.
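
A minimal, trimmed-down sketch of the kind of check described in item 2 above, for reference. It is not the original script: it assumes smartctl.exe from smartmontools sits next to the batch file, that exactly two drives are expected, and the EXPECTED_DRIVES value and STATUS.LOG name are illustrative placeholders.

    @echo off
    rem Sketch only: count the drives smartctl can see and check each one's overall health.
    chcp 65001 >nul 2>&1
    cd /d %~dp0

    set EXPECTED_DRIVES=2
    set FOUND=0
    set FAILED=0

    rem Count devices reported by "smartctl --scan" (one line per device).
    for /f "usebackq delims=" %%d in (`smartctl.exe --scan`) do set /a FOUND+=1
    if not "%FOUND%"=="%EXPECTED_DRIVES%" set FAILED=1

    rem Ask each device for its overall health verdict.
    for /f "usebackq tokens=1" %%d in (`smartctl.exe --scan`) do (
        smartctl.exe -H %%d | findstr /i "PASSED OK" >nul || set FAILED=1
    )

    if "%FAILED%"=="1" (
        echo %date% %time% SMART check FAILED >> STATUS.LOG
        exit /b 1
    )
    echo %date% %time% SMART check OK >> STATUS.LOG
    exit /b 0

The exit code is what makes it composable: anything that calls this script can decide whether to sync based on errorlevel.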
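
For the question in item 3, a sketch of the gating idea: run whatever health check exists first and only start the sync when it exits 0. The name check_pool.bat is a stand-in for that check (the SMART sketch above, or anything else), and the rclone source and remote paths are placeholders.

    @echo off
    rem Only sync if the drive/pool check succeeded (exit code 0).
    call check_pool.bat
    if errorlevel 1 (
        echo Pool check failed, skipping rclone sync.
        exit /b 1
    )
    rem Paths are placeholders; consider adding --dry-run first to see what sync would delete.
    rclone sync D:\Pool gdrive:backup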
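
Item 1 mentions looping, monitoring and killing a separate sync on demand. Plain batch can approximate that without a full multithreaded app by starting rclone in the background and terminating it if a later check fails; the sketch below is only an illustration of that idea, again with the hypothetical check_pool.bat, placeholder paths and an arbitrary five-minute polling interval.

    @echo off
    rem Start the sync in the background, then poll the health check; if the check
    rem ever fails while the sync is still running, kill rclone.
    start "pool-sync" rclone.exe sync D:\Pool gdrive:backup

    :watch
    timeout /t 300 /nobreak >nul
    rem Stop watching once rclone has finished on its own.
    tasklist /fi "imagename eq rclone.exe" | find /i "rclone.exe" >nul || goto done
    rem Keep watching while the check passes; fall through to the kill if it fails.
    call check_pool.bat && goto watch

    echo Health check failed mid-sync, stopping rclone.
    taskkill /im rclone.exe /t /f >nul
    :done
    exit /b 0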
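
On the folder mounts in item 7: the same NTFS folder mount points that diskmgmt.msc creates can also be set from the command line with mountvol, avoiding drive letters entirely. The volume GUID and the mount folder below are placeholders; running mountvol with no arguments lists the real \\?\Volume{...}\ names to use.

    rem List volumes and their current mount points / volume GUID paths.
    mountvol

    rem Mount a volume into an empty NTFS folder instead of assigning it a letter.
    rem The GUID and folder are placeholders - copy the real volume name from the listing above.
    mkdir C:\Mounts\Disk01
    mountvol C:\Mounts\Disk01 \\?\Volume{00000000-0000-0000-0000-000000000000}\

    rem Remove the mount point again if needed (the data on the volume is untouched).
    mountvol C:\Mounts\Disk01 /D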