
Thronic

Members
  • Posts: 52
  • Joined
  • Last visited
  • Days Won: 1

Thronic last won the day on September 8 2018

Thronic had the most liked content!

Recent Profile Visitors

980 profile views

Thronic's Achievements

Advanced Member (3/3)

Reputation: 2

  1. Used GSuite for 5 years or so, been through most of the hurdles, ups and downs, and their weird duplicate file spawns. Had a clean rclone setup for it that worked as well as could be expected, but I have to say, I've never been happier than after going back to local storage these last couple of years, mostly due to performance. And I also don't have to worry about all this anymore...
  2. I got the same mail a few days ago, and the change to Workspace Enterprise Standard with a single user was entirely painless - just a switch from "more services" under invoicing. I have 22 TB and everything still works as usual. It seems Google support evades the question as much as possible, or tries not to answer it directly. I personally interpret it as them wanting the business as long as it doesn't get abused too much. The only people I've heard of being limited or cancelled are those closing in on petabytes of data.
  3. Good point. But doing individual disks means more follow-up maintenance... I think I'll just move the script into a simple multithreaded app instead, so I can loop and monitor it in a relaxed manner (don't want to hammer the disks with SMART requests) and kill a separate sync thread on demand if needed. If I'm not mistaken, rclone won't start deleting files until all the uploads are complete (going to check that again), so that creates a small margin of error and/or delay. Thanks for checking the other tool.
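     If I've read the rclone docs right, that behavior maps to the --delete-after mode (which I believe is the default for sync): destination deletes are held back until every transfer has finished. Something like this is what I have in mind - the "D:\Pool" path and the "gcrypt:backup" remote are just placeholders for my setup:

        :: Placeholders for my own pool path and rclone remote.
        :: --delete-after should postpone destination deletes until all uploads are done.
        rclone sync D:\Pool gcrypt:backup --delete-after --log-file=RCLONE_SYNC.LOG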
  4. Ended up writing a batch script for now. It just needs a copy of smartctl.exe in the same directory, plus a sync command or another script to run depending on the result. It checks the number of drives found as well as overall health, and writes a couple of log files based on the last run. Variable names and log messages are in Norwegian, but it's easy enough to understand and adapt if anyone wants something similar.

        @echo off
        chcp 65001 >nul 2>&1
        cd %~dp0

        set smartdataloggfil=SMART_DATA.LOG
        set sistestatusloggfil=SISTE_STATUS.LOG
        set antalldisker=2

        echo Sjekker generell smart helse for alle tilkoblede disker.

        :: Delete the old SMART log files if they exist.
        del %smartdataloggfil% > nul 2>&1
        del %sistestatusloggfil% > nul 2>&1

        :: Generate an updated SMART data log file.
        for /f "tokens=1" %%A in ('smartctl.exe --scan') do (smartctl.exe -H %%A | findstr "test result" >> %smartdataloggfil%)

        :: Check the SMART data log that every disk reports PASSED.
        set FAILEDFUNNET=0
        set DISKCOUNTER=0
        for /f "tokens=6" %%A in (%smartdataloggfil%) do (
            if not "%%A"=="PASSED" ( set FAILEDFUNNET=1 )
            set /a "DISKCOUNTER=DISKCOUNTER+1"
        )

        :: Run the cloud sync only if all disks are OK.
        echo SMART Resultat: %FAILEDFUNNET% (0=OK, 1=FEIL).
        echo Antall disker funnet: %DISKCOUNTER% / %antalldisker%.
        set ALTOK=0

        :: Check that SMART is OK and that the expected number of disks was found.
        if %FAILEDFUNNET% equ 0 (
            if %DISKCOUNTER% equ %antalldisker% ( set ALTOK=1 )
        )

        :: Log and act based on the result.
        if %ALTOK% equ 1 (
            echo Alle disker OK. Utfører synkronisering mot skyen. > %sistestatusloggfil%
            echo STARTING SYNC.
        ) else (
            echo Dårlig SMART helse oppdaget, kjører ikke synkronisering. > %sistestatusloggfil%
            echo BAD DRIVE HEALTH DETECTED. STOPPING.
        )
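     The "echo STARTING SYNC." line is just a stand-in for the actual work. A rough sketch of how I'd slot the rclone call into that branch - "D:\Pool" and the "gcrypt:backup" remote are placeholders for my own setup, not part of the script above:

        :: Placeholder path and remote - adjust to your own pool drive and rclone remote.
        if %ALTOK% equ 1 (
            echo Alle disker OK. Utfører synkronisering mot skyen. > %sistestatusloggfil%
            echo STARTING SYNC.
            rclone sync D:\Pool gcrypt:backup --delete-after --log-file=RCLONE_SYNC.LOG
        ) else (
            echo Dårlig SMART helse oppdaget, kjører ikke synkronisering. > %sistestatusloggfil%
            echo BAD DRIVE HEALTH DETECTED. STOPPING.
        )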
  5. I'm planning to run a scheduled rclone sync script from my pool to the cloud, but it's critical that it doesn't run if files are missing due to a missing drive, because sync will match the destination to the source - effectively deleting files from the cloud. I don't want to use copy instead of sync, as that would bring back files I don't want recovered when I run the opposite copy script for disaster recovery in the future, creating an unwanted mess. So I was wondering if there's any CLI tool I can use to check whether the pool is OK (no missing drives), so I can use that as the basis for running the script. Or rather, a check against the Scanner - halting execution if there are any health warnings going on.
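     As an extra safety net, independent of any pool or Scanner check, rclone also seems to have a --max-delete flag that refuses to delete more than N files in one run, plus --dry-run to preview the damage first. A sketch with placeholder path and remote names:

        :: Placeholders for my pool path and rclone remote. The 100 is an arbitrary ceiling I picked;
        :: if a missing drive empties part of the source, the sync should error out instead of mass-deleting.
        rclone sync D:\Pool gcrypt:backup --delete-after --max-delete 100 --dry-run
        rclone sync D:\Pool gcrypt:backup --delete-after --max-delete 100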
  6. Bumping this a little instead of starting a new one... I'm still running GSuite Business with the same unlimited storage as before. I've heard some people have been urged by Google to upgrade, but I haven't gotten any mails or anything. I wonder if I'm perhaps grandfathered in. Actually, they haven't contacted me: if they did, I'd just upgrade to have everything proper. I have 2 users now, as there were rumors in late 2020 that Google was going to clean up single users who were "abusing" their limits (by a lot) but leave those with 2 or more alone. I guess that's another possible reason I haven't heard anything. 2 business users is just a tad above what I'd pay for a single Enterprise Standard user. The process looks simple, but so far I haven't had a reason to do anything...
  7. What I meant with that option is this: since I haven't activated duplication yet, the files are unduplicated, and since the cloud pool is set to hold only duplicated files, they should land on the local pool first. When I activate duplication, the cloud pool will then accept the duplicates. It's working as intended so far, but I'm still hesitant about this whole setup - like I'm missing something I'm not realizing that will hit me on the ass later.
  8. Sure is quiet around here lately... *cricket sound*
  9. DrivePool doesn't care about assigned letters; it uses hardware identifiers. I've personally used only folder mounts since 2014, set up via diskmgmt.msc, across most modern Windows systems without issues. Drives "going offline for unknown reasons" must be related to something else in your environment. Drive letters are inherently unreliable: the MountedDevices registry key under HKLM that's responsible for keeping track of them can get confused by random drive-related events, even to the point where you can't boot the system properly. You don't need StableBit software for that to happen. Certain system updates also mess around with the ESP partition for reboot-continuation purposes, and may cause this if anything disturbs them mid-process.
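     For anyone who wants folder mounts without clicking through diskmgmt.msc, the built-in mountvol tool does the same job from an elevated prompt. A rough sketch - the folder path is a placeholder and the volume GUID is something you'd look up yourself from mountvol's listing:

        :: List all volume GUIDs and their current mount points.
        mountvol
        :: Create an empty folder on an NTFS volume to act as the mount point (placeholder path).
        mkdir C:\Mounts\Disk01
        :: Mount the volume into the folder instead of assigning a drive letter.
        mountvol C:\Mounts\Disk01 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
        :: Remove the folder mount again if needed (the volume itself is untouched).
        mountvol C:\Mounts\Disk01 /D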
  10. Seems tickets are on a backlog, so I'll try the forums as well. I'm setting up a new 24-drive server and looking first at StableBit as a solution to pool my storage, since I already have multiple licenses and some experience with it - just not with CloudDrive. I've done the following:

      Pool A: physical drives only.
      Pool B: cloud drives only.
      Pool C: A and B with 2x duplication, with Drive Usage set so B only holds duplicated files.

      This lets me hold off turning on 2x duplication until I have prepared the local data (I have everything in the cloud right now), so the server doesn't download and upload at the same time. Pool A has default settings. Pool B has balancing turned off - I don't want it to start downloading and uploading just to balance drives in the cloud; the overfill prevention is enough.

      My thought process is that if a local drive goes bad or needs replacement, users of Pool C will be slowed down but still have access to the data via the cloud redundancy. And when I replace a drive, the duplication on Pool C will download the needed files to it again. Is read striping needed for users of Pool C to always prioritize Pool A resources first? This almost seems too good to be true - can I really expect it to do what I want? I have 16 TB to download, as well as Pool B having double the upload (2x cloud duplication for extra integrity), before I can really test it. Just wanted to see if there are any negative experiences with this before continuing.

      My backup plan is to install a GNU/Linux distro as a KVM hypervisor instead and create a ZFS or MDADM pool of mirrors (for ease of expansion) with a dataset passed to a Windows Server 2019 VM on an SSD (backed up live via active blockcommit), and hope GPU passthrough really works. But it surely wouldn't be as simple... I know there's unRAID too, but it doesn't even support the SMB3 dialect out of the box, and I'm hesitant about the automatic management of all the open source software stacks involved - I've heard of freezes and lockups, etc. Dunno about it.

      Regardless, any of the backup solutions would simply use rclone sync, as I've used so far for user data backups - which would not provide live redundancy like hierarchical pools, so I'd lose local space to parity-based storage or mirroring. I won't have to lose any local storage capacity at all if this actually works as expected.
  11. A small 40mm fan works wonders in desktop builds or "calmed down" rack chassis. It brought my Dell H200 and LSI 9211-8i cards from being too hot to touch down to barely noticeable heat. Just connect it to any spare case fan header. The model in the picture is a Noctua NF-A4x10 FLX 40mm. I'm also an RC hobbyist, so finding small screws that fit... anything... is never a huge problem...
  12. If you're going to use a cloud that can roll back data, then using a solution that splits up your files is straight up a recipe for COMPLETE disaster. That is nonsensical to argue against or defend. In that scenario, uploading entire files is safer, no matter the software. I'm not ignoring anything, I'm simply staying on topic. It doesn't matter how volatile you claim cloud storage is in general; this was about Google Drive specifically. And there is no argument available that can claim CloudDrive was safer, or equally safe, as fully uploaded files in a situation where the cloud rolled back your data. The majority would agree that outdated data, probably off by just seconds or minutes, is better than losing it all. The volatility issue here was CloudDrive's expectation that a rollback would never happen - straight up depending on it - not the use of cloud storage itself. You seem to want to defend CloudDrive by saying that cloud is unsafe no matter what. But the fact stands: fully uploaded files, versus fragmented blocks of them, would at least have offered the somewhat outdated data that Google still had available, while CloudDrive's scrambled blocks of data became a WMD. CloudDrive lost that event, hands down. Claiming my nightly synced files via rclone would not have been any safer is absolute nonsense.
  13. Nonsense. Any rollback would roll back entire files, not fragments, so rclone-uploaded data would just be older versions of complete files - still entirely available, and not blocking access to any other data. I'm not saying rclone is universally better, but in this case it definitely was. Saying rclone data is stagnant is nonsense too; that entirely depends on the usage. There are absolutely no valid arguments available to make rclone look worse in the scenario that happened. I'll look into the pinned data changes.
  14. It seems upload verification (or any client-side action, for that matter) won't help much if the risk is Google doing rollbacks. CloudDrive splitting files into blocks of data makes everything fragmented and disastrous if some of those blocks get mixed with rolled-back versions. This explains everything. So it will work, as long as Google doesn't mix old and new data blocks. All of this makes rclone FAR safer to use: any rollback will not be fragmented blocks, but complete files. Thanks for the explanation, though.
  15. Hi. It's been a while since I stepped away from StableBit software, and I've been using 100% cloud via rclone instead. I was wondering how CloudDrive is doing these days, since I'm considering a hierarchical pool with DrivePool/CloudDrive duplication. It felt very volatile the last time I used it. Now I see this sticky: https://community.covecube.com/index.php?/topic/4530-use-caution-with-google-drive/ Is this a problem similar to write-hole behavior on RAID, where blocks of data were being written via CloudDrive and considered uploaded, but got interrupted and then corrupted? If so, it shouldn't cause issues with connectivity to the drive and/or the rest of the data. If it does, I'll be very hesitant to use it.