Gandalf15


  1. At least one got saved! Did you check for lost data?
  2. It is not lost. You can try chkdsk and/or data recovery (see the chkdsk sketch after this post list). I feel you; it was the same for me the first time (2 accounts, both killed). This time I was at least lucky and it only hit 1 somehow... I feel like it's not worth it and I may just switch to rclone, since rclone doesn't have these problems as far as I know. I don't know if it is a software or a cloud problem - somehow outages kill the file system on cloud drives. With the chunks also being local, I don't see how this is possible. I would love to hear what some of the coders here say about this problem.
  3. I know. But if you lose everything every time an outage happens, then there is no point in using the software, or am I wrong? Can't something like a kill switch be implemented so the drive doesn't get messed up when an outage happens? This time I was lucky and had a 2nd drive that wasn't affected; last time I lost 3 drives and everything. I was basically lucky that only 1 of the 2 drives was destroyed. This doesn't happen with rclone, for example (I know both tools work differently). I was told upload verification solves the problem, but it didn't. It was pure luck that the 1 drive that had the data duplicated didn't fail...
  4. Since I had the same issue again today - thankfully it only hit 1 of the 2 drives I used for duplication - I deleted the RAW drive. 25 TB to upload again. This time I am going with a nested pool drive: I created 5 x 10 TB drives, added them to a pool drive (without duplication) and added this pool drive to the other, still working (old) pool drive (see the layout sketch after this post list). It has now started to duplicate everything. I hope this problem won't come back every 2 to 3 months; like this it is almost useless to use StableBit...
  5. So a couple of weeks back we had a downtime and all my drives were marked as RAW afterwards. Last night it happened again: one of my 2 cloud drives is marked as RAW... I have had upload verification on since the last incident and somehow it happened again... 25 TB of data is gone. Did anyone see a similar thing last night? It is getting annoying to rebuild the entire CloudDrive every x weeks because of downtimes... I was told upload verification should solve the problem; in fact I just wasted 25 TB of traffic and load on my SSD... I would like to hear if someone else is also affected.
  6. Thank you very much. So it would be possible to shrink my new 256 TB volumes to 50 TB (see the diskpart sketch after this post list). Then, when my 2 x 50 TB mirrors are full, I create 2 more on each account, add the two that are on the same account to a pool and the 2 remaining ones to another pool, and I can continue. That is how it works, right?
  7. I didn't actually think of that - smart! But isn't there potential for errors with the "pool-ception" in it? That could totally work out, now that I think of it.
  8. How would one set up a hierarchical pool structure? I am pretty new (a bit over a month using the software).
  9. Maybe I did indeed mix up drives and volumes. But it is DrivePool that takes care of duplication, and it uses volumes as pooled "drives", no? So if I put 2 volumes that are on one drive into DrivePool and set duplication to 2x, it can duplicate onto the same drive, which is nonsense in my case (and obviously also with non-cloud drives). Or am I wrong with that assumption?
  10. Your last answer is definitely a good point to start from. I understand: if Google's API said "Hey mate, I got that!", CloudDrive says "alright mate, gonna delete that on my end!", and then it wasn't actually there. The data in the cloud was only in the cloud - 3 times, though. The reason I have 3 different Gdrives is simple: one belongs to my company, one is my free student account and one is my own (all legit). But who knows, maybe I leave the company one day and they delete it? Or my university decides (for whatever reason) to stop working with Google? It was only 12 TB, but if I had used it longer than a few weeks, it could be 100, 300 or even more. I see your point about 60 TB volumes, but this comes back to what I explained above: what if the duplicate is only on my university account, which might be shut down suddenly? If I don't have an option to say "keep the file here (Uni account) and here (work account)", I don't want to use multiple volumes and take the risk that the copy ends up on the same account (the university one, which might be terminated). As for my failed drives: I have now deleted them manually. I don't want to block Google's space, and I got pretty pissed. Since it is replaceable data, I simply said "fuck it, I don't want to waste energy on that bullshit". Upload verification seems to be the solution. I don't care about bandwidth, I have full-duplex 1 Gbit at home, and the download limit on Gdrive is high enough anyway (it's the upload limit that is blocking me). I will activate it, so I can be sure something like this doesn't happen again. Thank you very much for your time and the knowledge you are sharing here with me, I really appreciate it! By the way, I also only found out about 2 outages: the one where I suffered a loss and another one. Let's hope it stays at 2.
  11. I am actually waiting for a reply. I am aware that another user on reddit has exactly the same issue. I doubt Google doesn't have redundancy and wouldn't say anything if there was data loss. I only started using CloudDrive around 5 weeks ago, so the lost data is only 10 to 12 TB. I liked the product but had trust issues, so I have only uploaded replaceable data so far - and I am glad I did. By making new volumes I mean that I created new drives to upload other data that I need to have online. 10 TB to recover over Gdrive? I assume that takes a long time (I have 1 Gbit fibre, though). I just want to make sure this does not happen again when Google has another outage... And as you say: all 3 drives had the same problem with chkdsk, not only the bigger volumes. Maybe I am thinking about this wrong, but to put it simply: if I make 2 volumes on 2 Gdrive accounts and I want 2x duplication (mirroring), how do I make sure that DrivePool doesn't put the same file twice on the same Gdrive? I mean, it sees 4 hard disks and thinks "only 2 copies needed, let's put them on A and B", but A and B are from the same Gdrive account? That is basically why I made big partitions. Also, I could shrink them, no? Edit: I got a reply from Christopher for my ticket: basically use CHKDSK, and if that doesn't work, use Recuva.
  12. Well, it was 2 x 256 TB volumes and one 16 TB volume. None of them were fixable... I opened a ticket, but since I need those volumes online, I have already had to create new ones...
  13. G'day. As we all know, last night there was an issue with Gdrive around the globe. Since then it has come back, but while all my drives are shown as normal in CloudDrive (I can see how much space is used, how much is free etc.), in Windows Explorer they all have a drive letter but are not shown with the bar indicating how much is used or free. When I click on them, Windows asks me to format them. In Disk Management they are shown as RAW. All 3 drives are on different Gdrive accounts. What are the odds that 3 drives are hit by such a problem at once? Can anyone help me? If all 3 are broken now, I have lost all my data (yes, it was duplicated 3 times across all those drives...). Can someone help me? Cheers! Edit: what I have tried so far: reattaching one drive - did not help. Detach and reattach - also did not help, still shown as RAW. chkdsk /f says "not enough space" but finds lots of errors. I will take any advice that could solve this. To me it looks like StableBit did something wrong. It started with the Gdrive outage, but the chunks and everything are still there in the cloud... Please, I am pretty close to a meltdown if everything is gone...
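
The following is a minimal recovery sketch for the RAW-volume situation in posts 2 and 13. It assumes the affected CloudDrive volume is still attached and mounted under a drive letter; E: is only a placeholder, and chkdsk may refuse to run at all on a volume it already detects as RAW:

    rem Read-only scan first: reports file system errors without changing anything.
    chkdsk E:

    rem Attempt an in-place repair of the NTFS metadata.
    chkdsk E: /f

If chkdsk cannot repair the file system, a file-level recovery tool such as Recuva (the tool suggested in the support ticket quoted in post 11) can be pointed at the same drive letter instead.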
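Below is a rough layout sketch of the hierarchical (nested) pool setup discussed in posts 4 and 7-9. It assumes two Google accounts with two CloudDrive volumes each; all names are placeholders. The point is that duplication is enabled only on the top-level pool, so each of the two copies has to land on a different sub-pool and therefore on a different account:

    Top pool (2x duplication)
      +- Sub-pool A (no duplication)   contains only volumes from account A
      |    +- CloudDrive volume A1
      |    +- CloudDrive volume A2
      +- Sub-pool B (no duplication)   contains only volumes from account B
           +- CloudDrive volume B1
           +- CloudDrive volume B2

Because the top pool only sees Sub-pool A and Sub-pool B as its members, 2x duplication can never place both copies of a file on volumes of the same account, which is exactly the failure mode described in post 9.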
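Finally, a minimal sketch of shrinking an oversized NTFS volume from within Windows, relevant to the 256 TB to 50 TB question in posts 6 and 11. The volume number and shrink amount are placeholders, and how far NTFS actually lets a volume shrink depends on where data already sits on it; the freed space remains as unallocated space on the same CloudDrive drive, where further volumes can be created later:

    diskpart
    rem Note the number of the volume to shrink.
    list volume
    rem Placeholder volume number taken from the listing above.
    select volume 3
    rem Show the maximum amount diskpart can reclaim.
    shrink querymax
    rem Shrink by this many MB (about 206 TB, i.e. 256 TB down to roughly 50 TB).
    shrink desired=216006656
    exit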