
Jellepepe


Reputation Activity

  1. Thanks
    Jellepepe reacted to Chupa in Problems with Google Drive in Hetzner server   
    Hello, I wanted to report that the problem is solved: simply use Cloudflare's 1.1.1.1 as the primary DNS server and Google's 8.8.8.8 as the secondary. This problem is limited to Hetzner servers. I have reported the issue to their support and they are looking into fixing it.
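    For reference, on a server that manages /etc/resolv.conf directly (an assumption; setups using netplan or systemd-resolved configure DNS elsewhere), the fix amounts to:

```
# /etc/resolv.conf
nameserver 1.1.1.1   # Cloudflare, primary
nameserver 8.8.8.8   # Google, secondary
```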
  2. Like
    Jellepepe got a reaction from VapechiK in [HOWTO] dynamic disks and/or mounted VHD(X)   
    Hi everyone,
    After dreading that I would have to add an additional drive to my server to be able to mount my CloudDrives (which would cost me a lot, as the server is not local to me), I tried a lot of possible workarounds.
    While I understand the reasoning for not allowing certain drives to be pooled, and I do not recommend doing this unless you understand the risks, I am happy to report that it definitely works, with no issues in my testing.
    So, as a TL;DR:
    You can create a VHD(X) file on a dynamic volume, mount it, then create a storage space on that mounted VHD(X), which can then be pooled and/or used as CloudDrive cache.
    The performance decrease is quite minor, as you can see from some quick testing on an old 840 Evo drive:

    [LEFT: local disk mounted directly, RIGHT: mounted VHDX -> Storage Space -> Drivepool]
    Step-By-Step:
    Doing this is easy, and it only takes a few steps:
      • Create the VHD(X) file
      • Mount the VHD(X) file as a drive
      • Create a storage space with the mounted VHD(X) drive(s)
      • Use the storage space as CloudDrive cache or add it to a DrivePool pool
    1. First we create the VHD(X) file, if you do not have one yet, which we can do from Disk Management: Action -> Create VHD
    You are then presented with the following prompt, in which you can choose the size and location of the VHD(X).

    For optimal performance it is recommended to choose 'VHDX' and 'Fixed size'.
    2. Next we need to attach the VHD file if Windows hasn't done so already, which we can also do from Disk Management: Action -> Attach VHD
    We don't need to initialize or format the drive; this will be undone in the next step anyway.
    3. Almost done! We now need to create a storage space, as the mounted VHD(X) still cannot be used by CloudDrive/DrivePool directly.
    To do this, open the Storage Spaces configuration screen (search for 'storage spaces') and choose 'Create a new pool and storage space'.
    In the following screen, select the mounted VHD drive ("Attached via VHD"), which should show up under "Unformatted drives". Be careful not to select a different drive, as this will wipe all data on it!
    In the wizard we can now select the size, resiliency, and format settings; in this example we are using 'Simple', but you can choose other settings if you're using multiple VHDs.

    4. If everything went well, we should now be able to detect the new storage space drive in both clouddrive and drivepool, ready to be used!
    And we are done!
    Interestingly, because we created a storage space, we no longer need to manually mount the VHD drives on reboot, so the system keeps working.
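    For reference, the same steps can also be scripted in PowerShell. This is only a sketch, assuming the Hyper-V and Storage modules are available and run as administrator; the paths and friendly names are placeholders, and you should verify which disk gets selected before pooling, as pooling wipes it:

```powershell
# Sketch only -- verify the selected disk before pooling, as pooling wipes it.
New-VHD -Path 'D:\cache.vhdx' -SizeBytes 100GB -Fixed   # 1. create a fixed-size VHDX
Mount-VHD -Path 'D:\cache.vhdx'                          # 2. attach it (no init/format needed)

# 3. pool the attached VHD into a storage space
$pd = Get-PhysicalDisk -CanPool $true   # inspect this list before continuing!
New-StoragePool -FriendlyName 'VhdPool' -PhysicalDisks $pd `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName
New-VirtualDisk -StoragePoolFriendlyName 'VhdPool' -FriendlyName 'VhdSpace' `
    -ResiliencySettingName Simple -UseMaximumSize
# 4. initialize and format the new virtual disk, then use it as
#    CloudDrive cache or add it to a DrivePool pool
```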
  3. Thanks
    Jellepepe got a reaction from JulesTop in Google Drive: The limit for this folder's number of children (files and folders) has been exceeded   
    Just to clarify for everyone here, since there seems to be a lot of uncertainty:
      • The issue (thus far) is only apparent when using your own API key.
      • However, we have confirmed that the CloudDrive keys are the exception, rather than the other way around; for instance, the web client does have the same limitation.
      • Previous versions (also) do not conform with this limit (and go WELL over the 500k limit).
      • Yes, there has definitely been a change on Google's side that implemented this new limitation.
      • Although there may be issues with the current beta (it is a beta after all), it is still important to convert your drives sooner rather than later. Here's why:
        ◦ Currently all access (API or web/Google's own app) respects the new limits, except for the CloudDrive keys (probably because they are verified).
        ◦ Since there was no announcement from Google that this change was happening, we can expect no announcement if (when) the CloudDrive key also stops working.
        ◦ It may or may not be possible to (easily) convert existing drives if writing is completely impossible (if no API keys work).
      • If you don't have issues now, you don't have to upgrade; instead, wait for a proper release, but do be aware there is a certain risk associated.
    I hope this helps clear up some of the confusion!
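    To give a concrete sense of what the new cap implies, here is a hypothetical sketch of how any uploader could shard files across numbered subfolders so that no single folder ever exceeds Google's ~500,000-children limit. The function name and folder naming scheme are illustrative only, not CloudDrive's actual conversion logic:

```python
# Illustrative only: assign uploads to numbered subfolders so that no
# folder exceeds Google's limit on children per folder (~500,000).
MAX_CHILDREN = 500_000

def shard_path(file_index: int, max_children: int = MAX_CHILDREN) -> str:
    """Return a subfolder name for the file with the given 0-based index."""
    shard = file_index // max_children
    return f"shard-{shard:04d}"

# The 500,000th file (index 500_000) rolls over into the next shard,
# so every shard folder holds at most MAX_CHILDREN children.
assert shard_path(0) == "shard-0000"
assert shard_path(499_999) == "shard-0000"
assert shard_path(500_000) == "shard-0001"
```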
  4. Like
    Jellepepe reacted to srcrist in Panic Google drive user where is the assistance from Covercube?   
    They are not comparable products. Both applications are more similar to the popular rClone solution for linux. They are file-based solutions that effectively act as frontends for Google's API. They do not support in-place modification of data. You must download and reupload an entire file just to change a single byte. They also do not have access to genuine file system data because they do not use a genuine drive image, they simply emulate one at some level. All of the above is why you do not need to create a drive beyond mounting your cloud storage with those applications. CloudDrive's solution and implementation is more similar to a virtual machine, wherein it stores an image of the disk on your storage space.
    None of this really has anything to do with this thread, but since it needs to be said (again):
    CloudDrive functions exactly as advertised, and it's certainly plenty secure. But it, like all cloud solutions, is vulnerable to modifications of data at the provider; security and reliability are two different things. And, in some cases, it is more vulnerable because some of that data on your provider is the file system data for the drive. Google's service disruptions back in March caused it to return revisions of the chunks containing the file system data that were stale (read: they had been updated since the revision that was returned). This probably happened because Google had to roll back some of their storage for one reason or another; we don't really know. This is completely undocumented behavior on Google's part. These pieces were cryptographically signed as authentic CloudDrive chunks, which means they passed CloudDrive's verifications, but they were old revisions of the chunks that corrupted the file system.
    This is not a problem that would be unique to CloudDrive, but it is a problem that CloudDrive is uniquely sensitive to. Those other applications you mentioned do not store file system data on your provider at all. It is entirely possible that Google reverted files from those applications during their outage, but it would not have resulted in a corrupt drive, it would simply have erased any changes made to those particular files since the stale revisions were uploaded. Since those applications are also not constantly accessing said data like CloudDrive is, it's entirely possible that some portion of the storage of their users is, in fact, corrupted, but nobody would even notice until they tried to access it. And, with 100TB or more, that could be a very long time--if ever. 
    Note that while some people, including myself, had volumes corrupted by Google's outage, none of the actual file data was lost any more than it would have been with another application. All of the data was accessible (and recoverable) with volume repair applications like TestDisk and Recuva. But it simply wasn't worth the effort to repair the volumes rather than just discard the data and rebuild, because it was expendable data. Genuinely irreplaceable data could have been recovered, though, so it isn't even really accurate to call it data loss.
    This is not a problem with a solution that can be implemented on the software side. At least not without throwing out CloudDrive's intended functionality wholesale and making it operate exactly like the dozen or so other Google API frontends that are already on the market, or storing an exact local mirror of all of your data on an array of physical drives. In which case, what's the point? It is, frankly, not a problem that we will hopefully ever have to deal with again, presuming Google has learned their own lessons from their service failure. But it's still a teachable lesson in the sense that any data stored on the provider is still at the mercy of the provider's functionality and there isn't anything to be done about that. So, your options are to either a) only store data that you can afford to lose or b) take steps to backup your data to account for losses at the provider. There isn't anything CloudDrive can do to account for that for you. They've taken some steps to add additional redundancy to the file system data and track chksum values in a local database to detect a provider that returns authentic but stale data, but there is no guarantee that either of those things will actually prevent corruption from a similar outage in the future, and nobody should operate based on the assumption that they will. 
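    The local-checksum idea mentioned above can be illustrated with a toy sketch. Because a rolled-back chunk is still cryptographically authentic, no signature check can catch it; only a comparison against locally recorded state can flag that the provider returned an older revision. This is an assumption-laden illustration, not CloudDrive's actual implementation:

```python
import hashlib

# Local record of the most recently *written* checksum per chunk id.
# A provider rollback returns an authentic but older chunk, which a
# signature check cannot catch -- only a comparison against local state.
local_checksums: dict[str, str] = {}

def record_upload(chunk_id: str, data: bytes) -> None:
    """Remember the checksum of the revision we just wrote."""
    local_checksums[chunk_id] = hashlib.sha256(data).hexdigest()

def is_stale(chunk_id: str, data: bytes) -> bool:
    """True if the provider returned a chunk older than what we last wrote."""
    expected = local_checksums.get(chunk_id)
    return expected is not None and hashlib.sha256(data).hexdigest() != expected

record_upload("chunk-1", b"new revision")
assert not is_stale("chunk-1", b"new revision")  # current data passes
assert is_stale("chunk-1", b"old revision")      # rolled-back data is flagged
```

    Note that this only detects staleness after the fact; as the post says, there is no guarantee such checks prevent corruption from a similar outage.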
    The size of the drive is certainly irrelevant to CloudDrive and its operation, but it seems to be relevant to the users who are devastated about their losses. If you choose to store 100+ TB of data that you consider to be irreplaceable on cloud storage, that is a poor decision. Not because of CloudDrive, but because that's a lot of ostensibly important data to trust to something that is fundamentally and unavoidably unreliable. Contrarily, if you can accept some level of risk in order to store hundreds of terabytes of expendable data at an extremely low cost, then this seems like a great way to do it. But it's up to each individual user to determine what functionality/risk tradeoff they're willing to accept for some arbitrary amount of data. If you want to mitigate volume corruption then you can do so with something like rClone, at a functionality cost. If you want the additional functionality, CloudDrive is here as well, at the cost of some degree of risk. But either way, your data will still be at the mercy of your provider--and neither you nor your application of choice have any control over that.
    If Google decided to pull all developer APIs tomorrow or shut down drive completely, like Amazon did a year or two ago, your data would be gone and you couldn't do anything about it. And that is a risk you will have to accept if you want cheap cloud storage. 
  5. Thanks
    Jellepepe got a reaction from vitpardio in Drive on Google dismounting randomly   
    I was away all weekend, so sorry for not posting. I'm not sure; I checked and I'm still running .951, so I doubt it's related.
    The issues seem to have gone away. I can see it struggle still during the general 7-10pm window, but it's not enough to force a dismount of the drive.
    This mostly leads me to believe it was some sort of issue at Google, but it does disappoint me that we never managed to figure out exactly what the cause was.
    I will get back to this if there are any more issues, and if you or anyone else is having a similar issue, please do so also.
  6. Like
    Jellepepe got a reaction from ntilegacy in Drive on Google dismounting randomly   
    I had the same at that time, but not enough to dismount; it's 22:48 for me now, so past the window in which it has dismounted on all previous days. 
    It would seem like today is the first day in over a month that I haven't had any dismounts. I REALLY hope this means the issue was at Google and is now resolved.
    I also switched to using the latest beta builds around when I posted this thread, and I noticed performance seemingly being much better. I assume something changed in the way I/O works?
  7. Like
    Jellepepe got a reaction from Christopher (Drashna) in Google drive unable to upload?   
    Thanks for responding. I seem to have fixed it by force-attaching the drive to a different PC (I was unable to detach it, too), then reinstalling CloudDrive and reattaching the drive to the old PC. I've turned it down to 6 upload threads and 20 download threads now, which still seems to fill up my bandwidth (33 Mbps up, 330 down).
    It seems to have somehow crashed during authorization, in such a way that I was unable to reauthorize it.
    If it stops working again I will definitely report back.