modplan

Members

  • Posts: 101
  • Joined
  • Last visited
  • Days Won: 3

Reputation Activity

  1. Like
    modplan got a reaction from Antoineki, Ginoliggime, and KiaraEvirm in Many files in Google Drive, something to be aware of
    Just a PSA post here. I recently passed 1.26 MILLION files in my Google Drive account. Only ~400,000 of those files were from CloudDrive, but I use lots of other apps that write lots of files to my account, and I thought I would pass on this info.
     
    1) The Google Drive for Windows client will crash, even if you only have one folder synced to your desktop. I have only a few docs folders actually synced to my desktop client, but the app insists on loading an index of every file on your account into memory before writing it to disk. When that index crosses 2.1GB of RAM, as it did for me at 1.26 million files, the app crashes (because Google Drive for Windows is still, stupidly, 32-bit). There is no workaround other than lowering the number of files on your account.
     
    2) The Google Drive API documentation warns of API breakdowns once you cross 1 million files on your account: query sorting can cease to function, and so on. Who knows which apps depend on API calls that could start to fail.
     
    I've spent the last 10 days running a Python script I wrote to delete unwanted/unneeded files, one by one. Ten days so far, and I probably have 10 more to go. I hope to be down to ~600,000 files total by the time I'm done.
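
    For anyone curious, here is a minimal sketch of that kind of cleanup, assuming the Drive v3 API via google-api-python-client. The query string and the `creds` object below are just placeholders to show the shape of it, not my exact script:

```python
# Rough sketch: list files matching a query and delete them one by one via the
# Google Drive v3 API (google-api-python-client). `creds` is assumed to be an
# already-authorized OAuth credential; the query below is only an example.
from googleapiclient.discovery import build

def delete_matching(creds, query="name contains 'old-backup' and trashed = false"):
    service = build("drive", "v3", credentials=creds)
    deleted = 0
    while True:
        # Re-query each pass instead of paginating, since the result set
        # shrinks as files are deleted.
        resp = service.files().list(
            q=query,
            fields="files(id, name)",
            pageSize=1000,
        ).execute()
        files = resp.get("files", [])
        if not files:
            break
        for f in files:
            service.files().delete(fileId=f["id"]).execute()  # one API call per file
            deleted += 1
    return deleted
```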
     
    Hope this helps someone in the future.
  2. Like
    modplan got a reaction from Ginoliggime in Full Cache Re-upload after crash?   
    Sorry if this has been covered; a quick search did not find what I was looking for.
     
    I have a CloudDrive that I send windows server backups to nightly. The full backup size is about 75 GB, but the nightly incremental change is only 6-7GB, easily uploadable in my backup window. 
     
    I have set the cache on this drive to 100GB to ensure the majority of the full-backup data stays in the cache, so that when Windows Server Backup compares blocks to determine the incremental changes, CloudDrive does not have to (slowly) download ~75GB of data for comparison every single night.
     
    This works very well.
     
    The problem comes when there is a power outage, crash, BSOD, etc. Even though the CloudDrive is fully current and "To Upload" is 0, when I bring the server back up and we get through drive recovery (which takes 5-8 hours for this 100GB), CloudDrive then moves ALL 100GB of the cache into "To Upload" and starts re-uploading all of that data.
     
    Why? I can't think of how this is necessary. In case a little data was written to the cache at the last minute before the unexpected reboot? If so, there is surely a better way of handling this than a 100GB re-upload, such as some sort of tag/database of new, not-yet-uploaded blocks. If a drive has a massive cache, a re-upload could take days or weeks!
     
    Thanks for any insight. I've gone through this process a couple of times using CloudDrive and it has been painful every time. I'd be happy even if it downloaded every cached block, compared it to the local cache block, and then only uploaded the ones that have changed.
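
    To illustrate what I mean, here is a rough sketch of that compare-before-reupload idea. This is not CloudDrive's actual code; `provider` is a made-up object with download/upload methods, used only to show the shape of the comparison:

```python
# Rough sketch of the compare-before-reupload idea. NOT CloudDrive's actual
# code: `provider` is a hypothetical object with download_chunk()/upload_chunk().
import hashlib

def resync_cache(provider, cached_chunks):
    """cached_chunks: dict mapping chunk_id -> locally cached chunk bytes."""
    for chunk_id, local_data in cached_chunks.items():
        remote_data = provider.download_chunk(chunk_id)
        # Only re-upload chunks whose contents actually differ.
        if hashlib.sha256(remote_data).digest() != hashlib.sha256(local_data).digest():
            provider.upload_chunk(chunk_id, local_data)
```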
  3. Like
    modplan got a reaction from Christopher (Drashna) in Plex Recommended Settings   
    You can get around this by importing the content locally and then moving it to your CloudDrive drive.
     
    For example, I have two folders in my 'TV' library for Plex: X:\Video\TV and E:\Video\TV. X: is local, E: is CloudDrive.
     
    All content is originally added to X:, where it is indexed. Then, once I've watched a whole season or so and want to 'archive' a show, I move it to E:.
     
    Plex will not re-index, since it notices that it is just the same file moved to a new location; it updates its pointer to the file instantly, and everything works perfectly.
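
    If you want to script that archive step, something as simple as this works. The paths are the ones from my example; a cross-drive move is just a copy followed by a delete:

```python
# Small sketch of the archive step: move a finished show from the local
# library folder (X:) to the CloudDrive folder (E:). Paths are illustrative.
import shutil
from pathlib import Path

def archive_show(show_name, src_root=r"X:\Video\TV", dst_root=r"E:\Video\TV"):
    src = Path(src_root) / show_name
    dst = Path(dst_root) / show_name
    Path(dst_root).mkdir(parents=True, exist_ok=True)
    shutil.move(str(src), str(dst))  # across drives this copies, then deletes the source
```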
  4. Like
    modplan got a reaction from Christopher (Drashna) in Backing up large amounts causes server to "freeze"   
    Windows uses RAM as a cache when copying files between drives. My guess is that your large transfer eventually saturates your computer's RAM before the cached data can be destaged to the cache drive. Since your computer is then starving for RAM, other tasks begin to fail or take forever.
     
    My server has 16GB of RAM and I saw something similar. I would suggest using UltraCopier to throttle the copy. I set the throttle to the same as my upload speed and had no more issues copying a 4TB folder to my CloudDrive (other than it taking a few weeks).
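
    If you'd rather not install a third-party tool, the same throttling idea can be sketched in a few lines of Python. This is only an illustration of the technique, not what UltraCopier actually does internally:

```python
# Rough sketch of a throttled copy: copy in chunks and sleep so the average
# rate stays at or below max_bytes_per_sec (set this to your upload speed).
import time

def throttled_copy(src_path, dst_path, max_bytes_per_sec=5 * 1024 * 1024, chunk=1024 * 1024):
    start = time.monotonic()
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            data = src.read(chunk)
            if not data:
                break
            dst.write(data)
            copied += len(data)
            # Sleep until elapsed time catches up with the target rate.
            expected = copied / max_bytes_per_sec
            elapsed = time.monotonic() - start
            if expected > elapsed:
                time.sleep(expected - elapsed)
```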
  5. Like
    modplan reacted to Christopher (Drashna) in Long term use. Will things change much?   
    Sorry, a bit ranty here.... 
     
    I wouldn't say that he ruined anything. In fact, just the opposite. Microsoft advertised "unlimited", and he *used* that. Just because Microsoft (and many other providers) bet on most users not even using 500GB of data doesn't mean he abused it.
     
    If you're going to advertise it as such, people will use it as such. Just because *you* (in this case, Microsoft) didn't mean it, doesn't mean that some guy spoiled anything. 
  6. Like
    modplan got a reaction from Christopher (Drashna) in CloudDrive and Deduplication?   
    While I would love this, since CloudDrive seems to be largely built around 1MB+ chunks, I really do not think dedupe would be very effective if that is the level at which it would have to be done. But maybe CloudDrive's architecture allows for some sort of sub-chunk dedupe?
     
    Most dedupe on enterprise arrays is done at the 4k-8k block-size level; once you go much past that, it becomes less and less likely that blocks will match exactly, and dedupe loses its effectiveness.
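
    For anyone who hasn't seen block-level dedupe before, here is a toy sketch at 4KB granularity. Real arrays use fingerprint indexes and are far more sophisticated; this only shows why small blocks dedupe better than 1MB+ chunks:

```python
# Toy illustration of block-level dedupe at 4KB granularity: store each unique
# block once, keyed by its hash, and keep a per-file list of block hashes.
import hashlib

BLOCK_SIZE = 4096

def dedupe_store(data, store):
    """store: dict of hash -> block bytes. Returns the file's block hash list."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # identical blocks are stored only once
        recipe.append(digest)
    return recipe
```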
  7. Like
    modplan got a reaction from Christopher (Drashna) in the local cache and large data   
    I think it will slow the copy to practically nothing, but I'm not sure whether that will cause the backup app or Windows to time out the copy. I've never come close to filling the drive my cache is on.
     
    Here is a technical deep dive on the cache architecture: http://community.covecube.com/index.php?/topic/1610-how-the-stablebit-clouddrive-cache-works/
  8. Like
    modplan reacted to Christopher (Drashna) in Amazon Cloud Drive - Why is it not supported?   
    The problem is that we are, and have been. The problem is server stability issues, as well as pulling teeth to get the actual limits we should be using. They're not documenting these at all, so we're doing this blindly.
     
    Also, because we are a very small company, we don't really have the resources to go around. So our top priority pretty much has to be issues and improvements that benefit all (or many) of the providers, rather than focusing on a single provider.
     
    To be blunt, if Amazon Cloud Drive documented their API limits and guidelines, documented the max bandwidth per client allowed, or .... well, actually created a service that PROPERLY implemented throttling (both for API calls per time period, and for connection speed, aka MB/s), then there would be a much smaller issue here, and it would be much easier to implement.
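
    (For context, throttling "API calls per time period" just means something like a token bucket. The sketch below is purely illustrative, with made-up numbers; it is not CloudDrive code.)

```python
# Bare-bones token-bucket rate limiter: allow at most `rate` API calls per
# second, with short bursts up to `capacity`. Numbers are illustrative only;
# call acquire() before each API request.
import time

class TokenBucket:
    def __init__(self, rate=5.0, capacity=10.0):
        self.rate = rate          # tokens (calls) refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self):
        # Refill according to elapsed time, then block until a token is free.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1
        self.tokens -= 1
```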
     
    The reason we have "temporarily abandoned" Amazon Cloud Drive is that we were stuck in a very bad development cycle that Amazon mandates. Specifically:
      • Release a build and send it to Amazon
      • Amazon analyzes its usage
      • Amazon tells us that we're making too many API calls, using too many threads, uploading too fast, or making more API calls than expected, and tells us we need to change that
      • Fix it to fit within Amazon's black-box limits
      • Repeat
    That's what we were doing, and it was affecting the development status of the entire product. Because from our standpoint, it really looks like Amazon doesn't have their shit together. At least in regards to their Cloud Drive service.
     
     
    And yes, I know it's very popular, because it's about the cheapest unlimited provider. In fact, I do have a paid subscription for it, so I definitely have a vested interest in it working properly.
     
    Even *if* we do get approved for production status (again), there will be a hard limit on the bandwidth (otherwise Amazon *will* demote us again, most likely), and unless we are able to get it working reliably and stably, it will be stuck as an experimental provider. 
     
    And we do plan on pursuing this, but only once we have a stable build and ... well, more time to devote solely to it.
  9. Like
    modplan got a reaction from Christopher (Drashna) in Setup Same CloudDrive on multiple Computers?   
    Yes, it sounds like the OP wants to share with his parents, who live farther away. However, this can easily be done by sharing the drive like you say and then using a VPN service like Hamachi.