Everything posted by srcrist

  1. Chkdsk can never do more harm than good. If it recovered your files to a Lost.dir or the data is corrupted, then the data was unrecoverable. Chkdsk restores your file system to stability, even at the cost of discarding corrupt data. Nothing can magically restore corrupt data to an uncorrupt state. The alternative, not using chkdsk, just means you would continue to corrupt additional data by working with an unhealthy volume. Chkdsk may restore files depending on what is wrong with the volume, but it's never guaranteed. No tool can work miracles. If the volume is damaged enough, nothing can repair it from RAW to NTFS. You'll have to use file recovery or start over.
  2. This might actually be on the provider's side. I once deleted about 100TB from Google Drive and it took months for the data to actually be removed from Google's servers. It didn't matter, since I didn't have a limit, but you might be running into a similar situation here. Contact your administrator and see if there is something they can do.
  3. Those are generally network errors, not I/O errors in the traditional sense. Something is probably interfering with your connection to Google's servers. You'll need to troubleshoot the network connection. Logs might help to pin down exactly what is happening.
  4. That's a good question, and one probably better asked in the DrivePool forum, because it's really a DrivePool question. I'm not sure if there is any way to tell DrivePool to prefer reads off of a specific drive. I know that it's smart enough to prefer local drives over cloud drives if the same data exists on both, but I'm not aware, off the top of my head, of a way to tell it to prefer a specific drive that falls within the same tier of the built-in hierarchy. That doesn't mean it doesn't exist, though. One work-around that I can think of would be to simply point your applications at the hidden folder on the drive, rather than pointing them at the pool itself. That would work.
  5. Sure thing. And also don't be afraid to simply test the DNS servers provided by your ISP as well. They're likely to be the most local servers for you, and the most likely to provide you with efficient routing to Google--despite the other flaws with using ISP-provided DNS. No, I think that's fine. I used DrivePool to consolidate when I recognized this problem myself. You can always click the little arrows next to the status bar in the DrivePool UI to give the transfer a higher priority, if you're doing other things on the drives. That may help speed up the process somewhat. Just make sure that they aren't all caching on the same physical drive. Move the other two to another cache drive, even if you're only using them for duplication. That's fine. Any time.
  6. So, to be clear, there isn't anything that CloudDrive can do that should ever really cause pixelation. That just isn't how digital video works. Buffering, maybe. But not pixelation. That would be a Plex thing, and a result of how Plex is encoding or decoding the video. The drive throughput is either going to work or not. It's going to be fast enough to keep up, or not. It won't pixelate the video, it just won't work. I suppose Plex, as a client, might display pixelation if it isn't able to get data fast enough, as a sort of stop-gap to keep the stream going--so that might be why you're seeing that.

     The settings I gave you were the ones I use, and am able to stream 4K video with at original quality. So that's effectively all the settings we can tweak within CloudDrive--at least in the short term. There are some tweaks we might be able to make to optimize for you later, but those should be working and they're a good starting point for your connection speed. At this point, it's probably best to start looking at the network, if you've ruled out the possibility that there is some issue with the file. Play with your DNS servers and see if you're being sent to a non-optimal Google server, and check your network settings across the board. You should easily be able to hit 400mbps or so with the CloudDrive settings I gave you. If you aren't, there is something wrong.

     As far as the maximum chunk size goes, there isn't any way to change that without recreating the drive. Though you may want to consider that anyway, as you cannot use chkdsk on volumes of 100TB, and it's a lot less optimal to use 3 separate CloudDrive drives than 1 drive partitioned into multiple volumes. In fact, if all three of those drives are all caching from the same SSD, that might be the cause of all of your issues here, as well. The cache can be very disk usage heavy, and I don't generally recommend any more than a single virtual drive caching to a single physical drive. I had problems with that myself.

     Note that a single 1PB CloudDrive drive can be divided up into many volumes, and all can share a single cache. It's probably better to make these changes now, while there is still relatively little on the drives, than later, when it can take months. Each volume should be smaller than 60TB to work with Volume Shadow Copy and chkdsk.
  7. I mostly meant the chunk size, but drive structure would be things like chunk size, cluster size, sector size, file system type. The things you chose when you created the drive. But really only the chunk size is relevant here.
  8. Some of this may seem counter-intuitive, but try this: Drop your download threads to ten, drop your prefetch window to 150-175MB, raise your prefetch trigger to 15MB, and drop the prefetch time window to 10 seconds. Try that, test it, and report back. Also, what is your drive structure? Most importantly, what is the chunk size that you specified at drive creation? Also, to rule out one more factor, have you tried playing the files off of the CloudDrive with a local media player like VLC?
  9. That simply isn't true. Are you sure that you aren't running into some sort of I/O issue? CloudDrive will upload without restriction as soon as your upload threshold is met in the performance settings. Leave "Background I/O" enabled to ensure that writes are prioritized over reads in Windows' I/O and see if that fixes your problem. Or try disabling it if it's already enabled and see if other processes are simply getting in the way. I know this, btw, because I transferred 90TB from one drive to another, and my cache was full for months. So I know from experience that the cache being full does not throttle upstream performance.
  10. CloudDrive has vastly higher overhead than many other transfer protocols like FTP and Usenet. 700mbps is probably about right with respect to a maximum theoretical transfer rate on a 1gbps connection. It isn't built for raw transfer throughput, and it adds additional overhead on top of the protocols being used to transfer the data. I think what is required here is an adjustment of expectations, more than an adjustment of settings in the application. Remember that CloudDrive is also using whatever protocols your provider requires to transfer its data, so, if you were using FTP, for example, you need to account for the standard FTP overhead AND the additional overhead above that amount that CloudDrive requires as well. If transferring files from one location to another is the primary goal, you'd be better off setting aside the accessory benefits of CloudDrive, and simply completing that transfer using a dedicated file transfer protocol. It will effectively always be faster. Remember, also, that overhead is additive per transfer, so CloudDrive's mechanism of chunking your data means that you add X overhead per chunk rather than X overhead per file--as you would if you were using, say, FTP. You simply will not see transfer rates of 90-95% of your maximum throughput, and that isn't the goal of this particular application.
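      To make the per-chunk overhead point concrete, here is a rough Python sketch (every constant is an assumption chosen for illustration, not a figure from CloudDrive itself): model a transfer as payload time at line rate plus a fixed cost per request, paid once per file for a plain file-transfer protocol versus once per chunk for a chunked store.

          # Toy model: payload time at line rate plus a fixed per-request cost.
          # Every constant below is an assumption for illustration only.
          LINK_MBPS = 1000      # assumed 1gbps connection
          FILE_MB = 10_000      # assumed 10GB file
          CHUNK_MB = 20         # assumed 20MB chunks
          OVERHEAD_S = 0.1      # assumed fixed cost per request (handshake, API call, etc.)

          def transfer_seconds(total_mb, requests):
              payload_s = total_mb * 8 / LINK_MBPS
              return payload_s + requests * OVERHEAD_S

          per_file = transfer_seconds(FILE_MB, 1)                     # one request per file (e.g. FTP)
          per_chunk = transfer_seconds(FILE_MB, FILE_MB // CHUNK_MB)  # one request per 20MB chunk

          for label, seconds in (("per-file", per_file), ("per-chunk", per_chunk)):
              print(f"{label}: {seconds:.0f}s, effective {FILE_MB * 8 / seconds:.0f}mbps")

      With these made-up numbers the chunked case lands in the 600-700mbps range on a 1gbps line, which is the general point: the overhead is paid per chunk rather than per file, so raw transfer throughput sits below line rate by design.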
  11. You'll need to use the CreateDrive_AllowCacheOnDrivePool setting in the advanced settings to enable this functionality. See this wiki page for more information. The cache being full will not limit your upload, only your copy speed. It will throttle transfers to the speed of your upstream bandwidth, so it should make effectively no difference, aside from the fact that you won't be able to copy new data into the drive faster. That data will still upload at the same rate either way.
  12. It's possible, but provides no benefit over simply using the SSD as the cache drive. The cache is stored as a single file, and will not be split among multiple drives in your pool. Just use the SSD as your cache drive, and the cached content will always be at SSD speeds.
  13. That isn't a question with an objective, universal answer. The benefits to different chunk sizes vary depending on what you want to use your drive for. A lot of people use their CloudDrive drives to store large media libraries. So the maximum chunk size for Google, which is 20MB, allows for the highest capacity drive, and the highest throughput for streaming media off of the drive. 10MB chunks are *also* probably fine for this purpose, but the drive cannot be as large (though it can still be huge), and the theoretical maximum downstream speed will be lower. You cannot, in any case, change the chunk size of a drive once it has been created. That's one of the settings that has to be chosen upon drive creation, and changing it requires a fundamental change in the structure of the drive as it is stored on the provider, so it can't be changed later.
  14. You can actually check this, in the UI, by mousing over the drive size at the top under the provider logo. If you want to.
  15. This is incorrect. Adjusting the number of threads in CloudDrive has no appreciable impact on CPU usage. Threads, in this case, simply refers to the number of simultaneous upload and download requests that the application will let itself make at any given time. CloudDrive has limits on the number of threads that one can set in the application, so it isn't possible to test whether or not thousands of threads may eventually lead to a degradation of CPU performance, but, under the limitations that exist today, you will hit the API limitations of the provider long before any impact on CPU usage is worthy of consideration. In short: Do not adjust CloudDrive's threads based on CPU related concerns, and there is no relationship between "threads" as related to processor scheduling and "threads" as they are used in this context--other than the name.

      As far as I know, this is also incorrect. A minimum download size larger than the chunk size will still download multiple chunks of contiguous data until the specified threshold is met, again, as far as I know. There is nothing conclusive in the documentation about exactly how this functions, but I have seen Christopher recommend settings higher than 20MB for Google Drive, such as he did in this case and this case, for specific situations. That would lead me to believe that higher settings do, in fact, have an impact beyond a single chunk. This is, of course, with the caveat that, as explained in the user manual, CloudDrive can still request less data than the "minimum" if the system is also requesting less (total) data. In any case, it is true, either way, that a larger minimum download will reduce the responsiveness of the drive, and that speeds in excess of 600-700mbps can be achieved with a minimum download of 10-20MB using the settings I recommended above. So if the problem is responsiveness, dropping it may help.

      The base settings that I would recommend anyone try if they are having performance issues are the following, and then play with them from there:

      • 5 Upload Threads
      • 10 Download Threads
      • Upload Throttling enabled at approx 70mbps (to stay within Google's 750GB/day limitation)
      • Upload Threshold: 1MB/5 Mins
      • Minimum Download: 10-20MB
      • Disable Background IO if you'll be streaming media so that downstream performance is not impacted by upstream requests.

      If that works, you're great, and, if not, then you come back and people can try to help. But these are settings that have worked for a lot of people here over the last several years.
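      The ~70mbps throttling figure above follows directly from the 750GB/day cap. A quick Python check of the arithmetic (the 750GB/day limit is the only input; the unit conversion assumes decimal gigabytes):

          # Convert a 750GB/day upload cap into a sustained upload rate.
          GB_PER_DAY = 750
          SECONDS_PER_DAY = 24 * 60 * 60

          mbps = GB_PER_DAY * 8000 / SECONDS_PER_DAY  # 1GB = 8000 megabits (decimal units)
          print(f"{mbps:.1f} mbps sustained")         # ~69.4 mbps, hence throttling at ~70mbps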
  16. The question is a little bit confusing, but if you're asking if you can simply use DrivePool to duplicate at the file system level, even if the data is encrypted in the cloud by CloudDrive, then yes. That's fine. What Christopher is talking about above is duplicating the raw data on your provider. Don't do that for any reason, unless you really understand the longer term implications. The same file *will* be encrypted and stored differently on both drives.
  17. The file corruption was almost certainly because there was a data rollback on the provider's side, if you are using Google, and not a result of anything you did at all.
  18. You can, but if you're going to be using it for the same purpose, I would suggest enlarging the drive and adding a second volume to the same drive instead. This makes cache and performance management easier, and it's all the same to CloudDrive. It doesn't care if the information is on two volumes or twenty.
  19. The threads in CloudDrive are not CPU threads, and are not CPU limited. They are just simultaneous upload and download requests. They are limited by frequency and your provider's API limitations. Getting the correct value, for you, will be a matter of experimentation. I suggest starting with 10 download threads and 5 upload, and adjusting from there. You should *never* need more than a handful of upload threads, in any case, unless your network is misconfigured. Even 2 or 3 upload threads are capable of hitting the 70-75mbps throughput that Google allows--provided that your connection speed is capable of doing so. See the manual page here for more information. This is the exact opposite of what you want. That's why. The larger your minimum download, the less responsive your drive will be. A 100MB minimum download means that even if an application only requests 1MB of data, CloudDrive is still downloading 100MB. There is effectively no reason for your minimum download to ever be larger than your chunk size, for most use cases. Raising your minimum is actually working against you here. Go with my above recommendations for thread count, and drop the minimum download to somewhere between 10 and 20MB. See how that performs. We can make adjustments to your prefetcher to accommodate larger data requests over and above that amount. EDIT: Also disable Background IO on the drive in the performance settings if read response is important.
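      A tiny Python sketch of that read-amplification point (simplified; as noted a couple of posts up, CloudDrive can sometimes request less than the minimum, but this is the basic effect):

          # The drive fetches at least the minimum download size, even when the
          # application asks for far less.
          def downloaded_mb(request_mb, minimum_mb):
              return max(request_mb, minimum_mb)

          for minimum in (10, 20, 100):
              pulled = downloaded_mb(request_mb=1, minimum_mb=minimum)
              print(f"1MB request with a {minimum}MB minimum -> {pulled}MB downloaded "
                    f"({pulled}x amplification)")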
  20. EDIT: Never mind. I understand what you're looking at now and where you're getting those numbers.
  21. Sorta. But I think you might have it backwards. You wouldn't back up the data that sits on the provider to other providers, because the drive structure changes somewhat by provider. You would, instead, mount multiple providers with CloudDrive and use a tool like DrivePool to mirror the volumes to one another.
  22. A 2500-2800ms response time to Google is almost *certainly* a network issue on your part, and not one that needs to be considered as a general concern. It should not take that long to send an internet packet around the entire planet, and, if you're seeing that sort of latency, you probably have bad hardware between you and your destination. A round trip to a satellite, which is the lengthiest hop in existence, is only 500ms plus the terrestrial hops. So unless you are, for some reason, beaming data to space and back several times, 2500-2800ms is simply a broken connection. I think it's important for you to consider just how unrepresentative that is, of the typical internet user, when considering whether or not someone is speaking to your issues specifically, or when requesting that changes be made to accommodate your situation. You're talking about latency that exceeds 5x even the typical satellite internet connection. The number of people dealing with such a situation is a fraction of a fraction of a percent. Regardless of your situation, though, Google themselves have a hard limit of 750GB/day. A single thread, for most users, can hit that limit. In any case, my above response is perfectly compatible with slower response times. It's very easy to hit the maximum 70-75mbps. No matter how many threads are taken up by reads, you can always add additional threads to make up for the read threads. If there are two read threads outstanding, you add two more for uploads. If there are 4 read threads outstanding, you add 4. If your responses are that slow, you won't get throttled no matter how many threads you add--which means that adding threads can always compensate for your connection delays. If you're still experiencing throughput issues, the problem is something else.
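      As a rough Python model of why extra threads can compensate for high latency (the chunk size, per-chunk transfer time, and latencies are all assumptions chosen for illustration): each thread spends most of its time waiting on the round trip, so aggregate throughput scales with the number of outstanding requests until bandwidth or provider limits get in the way.

          import math

          CHUNK_MB = 20     # assumed chunk size
          TRANSFER_S = 0.5  # assumed on-the-wire time per chunk once a request is answered

          def threads_for(target_mbps, rtt_s):
              per_thread_mbps = CHUNK_MB * 8 / (rtt_s + TRANSFER_S)  # one thread's sustainable rate
              return math.ceil(target_mbps / per_thread_mbps)

          for rtt_ms in (50, 500, 2800):
              print(f"{rtt_ms}ms round trip -> ~{threads_for(70, rtt_ms / 1000)} thread(s) for ~70mbps")

      Even with the extreme 2800ms figure, this toy model only needs a thread or two more to hold the same ~70mbps, which is the "just add threads" argument in numbers.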
  23. No, you're right. It will use the upload thread count to download the data, so that will take up some number of slots for a time before those threads will be available for a new upload chunk. The mistake, though, is translating that to a loss of throughput, which isn't (or at least does not necessarily need to be) true. That is: you can achieve the same upload speeds with more threads, if you need to. 5 upload threads should be more than enough for most users to still max out the 70-75mbps average that Google's daily upload limit allows, while also reading back the chunks in real-time. What *is* true is that if you theoretically only had one upload thread, it would have to use the same thread to read the data back each time, and that would slow things down somewhat. But, even for residential connections, that wouldn't generally translate to a 50% reduction in throughput, because your downstream capacity is generally a lot faster than your upstream on residential connections. So it would take a lot less time to *read* each chunk than to upload it to the provider. If your connection is symmetrical, then the question simply becomes about your overall throughput. Any connection over 100mbps or so should not generally notice a difference aside from downstream congestion as a result of the verification process. Bottom line: If you *do* suffer a performance drop with upload verification enabled, and you're working with a high-bandwidth connection (100mbps symmetrical or more), you just need to add a few upload threads to accommodate the increased workload. As long as you can hit 70ish mbps upload, though, you're capable of maxing out Google's limitations regardless. So all of this only sort of matters in a theoretical sense.
  24. So chkdsk is not affected by the size of the *drive*. Only the size of the *volume* (aka, generally, partition). You can have multiple volumes on the SAME CloudDrive drive. So you can still expand the size of the drive and create a second VOLUME smaller than 55TB or so, and chkdsk will work with it just fine. I would, in fact, recommend this over a second drive entirely, so that you don't have multiple caches. Aside from this note, DrivePool will operate identically regardless of whether or not you're using a volume on a local disk, a CloudDrive disk, or multiple CloudDrive volumes on the same disk. DrivePool does not operate at the block level, so it doesn't concern itself with the chunks. That will all still be handled at the CloudDrive level. You can customize the DrivePool settings to configure the file placement however you want. There is no impact on performance, as DrivePool simply forwards requests to the underlying drives--CloudDrive or otherwise. Chkdsk is a file system tool. So it operates on volumes, not drives. But, to fix issues, you'd use chkdsk on the volumes that underlie your pool.
  25. To be clear: this is not necessarily true. It depends on the speed of your connection. All the upload verification means is that it will download every byte that it uploads before clearing the data from the local cache. If your downstream bandwidth is at least as large as your upstream bandwidth, it should not theoretically slow down your upload speeds at all. There will just be a delay before the data is removed from local storage while it reads the data back from the provider. CloudDrive can (and will) still move on to the next chunk upload while this process is completed.
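      A short Python sketch of that reasoning (the bandwidth figures are assumptions, and it assumes the read-back of one chunk overlaps with the upload of the next, which is what the post describes): verification only limits sustained upload speed if the read-back takes longer than the next chunk's upload.

          # Per 20MB chunk: upload it upstream, then read it back downstream to verify.
          CHUNK_MB = 20
          UPSTREAM_MBPS = 75      # assumed upload bandwidth (roughly Google's daily-limit rate)
          DOWNSTREAM_MBPS = 300   # assumed download bandwidth

          upload_s = CHUNK_MB * 8 / UPSTREAM_MBPS      # ~2.1s to upload one chunk
          readback_s = CHUNK_MB * 8 / DOWNSTREAM_MBPS  # ~0.5s to read it back

          if readback_s <= upload_s:
              print(f"verify ({readback_s:.1f}s) hides behind the next upload ({upload_s:.1f}s): no sustained loss")
          else:
              print(f"verify ({readback_s:.1f}s) outlasts the next upload ({upload_s:.1f}s): add upload threads")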