Covecube Inc.
Everything posted by srcrist

  1. Chkdsk can never do more harm than good. If it recovered your files to a Lost.dir or the data is corrupted, then the data was already unrecoverable. Chkdsk restores your file system to stability, even at the cost of discarding corrupt data. Nothing can magically restore corrupt data to an uncorrupt state. The alternative, not running chkdsk, just means you would continue to corrupt additional data by working with an unhealthy volume. Chkdsk may restore files depending on what is wrong with the volume, but that is never guaranteed. No tool can work miracles. If the volume is damaged enough, nothing can bring the data back.
  2. This might actually be on the provider's side. I once deleted about 100TB from Google Drive and it took months for the data to actually be removed from Google's servers. It didn't matter, since I didn't have a limit, but you might be running into a similar situation here. Contact your administrator and see if there is something they can do.
  3. Those are generally network errors, not I/O errors in the traditional sense. Something is probably interfering with your connection to Google's servers. You'll need to troubleshoot the network connection. Logs might help to pin down exactly what is happening.
  4. That's a good question, and one probably better asked in the DrivePool forum, because it's really a DrivePool question. I'm not sure if there is any way to tell DrivePool to prefer reads off of a specific drive. I know that it's smart enough to prefer local drives over cloud drives if the same data exists on both, but I'm not aware, off the top of my head, of a way to tell it to prefer a specific drive that falls within the same tier of the built-in hierarchy. That doesn't mean it doesn't exist, though. One work-around that I can think of would be to simply point your applications at the hidden PoolPart folder on the drive you want to read from, rather than at the pool itself.
  5. Sure thing. And also do not be scared to simply test the DNS servers provided by your ISP. They're likely to be the most local servers for you, and the most likely to provide you with efficient routing to Google--despite the other flaws with using ISP-provided DNS. No, I think that's fine. I used DrivePool to consolidate when I recognized this problem myself. You can always click the little arrows next to the status bar in the DrivePool UI to give the transfer a higher priority, if you're doing other things on the drives. That may help speed up the process somewhat.
  6. So, to be clear, there isn't anything that CloudDrive can do that should ever really cause pixelation. That just isn't how digital video works. Buffering, maybe. But not pixelation. That would be a Plex thing, and a result of how Plex is encoding or decoding the video. The drive throughput is either going to work or not. It's going to be fast enough to keep up, or not. It won't pixelate the video, it just won't work. I suppose Plex, as a client, might display pixelation if it isn't able to get data fast enough, as a sort of stop-gap to keep the stream going--so that might be why you're seeing pixelation.
  7. I mostly meant the chunk size, but drive structure would be things like chunk size, cluster size, sector size, file system type. The things you chose when you created the drive. But really only the chunk size is relevant here.
  8. Some of this may seem counter-intuitive, but try this: Drop your download threads to ten, drop your prefetch window to 150-175MB, raise your prefetch trigger to 15MB, and drop the time window to 10 seconds. Try that, test it, and report back. Also, what is your drive structure? Most importantly, what is the chunk size that you specified at drive creation? Also, to rule out one more factor, have you tried playing the files off of the CloudDrive with a local media player like VLC?
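To sanity-check prefetch settings like those against a particular stream, the relationship is simple arithmetic. The 40 mbps bitrate below is just an illustrative figure for a high-bitrate remux, not anything CloudDrive reports:

```python
# Rough check that a prefetch window covers a video stream.
# All figures here are hypothetical examples, not CloudDrive defaults.

def seconds_of_playback(prefetch_window_mb: float, stream_mbps: float) -> float:
    """How many seconds of playback a filled prefetch window covers."""
    stream_mb_per_s = stream_mbps / 8  # convert megabits/s to megabytes/s
    return prefetch_window_mb / stream_mb_per_s

# A 40 mbps stream consumes 5 MB/s, so a 150 MB window buffers 30 seconds:
print(seconds_of_playback(150, 40))  # → 30.0
```

If the number of seconds covered is comfortably larger than the time it takes to refill the window, playback should not stall.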
  9. That simply isn't true. Are you sure that you aren't running into some sort of I/O issue? CloudDrive will upload without restriction as soon as your upload threshold is met in the performance settings. Leave "Background I/O" enabled to ensure that writes are prioritized over reads in Windows' I/O and see if that fixes your problem. Or try disabling it if it's already enabled and see if other processes are simply getting in the way. I know this, btw, because I transferred 90TB from one drive to another, and my cache was full for months. So I know from experience that the cache being full does not prevent CloudDrive from uploading.
  10. CloudDrive has vastly higher overhead than many other transfer protocols like FTP and Usenet. 700mbps is probably about right with respect to a maximum theoretical transfer rate on a 1gbps connection. It isn't built for raw transfer throughput, and it adds additional overhead on top of the protocols being used to transfer the data. I think what is required here is an adjustment of expectations, more than an adjustment of settings in the application. Remember that CloudDrive is also using whatever protocols your provider requires to transfer its data, so, if you were using FTP, for example, you would be paying FTP's overhead plus CloudDrive's own on top of it.
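As a rough sketch of that point, here is the arithmetic with an assumed 30% combined overhead. The 30% figure is chosen only to be consistent with the ~700mbps-on-1gbps observation above; it is not a measured CloudDrive constant:

```python
def effective_mbps(link_mbps: float, overhead_fraction: float) -> float:
    """Usable throughput once protocol and application overhead is subtracted."""
    return link_mbps * (1 - overhead_fraction)

# A 1 gbps link with ~30% total overhead (assumed figure) lands near 700 mbps:
print(effective_mbps(1000, 0.30))  # → 700.0
```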
  11. You'll need to use the CreateDrive_AllowCacheOnDrivePool setting in the advanced settings to enable this functionality. See this wiki page for more information. The cache being full will not limit your upload, only your copy speed. It will throttle transfers to the speed of your upstream bandwidth, so it should make effectively no difference, aside from the fact that you won't be able to copy new data into the drive faster. That data will still upload at the same rate either way.
  12. It's possible, but provides no benefit over simply using the SSD as the cache drive. The cache is stored as a single file, and will not be split among multiple drives in your pool. Just use the SSD as your cache drive, and the cached content will always be at SSD speeds.
  13. That isn't a question with an objective, universal answer. The benefits to different chunk sizes vary depending on what you want to use your drive for. A lot of people use their CloudDrive drives to store large media libraries. So the maximum chunk size for Google, which is 20MB, allows for the highest capacity drive, and the highest throughput for streaming media off of the drive. 10MB chunks are *also* probably fine for this purpose, but the drive cannot be as large (though it can still be huge), and the theoretical maximum downstream speed will be lower. You cannot, in any case, change the chunk size after the drive has been created.
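The reason chunk size caps drive capacity is that capacity is just the chunk size times the number of chunks a drive can address. The 16-million-chunk limit below is a purely hypothetical stand-in for CloudDrive's internal per-drive limit, used only to show the proportionality:

```python
def max_drive_size_tb(chunk_mb: int, max_chunks: int) -> float:
    """Maximum drive capacity for a given chunk size and per-drive chunk limit."""
    return chunk_mb * max_chunks / 1_000_000  # MB → TB (decimal units)

# With a hypothetical limit of 16 million chunks per drive,
# halving the chunk size halves the maximum capacity:
print(max_drive_size_tb(20, 16_000_000))  # → 320.0
print(max_drive_size_tb(10, 16_000_000))  # → 160.0
```

Whatever the real internal limit is, the relationship holds: larger chunks mean a larger maximum drive.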
  14. You can actually check this in the UI, if you want to, by mousing over the drive size at the top under the provider logo.
  15. This is incorrect. Adjusting the number of threads in CloudDrive has no appreciable impact on CPU usage. Threads, in this case, simply refers to the number of simultaneous upload and download requests that the application will let itself make at any given time. CloudDrive has limits on the number of threads that one can set in the application, so it isn't possible to test whether or not thousands of threads may eventually lead to a degradation of CPU performance, but, under the limitations that exist today, you will hit the API limitations of the provider long before any impact on CPU usage is measurable.
  16. The question is a little bit confusing, but if you're asking if you can simply use DrivePool to duplicate at the file system level, even if the data is encrypted in the cloud by CloudDrive, then yes. That's fine. What Christopher is talking about above is duplicating the raw data on your provider. Don't do that unless you really understand the longer-term implications. The same file *will* be encrypted and stored differently on both drives.
  17. The file corruption was almost certainly because there was a data rollback on the provider's side, if you are using Google, and not a result of anything you did at all.
  18. You can, but if you're going to be using it for the same purpose, I would suggest enlarging the drive and adding a second volume to the same drive instead. This makes cache and performance management easier, and it's all the same to CloudDrive. It doesn't care if the information is on two volumes or twenty.
  19. The threads in CloudDrive are not CPU threads, and are not CPU limited. They are just simultaneous upload and download requests. They are limited by frequency and your provider's API limitations. Getting the correct value, for you, will be a matter of experimentation. I suggest starting with 10 download threads and 5 upload, and adjusting from there. You should *never* need more than a handful of upload threads, in any case, unless your network is misconfigured. Even 2 or 3 upload threads are capable of hitting the 70-75mbps throughput that Google allows--provided that your connection speed is sufficient, of course.
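The arithmetic behind that suggestion looks like this. The per-thread rate is purely an assumed figure for illustration; real per-request throughput varies with latency and provider behavior:

```python
import math

def threads_needed(target_mbps: float, per_thread_mbps: float) -> int:
    """Minimum number of simultaneous upload requests to sustain a target rate."""
    return math.ceil(target_mbps / per_thread_mbps)

# If each request can sustain roughly 30 mbps (an assumed figure),
# hitting Google's ~75 mbps average cap only takes a few threads:
print(threads_needed(75, 30))  # → 3
```

This is why piling on more upload threads past a handful buys nothing: the provider's cap, not the thread count, is the binding limit.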
  20. EDIT: Never mind. I understand what you're looking at now and where you're getting those numbers.
  21. Sorta. But I think you might have it backwards. You wouldn't back up the data that sits on the provider to other providers, because the drive structure changes somewhat by provider. You would, instead, mount multiple providers with CloudDrive and use a tool like DrivePool to mirror the volumes to one another.
  22. A 2500-2800ms response time to Google is almost *certainly* a network issue on your part, and not one that needs to be considered as a general concern. It should not take that long to send an internet packet around the entire planet, and, if you're seeing that sort of latency, you probably have bad hardware between you and your destination. A round trip to a satellite, which is the lengthiest hop in existence, is only 500ms plus the terrestrial hops. So unless you are, for some reason, beaming data to space and back several times, 2500-2800ms is simply a broken connection. I think it's important to keep that in mind.
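A quick back-of-the-envelope calculation, using nothing but physical constants, shows why that latency can't be explained by distance alone:

```python
# Why 2500-2800 ms to Google points at a broken link, not normal routing.

EARTH_CIRCUMFERENCE_KM = 40_075
LIGHT_IN_FIBER_KM_PER_S = 200_000  # roughly 2/3 the speed of light in vacuum

# Worst case: a one-way trip all the way around the planet, and back.
one_way_ms = EARTH_CIRCUMFERENCE_KM / LIGHT_IN_FIBER_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms
print(round(round_trip_ms))  # → 401
```

Even circumnavigating the globe twice in fiber costs roughly 400ms per lap, so a 2500ms+ response time implies something other than propagation delay: retransmissions, a saturated link, or failing hardware.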
  23. No, you're right. It will use the upload thread count to download the data, so that will take up some number of slots for a time before those threads will be available for a new upload chunk. The mistake, though, is translating that to a loss of throughput, which isn't (or at least does not necessarily need to be) true. That is: you can achieve the same upload speeds with more threads, if you need to. 5 upload threads should be more than enough for most users to still max out the 70-75mbps of average data that Google allows you to upload each day, while also reading back the chunks in real-time for verification.
  24. So chkdsk is not affected by the size of the *drive*. Only the size of the *volume* (aka, generally, partition). You can have multiple volumes on the SAME CloudDrive drive. So you can still expand the size of the drive and create a second VOLUME smaller than 55TB or so, and chkdsk will work with it just fine. I would, in fact, recommend this over a second drive entirely, so that you don't have multiple caches. Aside from this note, DrivePool will operate identically regardless of whether you're using a volume on a local disk, a CloudDrive disk, or multiple CloudDrive volumes on the same drive.
  25. To be clear: this is not necessarily true. It depends on the speed of your connection. All the upload verification means is that it will download every byte that it uploads before clearing the data from the local cache. If your downstream bandwidth is at least as large as your upstream bandwidth, it should not theoretically slow down your upload speeds at all. There will just be a delay before the data is removed from local storage while it reads the data back from the provider. CloudDrive can (and will) still move on to the next chunk upload while this process is completed.
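A toy model of that overlap makes the point concrete. It assumes, as a simplification, that verifying one chunk fully overlaps uploading the next, so only the final chunk's read-back adds to the total; the rates and chunk size are illustrative, not CloudDrive internals:

```python
def total_upload_time_s(data_mb: float, up_mbps: float, down_mbps: float,
                        chunk_mb: float) -> float:
    """Time to upload data with read-back verification, assuming each chunk's
    verification overlaps the upload of the next chunk."""
    up_rate = up_mbps / 8      # MB/s
    down_rate = down_mbps / 8  # MB/s
    upload_time = data_mb / up_rate
    # Only the last chunk's verification cannot hide behind another upload:
    tail_verify_time = chunk_mb / down_rate
    return upload_time + tail_verify_time

# 1000 MB at 80 mbps up / 200 mbps down, 20 MB chunks:
# upload alone takes 100 s; verification adds under a second.
print(total_upload_time_s(1000, 80, 200, 20))  # → 100.8
```

So as long as downstream bandwidth keeps pace with upstream, verification delays cache eviction but barely touches the overall upload rate.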