Everything posted by srcrist

  1. Because of the way DrivePool works, you could save yourself a SMALL headache by placing a single volume in a pool and pointing Plex and your other applications at the pool right from the start. But the process to convert a standalone drive to a pool isn't terribly difficult later, either. So it's up to you. If you create a pool from a single volume right from the start, adding additional volumes and drives is as easy as creating them and clicking the add button in DrivePool later. If you don't create the pool until later, you'll simply have to move all of the existing data on the drive to the hidden pool folder when you want to use a pool.
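     A rough sketch of that later conversion step, in Python, assuming the hidden pool directory follows DrivePool's usual PoolPart.<GUID> naming at the root of the volume (the drive letter is a placeholder; double-check the actual folder name on your own system before moving anything):
```python
import glob
import shutil
from pathlib import Path

DRIVE = Path("D:/")  # hypothetical: the volume you just added to the pool

# DrivePool keeps pooled files in a hidden PoolPart.<GUID> folder at the volume root.
matches = glob.glob(str(DRIVE / "PoolPart.*"))
assert len(matches) == 1, "expected exactly one PoolPart folder on this volume"
poolpart = Path(matches[0])

# Move the existing top-level content into the pool folder so DrivePool can see it.
for item in DRIVE.iterdir():
    if item == poolpart or item.name in ("System Volume Information", "$RECYCLE.BIN"):
        continue
    shutil.move(str(item), str(poolpart / item.name))
```
     Moves within the same volume are just renames, so this is quick; DrivePool should pick the files up the next time the pool is measured.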
  2. The important thing is that each volume (read: partition) should be less than 60TB, so that Volume Shadow Copy and chkdsk can operate on it to fix problems. In light of some of the Google Drive problems in March, some of us switched to even smaller volumes, on the theory that a smaller volume exposes less data to corruption in an outage. But the changes in the recent betas, like the file system data redundancy, should ideally make this a non-issue today. Just keep each volume under 60TB. There is not, in any case, any significant performance difference between using 25TB volumes or 50TB volumes combined with DrivePool.
     As for the cache drive: it depends on what you mean by important. CloudDrive can be quite I/O intensive, and you'll notice a significant difference between the cache performance of an SSD and a spinning rust drive. This will be particularly noticeable if you're reading from and writing to the drive at the same time. Will it work on an HDD? Probably. Will an SSD be markedly better? Absolutely. SSDs are cheap these days; I would suggest picking one up. It doesn't need to be a fancy Samsung EVO 960 Pro or anything.
     As long as you're using a processor with AES-NI, the resource impact of the StableBit software should be negligible. DrivePool simply forwards I/O requests to the underlying drives, so its impact is effectively non-existent, and, setting aside the obvious I/O needs of the cache, CloudDrive's actual resource requirements are all around encryption and decryption--all of which should be offloaded to AES-NI, as long as you have it. I think that using CloudDrive on the NUC is wise. There should be no major issue sharing CloudDrive volumes or DrivePool pools via SMB to the NAS.
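     If you want to sanity-check that every volume feeding your pool stays under that ceiling, a quick Python sketch like this will do it (the drive letters are placeholders for your pooled volumes):
```python
import shutil

volumes = ["D:\\", "E:\\", "F:\\"]   # hypothetical drive letters for the pooled volumes
LIMIT_TB = 60

for vol in volumes:
    total_tb = shutil.disk_usage(vol).total / 1000**4   # decimal TB
    status = "OK" if total_tb < LIMIT_TB else "TOO LARGE for chkdsk/VSS"
    print(f"{vol} {total_tb:.1f} TB - {status}")
```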
  3. Note, again, that these errors generally indicate a connection issue. So if you're having trouble with forced unmounts and I/O errors, the first thing you want to look at is the stability of the network connection between your PC and Google's servers.
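     One way to get a rough read on that is to poll a Google endpoint for a few minutes and watch for failures or latency spikes. A minimal Python sketch, assuming www.googleapis.com is a reasonable stand-in for the servers CloudDrive talks to:
```python
import time
import urllib.error
import urllib.request

URL = "https://www.googleapis.com/"   # assumption: any Google front end works as a reachability target
ATTEMPTS = 60                          # roughly five minutes at one request every five seconds
failures = 0

for i in range(ATTEMPTS):
    start = time.time()
    try:
        urllib.request.urlopen(URL, timeout=10).read(512)
        print(f"{i:02d}: {(time.time() - start) * 1000:.0f} ms")
    except urllib.error.HTTPError:
        # An HTTP error code still means we reached Google; the connection itself is fine.
        print(f"{i:02d}: {(time.time() - start) * 1000:.0f} ms (HTTP error, connection OK)")
    except Exception as exc:
        failures += 1
        print(f"{i:02d}: FAILED ({exc})")
    time.sleep(5)

print(f"{failures} failures out of {ATTEMPTS} attempts")
```
     Repeated failures or wildly varying latencies here point at the network rather than at CloudDrive itself.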
  4. Unclear. You need to submit a proper ticket for this to support. Do so here: https://stablebit.com/Contact It's possible that a memory error might have fundamentally corrupted something; it's also possible that it might be fixed by simply deleting the data on the PC. Let official support walk you through that, to be sure. This is true, but remember that this limitation is per volume, not per drive. His volumes could be smaller than 60TB on a 256TB drive.
  5. In that case, I'm not sure. There might be a handle issue causing a problem, but chkdsk generally asks you if you want to close all open handles if it encounters one. You could try running it with /X (which forces a dismount) and see if that helps. I would definitely open an actual ticket with Covecube about the problem, though. I would be hesitant to use any drive that chkdsk cannot operate on, especially considering some of the recent problems.
  6. Okay. To be clear: Scanner's "File System Damaged" warning is something very different from the corruption issue that you are reading about other people encountering lately. Scanner is just warning about normal old "please run chkdsk" file system health, and it can give that warning for any number of reasons. That's causing a lot of confusion in this thread, because people are conflating it with the file system corruption that has been leading to data loss and RAW mounts--which is something different entirely. "File system damage" in Scanner is a normal part of life with any NTFS file system and can be caused by something as simple as a power loss or a software crash. A drive showing up RAW indicates a much larger problem. Both are technically file system damage. As far as the drive showing up damaged in Scanner but not chkdsk, I've only personally seen that get wonky with volumes that chkdsk does not properly support. How large is the drive you're trying to use? Is it larger than 60TB total? chkdsk does not support volumes of 60TB or larger. I've seen Scanner throw warnings simply because chkdsk cannot properly handle the volume size, and that would explain your errors when running chkdsk as well.
  7. When was the drive created? After March 13th? My drive did not exhibit corruption for weeks afterwards, but it was still corrupted by the same outage. The answer to your question remains, in any case, that the way a drive becomes corrupted even when the data was verified on upload is that the data is modified in place on Google's servers after the fact. That's how it happens; we just don't know exactly why. It obviously shouldn't happen, but I'm really not sure what can prevent it.
  8. To be clear: did you re-upload data to the same drive?
  9. Because on March 13th Google had some sort of service disruption, and whatever their solution was to that problem appears to have modified some of the data on their service. So any drive that existed before March 13th appears to have some risk of having been corrupted. Nobody knows why, and Google obviously isn't talking about what they did. My best guess is that they rolled some amount of data back to a prior state, which would corrupt the drive if some chunks were a version from mid-February while others were a newer version from March--assuming the data was modified during that time. But ultimately nobody can answer this question. Nobody seems to know exactly what happened--except for Google. And they aren't talking about it. No amount of detaching and reattaching a drive, or changing the settings in CloudDrive, is going to change how the data exists on Google's servers or protect you from anything that Google does that might corrupt that data in the future. It's simply one of the risks of using cloud storage. All anyone here can do is tell you what steps can be taken to try and recover whatever data still exists. And "RAW" is just Windows' way of telling you that the file system is corrupt. RAW just means "I can't read this file system data correctly." RAW isn't actually a specific file system type; it's simply telling you that the file system is broken. Why it's broken is something we don't seem to have a clear answer for. We just know that nothing was broken before March 13th, and people have had problems since. Drives that I have created since March 13th have also not had issues.
  10. Chkdsk can never do more harm than good. If it recovered your files to a Lost.dir, or the data is corrupted, then the data was already unrecoverable. Chkdsk restores your file system to stability, even at the cost of discarding corrupt data. Nothing can magically restore corrupt data to an uncorrupted state. The alternative, not using chkdsk, just means you would continue to corrupt additional data by working with an unhealthy volume. Chkdsk may restore files depending on what is wrong with the volume, but it's never guaranteed. No tool can work miracles. If the volume is damaged enough, nothing can repair it from RAW to NTFS. You'll have to use file recovery tools or start over.
  11. This might actually be on the provider's side. I once deleted about 100TB from Google Drive and it took months for the data to actually be removed from Google's servers. It didn't matter, since I didn't have a limit, but you might be running into a similar situation here. Contact your administrator and see if there is something they can do.
  12. Those are generally network errors, not I/O errors in the traditional sense. Something is probably interfering with your connection to Google's servers. You'll need to troubleshoot the network connection. Logs might help to pin down exactly what is happening.
  13. That's a good question, and one probably better asked in the DrivePool forum, because it's really a DrivePool question. I'm not sure if there is any way to tell DrivePool to prefer reads off of a specific drive. I know that it's smart enough to prefer local drives over cloud drives if the same data exists on both, but I'm not aware, off the top of my head, of a way to tell it to prefer a specific drive that falls within the same tier of the built-in hierarchy. That doesn't mean it doesn't exist, though. One work-around that I can think of would be to simply point your applications at the hidden folder on the drive, rather than pointing them at the pool itself. That would work.
  14. Sure thing. And don't be afraid to test the DNS servers provided by your ISP as well. They're likely to be the most local servers for you, and the most likely to provide you with efficient routing to Google--despite the other flaws of ISP-provided DNS. No, I think that's fine. I used DrivePool to consolidate when I recognized this problem myself. You can always click the little arrows next to the status bar in the DrivePool UI to give the transfer a higher priority if you're doing other things on the drives; that may help speed up the process somewhat. Just make sure that they aren't all caching on the same physical drive. Move the other two to another cache drive, even if you're only using them for duplication. That's fine. Any time.
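      If you want to actually quantify the difference between DNS servers, a crude approach is to resolve a Google endpoint and time a few TCP connections to whatever address comes back, then switch your system DNS and repeat. A small Python sketch, with www.googleapis.com standing in for the real endpoint CloudDrive uses:
```python
import socket
import time

HOST = "www.googleapis.com"   # assumption: stand-in for the Google endpoint CloudDrive actually talks to

# Resolve through whatever DNS server the system is currently configured to use.
addr = socket.getaddrinfo(HOST, 443, type=socket.SOCK_STREAM)[0][4][0]
print(f"{HOST} resolved to {addr}")

# Time a handful of TCP connects to that address.
for i in range(5):
    start = time.time()
    with socket.create_connection((addr, 443), timeout=10):
        pass
    print(f"connect {i}: {(time.time() - start) * 1000:.1f} ms")
```
      Compare both the resolved address and the connect times between runs; if a different DNS server hands you a noticeably closer Google front end, it will show up here.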
  15. So, to be clear, there isn't anything that CloudDrive can do that should ever really cause pixelation. That just isn't how digital video works. Buffering, maybe, but not pixelation. That would be a Plex thing, and a result of how Plex is encoding or decoding the video. The drive throughput is either going to be fast enough to keep up or not; it won't pixelate the video, it just won't work. I suppose Plex, as a client, might display pixelation if it isn't able to get data fast enough, as a sort of stop-gap to keep the stream going--so that might be why you're seeing that.
      The settings I gave you are the ones I use, and I'm able to stream 4K video at original quality with them. So that's effectively all the settings we can tweak within CloudDrive--at least in the short term. There are some tweaks we might be able to make to optimize for you later, but those should be working, and they're a good starting point for your connection speed. At this point, it's probably best to start looking at the network, if you've ruled out the possibility that there is some issue with the file. Play with your DNS servers to see if you're being sent to a non-optimal Google server, and check your network settings across the board. You should easily be able to hit 400mbps or so with the CloudDrive settings I gave you. If you aren't, something is wrong.
      As far as the maximum chunk size goes, there isn't any way to change that without recreating the drive. You may want to consider that anyway, since you cannot run chkdsk on a 100TB volume (the limit is 60TB), and it's a lot less optimal to use 3 separate CloudDrive drives than 1 drive partitioned into multiple volumes. In fact, if all three of those drives are caching on the same SSD, that might be the cause of all of your issues here as well. The cache can be very disk-usage heavy, and I don't generally recommend more than a single virtual drive caching to a single physical drive; I had problems with that myself. Note that a single 1PB CloudDrive drive can be divided up into many volumes, all sharing a single cache (see the arithmetic below). It's probably better to make these changes now, while there is still relatively little on the drives, than later, when it can take months. Each volume should be smaller than 60TB to work with Volume Shadow Copy and chkdsk.
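      To put numbers on the one-big-drive approach: carving a 1PB drive into chkdsk-friendly volumes works out roughly like this (treating 1PB as 1000TB for simplicity):
```python
import math

DRIVE_TB = 1000        # a 1 PB CloudDrive drive, treated as 1000 TB
MAX_VOLUME_TB = 60     # the chkdsk / Volume Shadow Copy ceiling discussed above

count = math.ceil(DRIVE_TB / MAX_VOLUME_TB)
print(f"{count} volumes of about {DRIVE_TB / count:.1f} TB each keep every volume under {MAX_VOLUME_TB} TB")
# -> 17 volumes of about 58.8 TB each, all sharing one cache drive
```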
  16. I mostly meant the chunk size, but drive structure would be things like chunk size, cluster size, sector size, file system type. The things you chose when you created the drive. But really only the chunk size is relevant here.
  17. Some of this may seem counter-intuitive, but try this: drop your download threads to ten, drop your prefetch forward to 150-175MB, raise your prefetch trigger to 15MB, and drop the prefetch time window to 10 seconds. Try that, test it, and report back. Also, what is your drive structure? Most importantly, what is the chunk size that you specified at drive creation? And, to rule out one more factor, have you tried playing the files off of the CloudDrive with a local media player like VLC?
  18. That simply isn't true. Are you sure that you aren't running into some sort of I/O issue? CloudDrive will upload without restriction as soon as your upload threshold is met in the performance settings. Leave "Background I/O" enabled to ensure that writes are prioritized over reads in Windows' I/O and see if that fixes your problem. Or try disabling it if it's already enabled and see if other processes are simply getting in the way. I know this, btw, because I transferred 90TB from one drive to another, and my cache was full for months. So I know from experience that the cache being full does not throttle upstream performance.
  19. CloudDrive has vastly higher overhead than leaner transfer protocols like FTP and Usenet. 700mbps is probably about right as a realistic ceiling on a 1gbps connection. CloudDrive isn't built for raw transfer throughput, and it adds additional overhead on top of the protocols being used to transfer the data. I think what is required here is an adjustment of expectations more than an adjustment of settings in the application. Remember that CloudDrive is also using whatever protocols your provider requires to transfer its data, so, if you were using FTP, for example, you would need to account for the standard FTP overhead AND the additional overhead that CloudDrive requires on top of that. If transferring files from one location to another is the primary goal, you'd be better off setting aside the accessory benefits of CloudDrive and simply completing that transfer using a dedicated file transfer protocol. It will effectively always be faster. Remember, also, that overhead is additive per transfer, so CloudDrive's mechanism of chunking your data means that you add X overhead per chunk rather than X overhead per file--as you would if you were using, say, FTP (see the rough arithmetic below). You simply will not see transfer rates of 90-95% of your maximum throughput, and that isn't the goal of this particular application.
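      A back-of-the-envelope illustration of that per-chunk cost, using a purely hypothetical fixed overhead per request--the real figure varies with latency and the provider's API:
```python
LINK_MBPS = 1000     # a 1gbps line
CHUNK_MB = 20        # Google Drive's maximum chunk size
OVERHEAD_S = 0.1     # hypothetical fixed cost per request (API call, TLS, latency)

transfer_s = CHUNK_MB / (LINK_MBPS / 8)              # time on the wire per chunk
effective_mbps = CHUNK_MB * 8 / (transfer_s + OVERHEAD_S)
print(f"Effective single-thread rate: {effective_mbps:.0f} mbps "
      f"({effective_mbps / LINK_MBPS:.0%} of the line)")
# With these illustrative numbers, the per-chunk overhead alone costs
# roughly a third of the line rate before any other protocol overhead.
```
      Multiple upload and download threads overlap that per-request cost, which is why the thread settings matter, but the overhead never disappears entirely.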
  20. You'll need to use the CreateDrive_AllowCacheOnDrivePool setting in the advanced settings to enable this functionality. See this wiki page for more information. The cache being full will not limit your upload, only your copy speed. A full cache throttles local copies to the speed of your upstream bandwidth, so it should make effectively no difference aside from the fact that you won't be able to copy new data onto the drive any faster than it uploads. That data will still upload at the same rate either way.
  21. It's possible, but provides no benefit over simply using the SSD as the cache drive. The cache is stored as a single file, and will not be split among multiple drives in your pool. Just use the SSD as your cache drive, and the cached content will always be at SSD speeds.
  22. That isn't a question with an objective, universal answer. The benefits of different chunk sizes vary depending on what you want to use your drive for. A lot of people use their CloudDrive drives to store large media libraries, so the maximum chunk size for Google, which is 20MB, allows for the highest-capacity drive and the highest throughput for streaming media off of the drive. 10MB chunks are *also* probably fine for this purpose, but the drive cannot be as large (though it can still be huge), and the theoretical maximum downstream speed will be lower. You cannot, in any case, change the chunk size of a drive once it has been created. It's one of the settings that has to be chosen at drive creation, because changing it would require a fundamental change in the structure of the drive as it is stored on the provider.
  23. You can actually check this in the UI, if you want to, by mousing over the drive size at the top under the provider logo.
  24. This is incorrect. Adjusting the number of threads in CloudDrive has no appreciable impact on CPU usage. Threads, in this case, simply refers to the number of simultaneous upload and download requests that the application will let itself make at any given time. CloudDrive limits the number of threads that you can set in the application, so it isn't possible to test whether thousands of threads might eventually degrade CPU performance, but, under the limitations that exist today, you will hit the API limitations of the provider long before any impact on CPU usage is worthy of consideration. In short: do not adjust CloudDrive's threads based on CPU-related concerns. There is no relationship between "threads" as related to processor scheduling and "threads" as they are used in this context--other than the name.
      As far as I know, this is also incorrect. A minimum download size larger than the chunk size will still download multiple chunks of contiguous data until the specified threshold is met--again, as far as I know. There is nothing conclusive in the documentation about exactly how this functions, but I have seen Christopher recommend settings higher than 20MB for Google Drive, such as he did in this case and this case, for specific situations. That leads me to believe that higher settings do, in fact, have an impact beyond a single chunk. This is, of course, with the caveat that, as explained in the user manual, CloudDrive can still request less data than the "minimum" if the system is also requesting less (total) data. It is true, either way, that a larger minimum download will reduce the responsiveness of the drive, and that speeds in excess of 600-700mbps can be achieved with a minimum download of 10-20MB using the settings I recommended above. So if the problem is responsiveness, dropping it may help.
      The base settings that I would recommend anyone try if they are having performance issues are the following; play with them from there:
      • 5 upload threads
      • 10 download threads
      • Upload throttling enabled at approximately 70mbps (to stay within Google's 750GB/day limit--see the arithmetic below)
      • Upload threshold: 1MB / 5 minutes
      • Minimum download: 10-20MB
      • Disable Background I/O if you'll be streaming media, so that downstream performance is not impacted by upstream requests.
      If that works, you're great, and, if not, then come back and people can try to help. But these are settings that have worked for a lot of people here over the last several years.
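      The arithmetic behind the ~70mbps throttle, for anyone curious:
```python
GB_PER_DAY = 750                      # Google's daily upload cap
mbps = GB_PER_DAY * 8000 / 86400      # decimal GB -> megabits, spread over 24 hours
print(f"{GB_PER_DAY} GB/day is about {mbps:.1f} mbps sustained")   # ~69.4 mbps
```
      Throttling a little below that figure keeps a drive uploading around the clock without tripping the daily limit.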
  25. The question is a little confusing, but if you're asking whether you can simply use DrivePool to duplicate at the file system level, even though the data is encrypted in the cloud by CloudDrive, then yes. That's fine. What Christopher is talking about above is duplicating the raw data on your provider. Don't do that for any reason unless you really understand the longer-term implications. The same file *will* be encrypted and stored differently on both drives.