srcrist

Members
  • Posts: 466
  • Joined
  • Last visited
  • Days Won: 36

Everything posted by srcrist

  1. That is not possible. It's contrary to the intended purpose of this particular tool. You want something more similar to rClone, Netdrive, or Google's own Backup and Sync software.
  2. Hey guys. So, to follow up after a day or two: the only person who says that they have completed the migration is saying that their drive is now non-functional. Is this accurate? To be clear: has nobody completed the process with a functional drive? I can't really tell if anyone trying to help Chase has actually completed a successful migration, or if everyone is just offering feedback based on hypothetical situations. I don't even want to think about starting this unless a few people can confirm that they have completed the process successfully.
  3. Has anyone with a large drive completed this process at this point? If you don't mind, can you folks drop a few notes on your experience? Setting aside the fact that the data is inaccessible, and other quality-of-life issues like that, how long did the process take? How large was your drive? Did you have to use personal API keys in order to avoid rate limit errors? Anything else you'd share with someone before they begin to migrate data? I'm trying to get a feel for the downtime and prep work I'm looking at to begin the process.
  4. srcrist

    Beta 1.2.0.1305 = WTF

    The discussion at this link will explain what it is doing (and why). Google has implemented new API limits in their service, and the drive will eventually become unusable without the changes. I do agree that BETA releases should be marked as such in the upgrade notification, and I am waiting to upgrade to the new format myself. But this is what is going on, and why.
  5. I'm not really going to get bogged down with another technical discussion with you. I'm sorry. I can only tell you why this change was originally implemented and that the circumstances really had nothing to do with bandwidth. If you'd like to make a more formal feature request, the feedback submission form on the site is probably the best way to do so. They add feature requests to the tracker alongside bugs, as far as I know.
  6. No, I think this is sort of missing the point of the problem. If you have, say, 1 GB of data and you divide that data up into 100MB chunks, each of those chunks will necessarily be accessed more often than a bunch of 10MB chunks would be, in proportion to the number of requested reads, no matter how small or large the minimum download size. The problem was that CloudDrive was running up against Google's limits on the number of times any given file can be accessed, and the minimum download size wouldn't change that, because the data still lives in the same chunk no matter what portion of it you download at a time. Though a larger minimum download will help in cases where a single contiguous read pass might otherwise have to hit the same file repeatedly, it wouldn't help in cases where any arbitrary set of reads has to access the same chunk file more often--and my understanding was that the latter was the problem. File system data, in particular, is an area where I see this being an issue no matter how large the minimum download. (There is a short sketch of the proportionality argument after this post list.) In any case, they obviously could add the ability for users to work around this. My point was just that it had nothing to do with bandwidth limitations, so an increase in available user-end bandwidth wouldn't be likely to impact the problem.
  7. I believe the 20MB limit was because larger chunks were causing problems with Google's per-file access limitations (as a result of successive reads), not a matter of bandwidth requirements. The larger chunk sizes were being accessed more frequently to retrieve any given set of data, and it was causing data to be locked on Google's end. I don't know if those API limitations have changed.
  8. Well, the first problem is that you only allow the CloudDrive volume to hold duplicated data, while you only let the other volumes hold unduplicated data, which will prevent your setup from duplicating data to the CloudDrive volume at all (none of the other drives can hold data that is duplicated on the CloudDrive volume). So definitely make sure that all of the volumes that you want to duplicate can hold duplicated data. Second, before you even start this process, I would caution you against using a single 256TB NTFS volume for your CloudDrive, as any volume over 60TB exceeds the size limit for shadow copy and, therefore, chkdsk as well. That is: a volume that large cannot be repaired by chkdsk in the case of file system corruption, and is effectively doomed to accumulate corruption over time. So you might consider a CloudDrive with multiple partitions of less than 60TB apiece. That being said, NEITHER of these things should have any impact on the write speed to the drive. The pool should effectively be ignoring the CloudDrive altogether, since it cannot duplicate data to the CloudDrive, and only the other drives can accept new, unduplicated data. The SSD balancer means that all NEW data should be written to the SSD cache drive first, so I would look at the performance of that underlying drive. Maybe even try disabling the SSD balancer temporarily to see how performance is when that drive is bypassed, and, if it's better, start looking at why that drive might be causing a bottleneck. What sort of drive is your CloudDrive cache drive, and how is that cache configured? Also, what are the CloudDrive settings? Chunk size? Cluster size? Encryption? Etc.
  9. DrivePool just forwards the drive requests to the underlying drives, so slow performance while writing to the pool is likely just caused by poor underlying drive performance. But there's honestly no way to tell what might be causing the issue just from this information. I think the first step might be to provide all of your settings for both DrivePool and CloudDrive, as well as an account of the hardware you're using for the cache and the underlying pool.
  10. Correct. Nothing needs to be reuploaded to just move the data around on the same volume. Note that you can still control the placement of files and folders with the DrivePool balancing settings, though not quite as granularly as you could with a single volume.
  11. You will never see one-to-one copies of the files you upload via the browser interface. CloudDrive does not provide a frontend for the provider APIs, and it does not store data in a format that can be accessed from outside of the CloudDrive service. If you are looking for a tool to simply upload files to your cloud provider, something like rClone or Netdrive might be a better fit. Both of those tools use the standard upload APIs and will store your files one-to-one on your provider. See the following thread for a more in-depth explanation of what is going on here:
  12. You're just moving data at the file system level into the PoolPart folder on that volume. Do not touch anything in the cloudpart folders on your cloud storage. Everything you need to move can be moved with Windows Explorer or any other file manager. Once you create a pool, it will create a PoolPart folder on that volume, and you just move the data from that volume into that folder. (There is a scripted example of this move after this post list, if you'd rather not do it by hand.)
  13. There are some inherent flaws with USB storage protocols that would preclude a USB drive from being used as a cache for CloudDrive. You can see some discussion of the issue here: I don't believe they ever added the ability to use one. At least not yet.
  14. I *believe* you can use the SSD balancer to do this, maybe? The way the SSD balancer works is that if you have a drive marked as "SSD" it copies the data there FIRST, and then moves it to other drives during a balancing pass. Any drive can, in fact, be marked as "SSD." So this might work to move all of the data off of the local drives once the cloud drives are available.
  15. A 4K (4096 byte) cluster size supports a maximum volume size of 16TB. Adding an additional 10TB to your existing 10TB at that cluster size would therefore exceed the file system's limit, so that resize simply won't be possible. The volume size limits are as follows: 4 KB clusters → 16 TB maximum, 8 KB → 32 TB, 16 KB → 64 TB, 32 KB → 128 TB, 64 KB → 256 TB. (The arithmetic behind these limits is sketched after this post list.) This is unfortunately not possible because of how CloudDrive works. However, a simple option available to you is to partition your drive into multiple volumes (of a maximum 16TB apiece) and recombine them using DrivePool into one large aggregate volume of whatever size you require (CloudDrive's actual technical maximum is 1PB per drive).
  16. You can change the cache location by detaching and reattaching the drive to the system from the UI. It will prompt you for the cache location and type whenever it is attached.
  17. I'm not 100% positive on this, but I believe that once the cloud drive is added to the pool, if the system boots up while the drive is still locked, DrivePool will see the locked drive as "unavailable," just as if you had a removable drive in your pool that was disconnected. That means that balancing would be disabled on the pool until all of the drives are available. It won't start rebalancing without the drive being unlocked, unless you remove the drive from the pool. It should be aware of all of the drives that comprise the pool, even if some of the drives are missing; that information is duplicated to each drive. Bottom line: no special configuration should be necessary. DrivePool should function perfectly fine (accepting data to local drives) while the cloud drives are missing, and then balancing rules will be reapplied once the drives become available. The part I'm not sure about, though, is whether or not the pool will be placed into read-only mode as long as a drive is missing. It may simply not accept any written data at all--and, if that's the case, I'm not sure if there is a way to change that. You may just have to wait until all of the drives are available.
  18. Sounds like a Plex problem. The data read from the drive is the same regardless of whether or not it's transcoding, and "Remux" releases can have wildly different bitrates, so there isn't really anything unique to those files. CloudDrive is more than capable of achieving the throughput required, in any case. Might want to ask on their forums to see if anyone has relevant experience.
  19. srcrist

    Question about my setup

    Option one would use more data, but you could simply extract directly to the cloud drive without using any more. A move within the drive takes place at the file system level and will not use any additional data. Downloading the compressed files to the cloud drive and then extracting them there, though, does use additional data, because the extracted data has to be created during the extraction process. Once extracted, no new data will need to be uploaded for a move--the same as on a local drive.
  20. srcrist

    Question about my setup

    I don't see anything wrong with that. The thing you want to avoid for optimization is doing other work on the same drive that's being used for the cache. You're not doing that, so you should be good.
  21. I believe that this is related to your Windows settings for how to handle "removable media." CloudDrive shows up as a removable drive, so if you've set Windows to open Explorer when removable media is inserted, it will open when CloudDrive mounts. Check that Windows setting.
  22. Almost certainly related to this issue: File a ticket and submit a troubleshooter. @Christopher (Drashna) was asking if this still existed in the latest beta. It would appear that it does.
  23. Perhaps the other person can, but I can't really afford to dismount the drive on a prod server just to test it. Particularly considering that it could lead to downtime of almost an entire day if it dismounts incorrectly again. My apologies. I know that isn't helpful.
  24. Isn't that the entire purpose of a changelog? To explain the functionality of new features and the changes that have been made to the operation of the application? I copied roughly 100 TB over the course of several months from one drive to another via CloudDrive, and with the cache drive full that entire time, it simply throttled the writes to the cache and accepted data as room became available--which is the intended functionality, as far as I am aware. You may have encountered a bug, or there may have been a technical issue with your cache drive itself, but it should do little more than copy slowly. I'm not even sure what you mean by CloudDrive being unable to read chunks because the cache is full. How would a full cache prevent reads? The full cache is storing your writes, and the reads come from the cloud unless you configure your cache to maintain some read availability...but even without that, CloudDrive can still read chunks. It's possible that you're simply misattributing the cause of your data loss. The only way I can think of that you might lose data via the cache is if the drive you're using for cache space isn't storing valid data. The cache, in any case, will not be meaningfully affected by any sort of data duplication.
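
Sketches referenced in the posts above

The first sketch illustrates the proportionality argument from post 6. It is purely illustrative and not CloudDrive code: the drive size, chunk sizes, and read counts are hypothetical, and the point is only that, for uniformly distributed reads, the expected number of hits per chunk file grows with the chunk size, while the minimum download size never enters into it.

    import math
    import random
    from collections import Counter

    DRIVE_SIZE = 1000 * 1024**2   # ~1 GB of stored data (1000 MiB, for round numbers)
    READS = 10_000                # arbitrary number of read requests

    def accesses_per_chunk_file(chunk_size, reads):
        """Count how often each chunk file is hit by uniformly random reads."""
        hits = Counter()
        for _ in range(reads):
            offset = random.randrange(DRIVE_SIZE)
            # The minimum download size never appears here: the offset determines
            # which chunk file must be opened, regardless of how much is fetched.
            hits[offset // chunk_size] += 1
        return hits

    for chunk_size in (10 * 1024**2, 100 * 1024**2):      # 10 MB vs 100 MB chunks
        chunk_files = math.ceil(DRIVE_SIZE / chunk_size)
        hits = accesses_per_chunk_file(chunk_size, READS)
        print(f"{chunk_size // 1024**2:>3} MB chunks: {chunk_files:>3} files, "
              f"~{READS / chunk_files:.0f} expected hits per file, "
              f"max observed {max(hits.values())}")

Running it shows each 100 MB chunk file absorbing roughly ten times the accesses of each 10 MB chunk file for the same workload.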
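
The second sketch relates to post 12: the same move into the PoolPart folder, done with a script instead of Windows Explorer. The drive letter and folder handling below are assumptions for illustration; DrivePool names the folder "PoolPart.<guid>" on the pooled volume, so check the actual folder on your own volume before running anything like this.

    import shutil
    from pathlib import Path

    # Hypothetical pooled volume; substitute your own drive letter.
    VOLUME_ROOT = Path("D:/")
    # DrivePool creates a hidden folder named "PoolPart.<guid>" at the volume root.
    POOLPART = next(VOLUME_ROOT.glob("PoolPart.*"))

    for item in VOLUME_ROOT.iterdir():
        # Skip the PoolPart folder itself and Windows system folders.
        if item == POOLPART or item.name in ("System Volume Information", "$RECYCLE.BIN"):
            continue
        print(f"moving {item} -> {POOLPART / item.name}")
        # Same volume, so this is a rename at the file system level: nothing is re-uploaded.
        shutil.move(str(item), str(POOLPART / item.name))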
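
The third sketch shows the arithmetic behind the cluster-size table in post 15, assuming the commonly cited NTFS limit of roughly 2^32 clusters per volume (the 16 TB and 1 PB figures come from the post; the cluster-count limit is my assumption), plus the partition math for the DrivePool workaround.

    TB = 1024**4
    MAX_CLUSTERS = 2**32          # approximate NTFS cluster-count limit (assumption)

    for cluster_kb in (4, 8, 16, 32, 64):
        max_volume = cluster_kb * 1024 * MAX_CLUSTERS
        print(f"{cluster_kb:>2} KB clusters -> ~{max_volume / TB:.0f} TB maximum volume")

    # The workaround from the post: several smaller volumes pooled with DrivePool.
    target = 256 * TB             # hypothetical desired pool size
    per_volume = 16 * TB          # the 4 KB-cluster maximum from the table
    volumes_needed = -(-target // per_volume)   # ceiling division
    print(f"A {target // TB} TB pool needs {volumes_needed} volumes of {per_volume // TB} TB each")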