Choruptian

Members · Posts: 28 · Days Won: 1 · Member (2/3) · Reputation: 1

Choruptian last won the day on February 20, 2017 and had the most liked content!

  1. I have the same problem; it typically happens when I lose my internet connection. It just gets stuck as shown. I've waited well over an hour and it seems to do nothing; I think I would have to force a recovery to make something happen. (EDIT: restarting the computer "fixes" it.) I think this has been an issue for a long time, but I guess it could be unrelated (I installed the latest beta today to try data duplication).
  2. Added a part about duplicate chunks; also posting to make the thread more visible.
  3. You have the ability to completely disable pinning of any data. You can individually enable "Pin directories", which makes listing of all files/folders instantaneous, and "Pin metadata", which I'm unsure about, but I'm guessing covers file metadata ranging from creation times to embedded album artwork. Because of this, the amount of data accumulated by pinning is determined by the number of files/folders and the size of their metadata. The detaching process is equally "slow" on all drives (AFAIK); it takes maybe 30 seconds, depending on bandwidth, API throttling, and download/upload threads. So you should be able to just click detach, let it do its thing, and move on to the other computer. You can use the drive the second it appears in File Explorer (on par with the detach time).
  4. I managed to migrate a drive without much issue by following the instructions that triadpool provided, plus the following:
     • Use the latest BETA to eliminate duplicate chunks, as rclone can't distinguish them when copying by name.
     • Used "--files-from=" with a list of every chunk in the CONTENTS directory. The reason for using "--files-from=" at all is that rclone uses the v2 API, which causes it to not find every file in the directory.
     • Used my own client_id & secret, but I can't vouch that it was faster; I just did it for neat API graphs and guaranteed API throughput.
     • Added "--verbose --ignore-times --no-traverse --stats=30s --transfers=8" as additional flags to the rclone copy. (The 8 transfer threads could need tweaking depending on which client_id & secret you use.)
     • You will also need to copy the remaining folders for this to work, of course. (Excluding ChunkIdStorage, as it contains references to fileIds, and these are invalidated by copying to a different account.)
     If you're following these instructions and you want to enumerate the chunks again because new data was added, then you will either have to delete the file in *-data-ChunkIdStorage and re-mount, or wait for https://stablebit.com/Admin/IssueAnalysis/27357 to gain traction, where I requested the ability to decrypt the database ourselves.
     If you're following these instructions and you want to fetch a list of all the chunks (see also the scripted sketch after this list):
     • Download: http://sqlitebrowser.org/
     • Navigate to: %programdata%\StableBit CloudDrive\Service\Db\ChunkIds
     • Copy the .db file while there is no activity on the disk. (This avoids missing any chunks and avoids the file lock.)
     • Open the copy, go to Execute SQL, and run "SELECT `Chunk` FROM `ChunkIds`".
     • Export to CSV by clicking the bottom-middle button. (New line character: Windows)
     • Use whatever tool you like to insert the Uid in front of the chunks. (Example: Regex - Find: "^" Replace: "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-")
     Things not to do: don't start the copy while you've got the drive mounted (like I did).
  5. I might be wrong, but I think they are separate things. "Verify data" does cryptographic validation (SHA-1) to ensure that data that has been lying in the cloud for a while hasn't been modified. "Upload verification" actually re-downloads the data as you go, to ensure you uploaded the right content. (See the second sketch after this list for the distinction in code.) EDIT: A quick test seems to corroborate this, as unticking "Verify data" yields "ValidationAlgorithm: None" in Drive details.
  6. Just FYI: the drives are already encrypted/obfuscated. I can imagine that multiple people might be using the same VPN/IP as you, causing enough throttling to trigger an unmount, but this is just guesswork.
  7. Did anyone make any more progress with this?
  8. I've got an inkling that it's the "*-data-ChunkIdStorage" folder that already exists that causes this. I've yet to try deleting it; I'm waiting on an answer in my support ticket. Could just be a bug, though.
  9. From what I understand, you can even do this right now, but if you were to write data to the Cloud Drive from one client while another client was reading the same data, it would at times see an out-of-sync filesystem, which doesn't seem healthy to me. I imagine this is very difficult to synchronize. I could foresee an option to mount a drive as read-only on both sides, however.
  10. .630
      * Added the ability to create ReFS cloud drives.
        - ReFS drives can be created starting with Windows 8 and newer (both in client and server editions).
        - ReFS drives cannot be attached to OSes older than Windows 8.
        - ReFS volumes can be up to 1 PB in size (and are not limited to 256 TB like NTFS). The 1 PB limit is the maximum drive size in StableBit CloudDrive and is not a ReFS limit.
        - The metadata pinning engine cannot yet pin ReFS directory entries.
      I can't find anything more about this in the changelogs, so I'll hazard a guess and say it's not yet supported.
  11. I suggest you lower the Prefetch trigger to something smaller, like 1 MB, because the software you use to play video files probably won't buffer 40 MB at once, as that would be pointless on a local disk. You might also need to tweak some settings in the video player to have it work really well. Consider trying VLC and/or something else just to see if there's a difference. Keep the CloudDrive interface visible while playing the movie as well; you should be able to tell whether it's fetching chunks at a sustainable rate as the video plays. From what I've read I haven't been able to tell whether you have issues uploading as well; care to clarify?
  12. It eventually gets all the data, but it's only downloading 1 chunk at a time, which only accounts for a small number of files, relatively speaking. My cache size is usually just 1 GB more than the total amount required for all the pinned items. It's not a direct issue; it's just _very_ slow when you have a lot of files/directories.
  13. I'm experiencing the exact same thing (1.0.0.800 & W7 64-bit).
  14. That's the option I'm talking about. I want it to be multithreaded, or at least boosted; sorry if I wasn't clear. It would make things easier if the chunks downloaded for pinning directories were closer together, and prefetching could then be used as the boost. I'm not sure if that's the right step to take, however (MFT optimization).
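
For anyone scripting the chunk-list step from post 4 above, here's a minimal sketch of the same workflow: query a copy of the ChunkIds database, prepend the drive's Uid, and write an rclone --files-from list. The table and column names and the example Uid come straight from the post; the file paths are hypothetical placeholders, and this is a convenience script of my own, not an official tool.

    import sqlite3

    # Hypothetical paths -- adjust to your setup. Work on a COPY of the .db
    # taken while the disk is idle, never on the live (locked) file.
    DB_COPY = r"C:\temp\ChunkIds-copy.db"
    OUT_LIST = r"C:\temp\chunk-list.txt"
    # The drive Uid from the post's example; yours will differ.
    UID_PREFIX = "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-"

    conn = sqlite3.connect(DB_COPY)
    try:
        # The same query the post runs in DB Browser for SQLite.
        rows = conn.execute("SELECT `Chunk` FROM `ChunkIds`")
        # newline="\r\n" gives Windows line endings, matching the CSV export step.
        with open(OUT_LIST, "w", newline="\r\n") as f:
            for (chunk,) in rows:
                # Equivalent of the regex step: prepend the Uid to every chunk id.
                f.write(f"{UID_PREFIX}{chunk}\n")
    finally:
        conn.close()

    # The resulting file is then fed to rclone, along the lines of:
    #   rclone copy old: new: --files-from=chunk-list.txt \
    #       --verbose --ignore-times --no-traverse --stats=30s --transfers=8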
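
And to illustrate the distinction guessed at in post 5 between "Verify data" and "Upload verification": a toy sketch, in no way CloudDrive's actual implementation, just the two checks expressed as code. Every name here is made up for illustration.

    import hashlib

    # A dict standing in for cloud storage, plus stored checksums.
    cloud = {}
    checksums = {}

    def upload(chunk_id: str, data: bytes, upload_verification: bool) -> None:
        cloud[chunk_id] = data
        checksums[chunk_id] = hashlib.sha1(data).hexdigest()
        if upload_verification:
            # "Upload verification": re-download immediately and compare,
            # catching a bad upload as you go.
            assert cloud[chunk_id] == data, "upload corrupted in transit"

    def download(chunk_id: str, verify_data: bool) -> bytes:
        data = cloud[chunk_id]
        if verify_data:
            # "Verify data": cryptographic validation (SHA-1) on read,
            # catching data modified while sitting in the cloud.
            if hashlib.sha1(data).hexdigest() != checksums[chunk_id]:
                raise IOError(f"chunk {chunk_id} failed SHA-1 validation")
        return data

    upload("chunk-0", b"hello", upload_verification=True)
    print(download("chunk-0", verify_data=True))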