
Choruptian

Members
  • Posts: 28
  • Days Won: 1

Posts posted by Choruptian

  1. I have the same problem; it typically happens when I lose my internet connection:

    [Screenshot: CloudDrive.UI_FsxLhigAmk.png]

    It's just stuck as shown:

    [Screenshot: CloudDrive.UI_apo4vCUiMb.png]

    I've waited well over an hour and it seems to just not do anything. I think I would have to force a recovery to make something happen. (EDIT: restarting the computer "fixes" it.)

    I think this has been an issue for a long time, but I guess it could be unrelated (I installed the latest beta today to try data duplication).

  2. I managed to migrate a drive without much issue by following the instructions that triadpool provided:

    1. Used the latest BETA to eliminate duplicate chunks, as rclone can't distinguish them when copying by name.
    2. Used "--files-from=" with a list of every chunk in the CONTENTS directory.
      • If you're following these instructions and want to enumerate the chunks again because new data was added, then you will either have to delete the file in *-data-ChunkIdStorage and re-mount, or wait for https://stablebit.com/Admin/IssueAnalysis/27357 to gain traction, where I requested the ability to decrypt the database ourselves.
      • If you're following these instructions and want to fetch a list of all the chunks:
        • Download: http://sqlitebrowser.org/
        • Navigate to: %programdata%\StableBit CloudDrive\Service\Db\ChunkIds
        • Copy the .db file while there is no activity on the disk. (This avoids missing any chunks and avoids the file lock.)
        • Open the file, go to Execute SQL and run "SELECT `Chunk` FROM `ChunkIds`"
        • Export to CSV by clicking the bottom-middle button. (New line character: Windows)
        • Use whatever tool you like to insert the Uid in front of the chunks. (Example: Regex - Find: "^" Replace: "be66510c-460d-4bf8-bcd4-c58480630d19-chunk-") See the sketch at the end of this post.
      • The reason for using "--files-from=" at all is that rclone uses the v2 API, which causes it to not find every file in the directory.
    3. Used my own client_id & secret, but I can't say whether that was faster; I just did it for neat API graphs and guaranteed API throughput.
    4. Added "--verbose --ignore-times --no-traverse --stats=30s --transfers=8" as additional flags to the rclone copy. (The 8 transfer threads may need tweaking depending on which client_id & secret you use.)
    5. You will of course also need to copy the remaining folders for this to work. (Excluding ChunkIdStorage, as it contains references to fileIds, which are invalidated by copying to a different account.)

    Things not to do: Don't start the copy while you've got the drive mounted (like I did :P).

     

    EDIT: Added the part about duplicate chunks, also to make the post more visible.
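
    For the Uid-prefixing and "--files-from=" steps, here's roughly what I mean as a small Python sketch (the file names are placeholders I picked, and the Uid is just the example from the regex above; use your own drive's Uid):

    # Takes the chunk list exported from DB Browser for SQLite (one chunk per line)
    # and prefixes each entry with the drive's Uid to build a --files-from list.
    # "chunks_export.csv" and "files-from.txt" are placeholder names.
    DRIVE_UID = "be66510c-460d-4bf8-bcd4-c58480630d19"

    with open("chunks_export.csv") as src, open("files-from.txt", "w") as dst:
        for line in src:
            chunk = line.strip().strip('"')
            if not chunk or chunk.lower() == "chunk":  # skip blanks and the CSV header row
                continue
            dst.write(f"{DRIVE_UID}-chunk-{chunk}\n")

    # Then hand the list to rclone together with the flags from step 4, roughly:
    #   rclone copy <source remote path> <destination remote path> --files-from=files-from.txt
    #     --verbose --ignore-times --no-traverse --stats=30s --transfers=8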

  3. You have the ability to completely disable pinning of any data. You can individually "Pin directories", which makes listing of all files/folders instantaneous, and "Pin metadata", which I'm unsure about but I'm guessing is file metadata, anything from time created to embedded album artwork. Because of this, the accumulated data from pinning is determined by the number of files/folders and the metadata size.

     

    The detaching process is equally "slow" on all drives (AFAIK). Takes maybe 30 seconds? (Depends on bandwidth, API throttling, and download/upload threads.) So you should be able to just click detach, let it do its thing, and move on to the other computer.

     

    You can use the drive the second you see it available in File Explorer. (On par with the detach time.)

  4. I managed to migrate a drive without much issue by following the instructions that triadpool provided. (See post 2 above for the full write-up of the steps.)

    Things not to do: Don't start the copy while you've got the drive mounted (like I did :P).

  5. I might be wrong, but I think they are separate things. "Verify data" does cryptographic validation (SHA-1) to ensure that the data hasn't been modified while sitting in the cloud. "Upload verification" actually re-downloads the data to ensure you uploaded the right content as you go.

     

    EDIT: A quick test seems to corroborate this as unticking "Verify data" yields "ValidationAlgorithm: None" in Drive details.
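
    Very roughly, the difference as I understand it (just a conceptual sketch in Python, not CloudDrive's actual code):

    import hashlib

    # "Verify data": when a chunk is downloaded, check its SHA-1 against the
    # checksum recorded at upload time, to catch data that was modified or
    # corrupted while sitting in the cloud.
    def verify_data_on_download(downloaded: bytes, recorded_sha1: str) -> bool:
        return hashlib.sha1(downloaded).hexdigest() == recorded_sha1

    # "Upload verification": right after uploading a chunk, re-download it and
    # compare against what was sent, to catch bad uploads as you go.
    def upload_verification(uploaded: bytes, redownloaded: bytes) -> bool:
        return uploaded == redownloaded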

  6. I've got an inkling that it's the "*-data-ChunkIdStorage" folder that already exists that causes this. I've yet to try deleting it; I'm waiting on an answer in my support ticket.

     

    Could just be a bug, though.

  7. I am having similar issues. For some reason, a folder on my drive containing 2TB of data has gone corrupt. The built-in Windows disk check tool didn't do anything to solve the problem. After updating to .819 the drive appeared as RAW in Disk Management in Windows and is no longer accessible from Windows Explorer.

     

    What's happening :(

     

    Same issue here.

  8. From what I understand, you can even do this right now, but if you were to write to the CloudDrive from one client at the same time as another client was reading the same data, it would experience an out-of-sync filesystem at times, which doesn't really seem healthy to me. It's very difficult to synchronize this, I imagine. I could foresee an option to mount a drive as read-only on both sides, however.

  9. .630
    * Added the ability to create ReFS cloud drives.
    - ReFS drives can be created starting with Windows 8 and newer (both in client and server editions).
    - ReFS drives cannot be attached to OSes older than Windows 8.
    - ReFS volumes can be up to 1 PB in size (and are not limited to 256 TB like NTFS). The 1 PB limit is the maximum drive size in StableBit CloudDrive and is not a ReFS limit.
    - The metadata pinning engine cannot yet pin ReFS directory entries.

     

    I can't find anything more about this in the changelogs so I'll hazard a guess and say it's not yet supported.

  10. I suggest you lower the Prefetch trigger to just 1 MB (or something smaller), because the software you use to play video files probably won't buffer 40 MB at once, as that would be pointless on a local disk. You might also need to tweak some settings in the video player to have it work really well. Consider using VLC and/or something else just to see if there are differences. Try to keep the CloudDrive interface visible while playing the movie as well; it should be possible to deduce whether it's fetching chunks in a sustainable way as you're playing the video.

     

    From what I've read I haven't been able to tell whether you have issues uploading as well; care to clarify?

  11. It eventually gets all the data, but it's only downloading 1 chunk at a time, which only accounts for a small number of files, relatively speaking. My cache size is usually just 1 GB more than the total amount required for all the pinned items. It's not a direct issue; it's just _very_ slow when you have a lot of files/directories.

  12. That's the option I'm talking about. I want it to be multithreaded or boosted at least. Sorry if I wasn't clear.

     

    It would make things easier if the chunks that are downloaded for pinning directories were closer together, so that prefetching could then be used for boosting. I'm not sure if that's the right step to take, however (MFT optimization).

  13. I've not used Plex with CloudDrive, no. (yet)

     

    I've got a 100 TB for fun and a 16 TB that I'm currently operating with. I've used chkdsk with the 100 TB before to resolve an issue I had way back when. (This was with assistance from StableBit, who even fixed an issue with chkdsk for me. ^_^ )

     

    I don't use DrivePool; I'm guessing you're pooling together different CloudDrives and not using it as a cache.

     

    I presume that for Plex to work it requires a fair bit of reading ahead in whatever media is playing, so it might be necessary to change settings within it. Otherwise it's just about adjusting the prefetch settings to allow the same to happen. I presume you're not exceeding your own bandwidth by playing movies of too high a quality or whatever. I think it would indeed be advisable to operate with one drive at a time, because the API limit with Google is tied to your IP and not your account (AFAIK).

     

    I should probably also mention that a memory leak was just fixed by the dev in the latest release (http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt). It might be pertinent to install that if you plan to test streaming.

  14. I've been using Google Drive for over a year with almost zero issues. My internet connection is 100/30 and I've kept to 10/10 threads for download/upload, as that doesn't cause GDrive throttling and seems to utilize my bandwidth quite optimally. It would be possible to increase this slightly if you feel like it, particularly for download; this just requires light experimentation.

     

    Currently I'm using version 1.0.0.784 which is found here: http://dl.covecube.com/CloudDriveWindows/beta/download/

    Small note: I and possibly some others are currently experiencing some form of memory leak on this version. However, I can't suggest a different version that doesn't have the problem, as I've had some difficulty even determining its cause.

     

    Changelogs are found here: http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt

    It's probably a good idea to upgrade to the latest version, but it's also considered good practice to wait a while, or download the second-to-latest version, to be sure it's relatively stable. This is still beta software, after all.

     

    Cache type: This is possible to change after the drive is created, but I would use Expandable for the most part, and Proportional when copying a lot of data, as it automatically throttles (slows down the copy process) to allow for continuous uploading while keeping the cache low. (Read the (I)nformation when making the drive.)

    Storage chunk size: I would keep to the default if your connection is in the same range as mine. (Basically, this is the amount of data you are required to upload whenever you make a change to your data.)

    Sector size: Keep at the default of 4 KB.

    Chunk cache size: Possible to change after the drive is created. (20 MB is fine.)

     

    Everything else I would also just leave as default.

     

    It could be an idea to create different CloudDrives with different settings for different types of media, mainly for different prefetching settings (streaming movies vs songs) and possibly Storage chunk size.

     

    Otherwise, I would imagine it's considered best practice to have your cache on a fast SSD/HDD. I also prefer to keep my cache as small as possible in case any problems arise that cause it to perform recovery on the drive (basically re-uploading the entire cache). I have mine set to just 1 GB (the default).
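
    To sum up the values mentioned above in one place (just my own informal summary; the names below aren't the exact labels in the UI):

    # My personal settings for Google Drive on a ~100/30 connection.
    # Not official names or defaults, just the values discussed above.
    my_clouddrive_settings = {
        "download_threads": 10,       # 10/10 avoids GDrive throttling for me
        "upload_threads": 10,
        "cache_type": "Expandable",   # use Proportional when copying a lot of data
        "local_cache_size_gb": 1,     # kept small in case a recovery re-uploads the cache
        "storage_chunk_size": "default",
        "sector_size_kb": 4,          # keep at the default 4 KB
        "chunk_cache_size_mb": 20,    # 20 MB is fine; can be changed after creation
    }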

     

    PS: These are just my own tips and I'm not affiliated with StableBit. Feel free to wait for others to comment.

  15. Would it at all be possible to optimize the pinning engine for directories? It's quite determined to only download one chunk+offset at a time. I looked into MFT optimization, but I'm not sure how much it would help, and I wonder if it's enough to just use a program like TreeSize to scan/traverse all the directories so that, in effect, the data would be picked up from the cache when pinning hits it. Reason for nagging: it has now taken 1.5 hours just to traverse my 200 GB of data.

  16. I just had something like this happen. A weird set of circumstances where, at one point, it was spamming this over and over in the Service Log:

     

    0:33:45.2: Warning: 0 : [CloudDrive] Error updating statistics. Systemet finner ikke angitt fil

    0:33:45.5: Warning: 0 : [CloudDrive] Error getting write requests from driver. Systemet finner ikke angitt fil
     
    "Systemet finner ikke angitt fil" -> "The system cannot find the file specified"
     
    I unfortunately didn't think to log anything else like using the tools OP used.
     
    I can't say whether the issues are related. However, there's definitely some sort of memory leak going on whose pattern I've been unable to put together. I'll report back if I can find a way to replicate it.
  17. It transfers the chunks pretty fast, but I had to run the command about 10 times to get all of the files copied because of Google API limits. It took about 20 minutes to copy 15,000 files from a 350GB drive. All of this doesn't really matter right now since the copied drive won't mount in StableBit.

     

    Interesting. Did you copy it exactly as a new folder (StableBit CloudDrive) or did you try to integrate it with already established disks?

  18. As a secondary request to this, would it be possible to limit the writing to such an extent that you don't continuously re-upload the same chunk? I guess instead of just limiting, you'd require X amount of data to accumulate before a chunk is "written".

     

    Does any of this make sense? I'm just trying to establish whether or not I'm right about this pattern I'm seeing after doing huge copy operations with limited upload.

  19. Hello,

     

    In some (weird) cases CloudDrive doesn't seem to respect the limit I set on upload/download threads and uses up to 60 threads. This is obviously not ideal. I can't explain what causes it, but it seems to have a tie-in with an unexpected shutdown at least. I can fix it for the most part by setting it to 0/0 threads, waiting a while, and then setting it back to what I want.

     

    I'm not sure if this is related to the "Write thread boosting" in Technical details but I turned that off as well when trying to fix it (just now).

     

    Sorry for the rambling; I don't really know how to describe it.
