ufo56 (Members, 30 posts)

Posts posted by ufo56

  1. [screenshot attachment: ErDx.png]

    On 10/27/2020 at 10:55 PM, Christopher (Drashna) said:

    For anyone experiencing this issue:

    This is a known issue, due to changes on Google's end.

    The fix is in the beta version, but that hasn't shipped yet. The simplest way to fix this is to grab the beta version:

    http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.2.0.1356_x64_BETA.exe

     

    Also, once a stable release is out, you should get that, and will be switched back to the stable channel for updates (unless you explicitly set it otherwise).

     

     

  2. https://www.reddit.com/r/DataHoarder/comments/emuu9l/google_gsuit_whats_it_like_and_is_it_still_worth/fdw1cri/

     

    To recap for clarity:

    * A decent chunk of bans have been occurring, mostly to a mixture of bogus .edu domains and domains with fewer than 5 users.

    * A Google contact says that further purges will continue as enforcement is ratified on the 12th.

    * GSuite uploads have been slowly capped to 120 MB/s per some mixture of domain + user + IP.

    * Speculation continues on how the usage of TDs and SAs is seen as "abuse" even with 5 or more users.

    Qs:

    * Has anyone asked (or is it advised to ask) a GSuite support member about the use of SAs? I can't imagine Google giving the ability to do a thing without intending to.

  3.  

    "Error 0x80070643" means you have a pending reboot.  Reboot the system and try again. 

    Also yeah, grab the above linked version. 

    Yeah, I figured that out from the log, but a reboot did not solve the issue. That's why I'm asking :P

    I'll try one more time.

  4. Another folder full of files got corrupted. Drive has not been disconnected since files have been added.

    And another folder corrupted. Now I'm running chkdsk /f. Let's see if that changes anything...

    The annoying part is that you can't even remove the corrupted folder.

     

    This is scary.

     

    I just started using CloudDrive about a week ago. Still under trial and will probably purchase. 

     

    I have transferred about 1.5 TB of data and discovered one folder completely corrupted and cannot be opened. Using google as my provider and was transferring data over from Amazon Cloud. Had to recopy the entire folder again.

     

     

    1. What's the best method or software to check if more of my existing data in Stablebit Cloud Drive is corrupted? 

     

    2. Is it safer to create 4 x 1 TB drives for usage as opposed to having just one large 4TB drive?

     

    3. Is data corruption less likely to occur if I use an unencrypted Cloud Drive instead of a fully encrypted one?

     

    4. Is there any situation where the entire Cloud Drive can be corrupted and data completely irrecoverable?

     

     

    I do not have the resources to keep separate copies of my data in different clouds or on physical HDDs.

     

    Thanks !

     

    1: Probably by computing an MD5 hash before upload and then checking afterwards whether it matches the original.

    2: Yes, though most people here are probably using 100+ TB drives.

    3: I don't believe there is a difference.

    4: Someone on the forum mentioned a few months back that their whole drive was corrupted.
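    A minimal sketch of that hash-check idea, assuming you still have the original copy to compare against (the helper names here are mine, not part of CloudDrive):

```python
import hashlib
from pathlib import Path

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(src_dir, dst_dir):
    """Compare every file under src_dir against its copy under dst_dir.

    Returns the relative paths whose copies are missing or whose
    hashes differ (i.e. likely corrupted files)."""
    mismatches = []
    src_dir, dst_dir = Path(src_dir), Path(dst_dir)
    for src in src_dir.rglob("*"):
        if src.is_file():
            dst = dst_dir / src.relative_to(src_dir)
            if not dst.is_file() or md5_of(src) != md5_of(dst):
                mismatches.append(src.relative_to(src_dir))
    return mismatches
```

    Running this against the original folder and the CloudDrive copy would list any files that no longer match.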

  5. *Gdrive

    *NTFS, 100TB disk

    *Can't find the integrity check in the latest version

    *Upload verification off

     

     

    Memtest done. It's ok. Cache SSD ok.

     

    Some corrupted files are dated a few months back, some a few days or a week. Hundreds of files copied in between are fine; just some of them are broken. The annoying part is that I can't even delete them.

    Another folder full of files got corrupted. Drive has not been disconnected since files have been added.


  7. Much of this is wrong or a bit misinformed.

     

    Using a CloudDrive as a torrent drive does not result in any additional API calls. It will result in additional reads and writes to the cache drive, but CloudDrive will still upload and download the chunks with the same amount of API usage as any other use.

     

    Beyond this, rClone results in API bans because it neither caches filesystem information locally nor respects Google's throttling requests with incremental backoff. CloudDrive does both of these things, and will do so regardless of its use case, torrents or otherwise.

     

    In any case, CloudDrive DOES work for torrents. In particular, it makes a great drive to hold long-term seeds. The downside, as observed, is that hash checks and such will take a long time initially, but once that's completed you should notice few differences as long as your network speeds can accommodate the overhead for CloudDrive. 

    Thanks for clearing things up.
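    For illustration, the incremental backoff behavior described above could look roughly like this minimal sketch (ThrottledError and request are hypothetical stand-ins, not CloudDrive's actual implementation):

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for a provider's rate-limit response (e.g. HTTP 403/429)."""

def with_backoff(request, max_retries=5, base_delay=1.0):
    """Call request(); on a throttling error, wait with exponential
    backoff plus jitter before retrying, giving up after max_retries."""
    for attempt in range(max_retries):
        try:
            return request()
        except ThrottledError:
            # wait 1 s, 2 s, 4 s, ... plus up to 1 s of random jitter
            time.sleep(base_delay * (2 ** attempt) + random.random())
    raise RuntimeError(f"still throttled after {max_retries} retries")
```

    A client that backs off like this stays under the provider's rate limits instead of hammering the API until it gets banned, which is the difference being described between CloudDrive and a naive rClone setup.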

  8. To be honest, I think that using CloudDrive for torrenting is abusing cloud storage providers quite a lot, with very high API calls.

    As I understand it, Covecube pays for the API calls that we use in CloudDrive, so we don't use the API limit set for our own user. I say this because rclone users are complaining all the time about 24-hour API bans. Or did I get it wrong?

  9. Hi

     

    I have been using CloudDrive and Plex for maybe 6 months now. A Windows machine where the drive is shared to the LAN and then mounted on Linux. Nothing special. With the exact same configuration, everything worked great before.

    But lately there have been many issues playing media files. I can't seek in video files, and Plex just stops after 20-30 seconds. Some files won't even start. When I try to play the files with VLC they work, so the files seem to be okay, no corruption whatsoever.

    100 TB disk.

    Download threads: 15

    Upload threads: 5

    Prefetch trigger: 10 MB

    Prefetch forward: 400 MB

    Prefetch time window: 400 s

    Minimum download size: 10 MB

    Local chunk cache size: 100 MB

  10. Hello

     

    I tried to resize my drive. At the moment it is 10 TB, and I extended it to 30 TB. CloudDrive shows 30 TB, but in reality it's still 10 TB.

    Disk Management shows 20 TB of unallocated space, and when I try to extend the 10 TB partition I get this error:

     

    The volume cannot be extended because the number of clusters will exceed the maximum number of clusters supported by the file system
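    For context, that error comes from NTFS's cap of 2^32 − 1 clusters per volume (as commonly formatted): with the default 4 KB cluster size a volume tops out around 16 TB, so a volume formatted that way cannot grow to 30 TB without being reformatted with larger clusters. A quick sketch of the arithmetic:

```python
# NTFS (as commonly formatted) allows at most 2**32 - 1 clusters per
# volume, so the maximum volume size depends on the cluster size
# chosen when the volume was first formatted.
MAX_CLUSTERS = 2**32 - 1

def max_volume_tb(cluster_size_bytes):
    """Maximum NTFS volume size in TiB for a given cluster size."""
    return MAX_CLUSTERS * cluster_size_bytes / 1024**4

for kb in (4, 8, 16, 32, 64):
    print(f"{kb:>2} KB clusters -> max ~{max_volume_tb(kb * 1024):.0f} TB")
```

    So to reach 30 TB the volume would need at least 8 KB clusters, which NTFS cannot change in place on an existing volume.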

    Sorry, I forgot to answer.

     

    Different drives. My unraid machine and CloudDrive machine are separate. I mount my CloudDrive share over a gigabit network, and when I copy from the unraid machine to the CloudDrive share, everything freezes. CloudDrive can't upload or download data until the file copy is done.

     

    It is somewhat better when I use an SSD as the cache drive. With an HDD I start getting bandwidth errors and so on. Maybe the HDD can't handle an incoming LAN transfer and uploading/downloading data at the same time.

     

     

    I get 300/300 24/7. I even tried a 500/500 connection, but same thing.

     

     

    It would be nice if CloudDrive could empty the cache when it reaches a certain percentage.
