Showing results for tags 'clouddrive'.

  1. Hi! I have had this problem for a few weeks now. I use CloudDrive as a backend for Plex; in the beginning everything went great! But now, every night CloudDrive will disconnect. The error that I receive in the console is: "<nameofdrive> is having trouble downloading data from Google Drive in a timely manner. Error: Thread was being aborted. This error can be caused by having insufficient bandwidth or bandwidth throttling." The second error message says: "<nameofdrive> is having trouble downloading data from Google Drive. Make sure that you are connected to the Internet and have sufficient bandwidth available." And every morning when I wake up, the drive has disconnected, so I have to rescan some libraries from time to time. I'm sure that my network is not the problem (see the bandwidth sanity-check sketch after this list): everything else is working great, and the second Google Drive does not disconnect (I have two drives). I have tried disabling "Perform extensive media analysis during maintenance" in Plex, with no effect. Any tips for my situation? Thanks for a great piece of software! edit: I am on the latest beta release (1.0.0.854)
  2. Hi guys, I know that migrating an existing DrivePool to a new PC is pretty simple (DrivePool "sees" that there are disks that were already part of a pool and simply re-creates that existing pool), but how does this work for CloudDrive? Is it enough to install CloudDrive, activate the license and then (re)connect to the cloud provider(s) I need? Will it then recognize that there are existing CloudDrives? Somehow I keep thinking this will create a new drive instead of linking me to the already existing one, so I hope you can put my mind at ease now that I'm dumping a large amount of data into my current drive from my old PC :-)
  3. Hi, I have a drive pool with a local drive and a cloud drive in it. I have duplication set to 2x, so it's basically a mirrored setup. Is there a way I can set it so that ALL reading will be done off the local disk? I want to do this because reading off the cloud drive while I have all the data locally doesn't make sense, and even though it's striping, it's still slower than just reading it all off the local disk... Any thoughts about this? Thanks
  4. Hello, I recently started upgrading my 3TB & 4TB disks to 8TB disks and started 'removing' the smaller disks in the interface. It shows a popup: Duplicate later & force removal -> I checked yes on both and clicked Continue. Two days later it still shows 46%, as it keeps migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage' -> no luck. Attempted to disable Pool Duplication -> infinite loading bar. Changed file placement rules to populate other disks first -> no luck. Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps... and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for 'duplication'... It looks like it is migrating 'duplicated' files to the cloud, instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA, DrivePool: 2.2.0.651 BETA
  5. Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012 R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with the Cloud Drive.
     Current setup:
     - Use the SSD Optimizer to write to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offload to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB).
     - I've set balancing to percentage (as the disks are different sizes).
     - 1x 64GB SSD dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB).
     Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep 1 file on SSD, 1 on HDD and 1 on Cloud Drive... but it is loading from Cloud Drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk and just have the CloudDrive as a backup?
     Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue? After which it re-balances that data off?
     Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy, which doesn't seem to fill the 'modified' RAM cache but is only 20% slower... = fewer crashes... but the system is still unresponsive.
     Problem 4: I'm having I/O errors: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times). This causes the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they can be analysed later on (if necessary).
     Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I've tried it and had a lot of lockups/crashes and an average of 10 Mbps upload - so I removed it. Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)?
     Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  6. Hello, all! I have been testing out CloudDrive's latest beta and have run into an issue. I had some odd errors, unrelated to CloudDrive, on my server, which resulted in my decision to destroy my 10TB cloud drive [on two separate occasions]. As part of a reinstall, I decided to delete the CloudDrive folder on my Google Drive as well, since I will reauthorize and recreate another drive later. However, now I have found an issue. My Google Drive still reports over a TB in use, which is approximately what I had on the drive prior to deleting it. While I do not see any chunk files, I can perform a search for title:chunk and get results. It looks like I still have all of the chunks from the destroyed drives, even if I do not actively see them in the Drive directory. I would like to find a method to delete them. Is this stubborn chunk data staying behind normal? Can I use CloudDrive to delete it? Doing so manually would be a bit impractical. I see in my research that Google provides an API for deleting files. Will I need to create my own script to mass-delete these pseudo-hidden chunk files (see the Drive cleanup sketch after this list)? Lots of questions, I know. Let me know if I can provide any helpful data. I haven't deleted/uninstalled CloudDrive, so any logs it has created by default should still exist. Windows Server 2012 R2 Datacenter x64, 16GB RAM, 3.5GHz CPU, CloudDrive at the latest version as of May 17, 2016. -Tomas Z.
  7. I just can't connect to OneDrive no matter what I try. I even removed two-step verification, to no avail. Does anybody else have this problem, and is there a solution? Thanks.
  8. I'm trying out your software suite as I am about to replace my old WHS1 with a new home server. Pool & Scanner seem to run just fine, but CloudDrive seems to mess with me :-(
     Problem 1: I set up a drive on box.com. When the drive is set up I get a lot of error messages, in short: I/O Error - CD drive h:\ having trouble uploading data - Error: The request was aborted: The request was canceled. The new drive gets some MB of data that is marked "To Upload" - what is this? The drive is still empty?
     Problem 2: I really need some way to control bandwidth. CD basically kills my internet connection trying to upload those MB of secret data. I just called my internet provider and yelled some, because my up speed was actually crippled by CD. I suggest an option for scheduled syncing of CD, with some extra options to:
     * Disable the schedule for X hours
     * Set max bandwidth to XXX (non-scheduled)
     * Set max bandwidth to XXX for X hours (non-scheduled)
     Problem 3: A minor thing. UAC asks every time I start the CD UI whether I want to allow CD to change my PC. Why? This doesn't happen when using the Scanner or Pool UI.
     And maybe a stupid question: I assumed that CD would mirror my cloud account, but it seems to reserve space for a new "virtual drive" on my PC?
  9. Hi, just testing your software. I did a standard shutdown under Win7. When I started up again, the CloudDrive software went into recovery mode, after which it began a complete re-upload of everything, even files that were uploaded a few days ago. Sorry, I did not enable logging, and since I'm only testing it's no big deal; I just want to know whether this is normal behavior and whether anyone has encountered a similar problem. Also, will this be the default behavior after a power failure? That would be kind of bad if I had multiple terabytes of data saved.
  10. Hello, System: Windows 10, Intel i4770. CloudDrive version: 1.0.0.417 BETA 64-bit.
      Background/Issue: I created a local CloudDrive pointing to Amazon S3. I chose full encryption. The volume/bucket was created successfully on S3, as were the formatting and drive-letter assignment for my local drive. I could see the METADATA GUID, and a few chunk files were created by the process in the S3 bucket. Next, I uploaded one file and noticed that additional "chunk" files were created in the bucket. After the StableBit GUI status indicated that all file transfers were completed, I deleted the file from my local CloudDrive. After a while, I refreshed the S3 bucket and saw that all the chunks were still there... including the new ones that were created when I transferred the file.
      Here are my newb questions:
      1. Am I correct in stating that when I delete a local file which is on my local StableBit CloudDrive pointing to Amazon S3, it should remove the applicable "chunks" from Amazon S3?
      2. How can I be sure that when I delete a large file from my local CloudDrive, the applicable S3 storage bucket size reduces accordingly (see the S3 tally sketch after this list for one way to measure it)?
      3. Am I off base to think that the two file systems should stay in sync? During my tests, and after waiting quite some time, the S3 bucket never decreased in size even though I deleted the large file(s). Which means Amazon boosts their bottom line at my expense.
      Thanks, and I searched for this but could not find any discussion on point.
  11. Avira is telling me that CloudDrive.UI.exe contains TR/Dropper.MSIL.Gen2. Is it telling me the truth? This is the installer file I am using: StableBit.CloudDrive_1.0.0.403_x64_BETA.msi (see the checksum sketch after this list for one way to verify the download).
  12. Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool already works VERY well with StableBit CloudDrive. StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a pool without any issues or workarounds. Why is this important, and how does it affect your pool? You can use the File Placement Rules to control which files end up on which drive. This means that you can place specific files on a specific CloudDrive. You can use the "Disk Usage Limiter" to allow only duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk. These are some very useful tricks for integrating the products already. And if anyone else finds some neat tips or tricks, we'll add them here as well.
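
For item 1, a quick way to rule the network in or out is to measure sustained download throughput outside of CloudDrive at the times the disconnects occur. Below is a minimal Python sketch using only the standard library; the test URL is a placeholder and should be replaced with any large file you can legally download.

```python
# Rough sustained-download throughput check (Python 3 standard library only).
# TEST_URL is a placeholder; point it at any large file you can legally fetch.
import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file
CHUNK = 1024 * 1024  # read 1 MiB at a time

start = time.monotonic()
total = 0
with urllib.request.urlopen(TEST_URL) as resp:
    while True:
        data = resp.read(CHUNK)
        if not data:
            break
        total += len(data)

elapsed = time.monotonic() - start
print(f"Downloaded {total / 1e6:.1f} MB in {elapsed:.1f} s "
      f"({total * 8 / elapsed / 1e6:.1f} Mbps)")
```

If the overnight rate is far below the daytime rate, ISP throttling becomes a plausible cause of the download timeouts.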
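For item 6, orphaned chunk files can indeed be removed through the Google Drive API, as the poster suspects. Below is a sketch assuming the google-api-python-client library and OAuth credentials with full Drive scope; get_credentials is a hypothetical placeholder for your own auth setup. Only run something like this after confirming that no remaining CloudDrive drives on the account still need those chunks, since files.delete bypasses the trash.

```python
# Sketch: bulk-delete leftover "chunk" files via the Google Drive v3 API.
# Requires: pip install google-api-python-client google-auth
from googleapiclient.discovery import build

creds = get_credentials()  # hypothetical helper: load your OAuth2 credentials
service = build("drive", "v3", credentials=creds)

page_token = None
while True:
    # Same filter the poster used in the search box (v3 syntax): names containing "chunk".
    resp = service.files().list(
        q="name contains 'chunk' and trashed = false",
        fields="nextPageToken, files(id, name)",
        pageToken=page_token,
    ).execute()
    for f in resp.get("files", []):
        print("deleting", f["name"])
        # Permanently deletes the file, bypassing the trash.
        service.files().delete(fileId=f["id"]).execute()
    page_token = resp.get("nextPageToken")
    if page_token is None:
        break
```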
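For item 10, one way to answer question 2 empirically is to tally the bucket's object count and total size before and after a deletion. A minimal sketch, assuming the boto3 library and AWS credentials configured in the usual locations (the bucket name is a placeholder):

```python
# Sketch: tally object count and total size in an S3 bucket, to verify whether
# chunk objects are actually freed after deleting files from the CloudDrive.
# Requires: pip install boto3
import boto3

BUCKET = "my-clouddrive-bucket"  # hypothetical bucket name

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

count = 0
total_bytes = 0
for page in paginator.paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        count += 1
        total_bytes += obj["Size"]

print(f"{count} objects, {total_bytes / 1e9:.2f} GB")
```

A plausible explanation for the observed behavior (not confirmed here): CloudDrive stores a block-level drive image rather than individual files, and an NTFS delete merely marks blocks as free, so the chunks holding that data would persist until overwritten and the bucket would not shrink immediately.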
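For item 11, detections like TR/Dropper.MSIL.Gen2 on a .NET installer are often generic heuristics, but it is worth ruling out a corrupted or tampered download before dismissing the warning. A small sketch to compute the installer's SHA-256, which can then be compared against a checksum from the vendor or a fresh download over HTTPS:

```python
# Sketch: compute the SHA-256 of the downloaded installer for comparison
# against a vendor-provided checksum or a re-downloaded copy.
import hashlib

PATH = r"StableBit.CloudDrive_1.0.0.403_x64_BETA.msi"

h = hashlib.sha256()
with open(PATH, "rb") as f:
    # Read in 1 MiB blocks so large files don't need to fit in memory.
    for block in iter(lambda: f.read(1 << 20), b""):
        h.update(block)

print(h.hexdigest())
```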