Covecube Inc.

Showing results for tags 'clouddrive'.
Found 34 results

  1. Hello, I recently started upgrading my 3TB and 4TB disks to 8TB disks, and started 'removing' the smaller disks in the interface. It shows a popup, "Duplicate later & force removal" -> I checked yes on both. After 2 days it showed 46%, as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage' -> no luck. I attempted to disable pool duplication -> infinite loading bar. I changed the file placement rules to populate the other disks first -> no luck.

     Google Drive uploads at 463 Mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20 Mbps... and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for 'duplication'... It looks like it is migrating the 'duplicated' files to the cloud, instead of writing the unduplicated data to the other disks for a fast removal. Is there any way to speed up this process?

     CloudDrive: 1.0.0.592 BETA
     DrivePool: 2.2.0.651 BETA
  2. Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012 R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with CloudDrive.

     Current setup:
     - The SSD Optimizer writes to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offloads to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB).
     - Balancing is set to percentage (as the disks are different sizes).
     - 1x 64GB SSD is dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB).

     Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep one file on SSD, one on HDD, and one on the cloud drive... but it is loading from the cloud drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Is there any way to have it read from the SSD/local disk and just keep the CloudDrive copy as a backup?

     Problem 2: Once the cache disk fills up, everything slows down to a crawl... Is there any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it rebalances that data off?

     Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy, which doesn't seem to fill the 'modified' RAM cache but is only 20% slower... = fewer crashes... but the system is still unresponsive.

     Problem 4: I'm getting "I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times)." This causes the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting them I moved them to the root so they could be analyzed later on (if necessary).

     Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I tried it and had a lot of lockups/crashes and an average of 10 Mbps upload, so I removed it. Can I re-enable it once it exits experimental status, and allow the data from Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)?

     Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider.

     Settings screenshots:
  3. Hello, all! I have been testing out CloudDrive's latest beta and have run into an issue. I had some odd errors, unrelated to CloudDrive, on my server, which resulted in my decision to destroy my 10TB cloud drive [on two separate occasions]. As part of a reinstall, I decided to delete the CloudDrive folder on my Google Drive as well, since I will reauthorize and recreate another drive later.

     However, I have now found an issue. My Google Drive still reports over a TB in use, which is approximately what I had on the drive prior to deleting it. While I do not see any chunk files, I can perform a search for title:chunk and get results. It looks like I still have all of the chunks from the destroyed drives, even if I do not actively see them in the Drive directory. I would like to find a method to delete them.

     Is this stubborn chunk data staying behind normal? Can I utilize CloudDrive to delete them? Doing so manually would be a bit impracticable. I see in my research that Google provides an API for deleting files. Will I need to create my own script to mass-delete these pseudo-hidden chunk files? (A sketch of what such a script might look like follows below.)

     Lots of questions, I know. Let me know if I can provide any helpful data. I haven't deleted/uninstalled CloudDrive, so any logs it has created by default still exist.

     Windows Server 2012 R2 Datacenter x64, 16GB RAM, 3.5GHz CPU, CloudDrive at the latest version as of May 17, 2016.

     -Tomas Z.
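
     If it does come to a custom script, a minimal sketch along these lines could enumerate and delete the leftover chunk files. This is an illustration, not an official Covecube tool: it assumes the Google Drive API v3 via google-api-python-client, with OAuth credentials for the full drive scope already saved to token.json (a hypothetical path), and it permanently deletes anything whose name contains "chunk".

     # Minimal sketch, assuming google-api-python-client and google-auth are
     # installed and token.json (hypothetical) holds authorized credentials.
     from google.oauth2.credentials import Credentials
     from googleapiclient.discovery import build

     creds = Credentials.from_authorized_user_file(
         "token.json", ["https://www.googleapis.com/auth/drive"])
     drive = build("drive", "v3", credentials=creds)

     # Collect matching file IDs first, then delete, so pagination is not
     # disturbed by the deletions themselves. "name contains" is the API
     # counterpart of the web UI's title: search.
     matches = []
     page_token = None
     while True:
         resp = drive.files().list(
             q="name contains 'chunk' and trashed = false",
             fields="nextPageToken, files(id, name)",
             pageToken=page_token).execute()
         matches.extend(resp.get("files", []))
         page_token = resp.get("nextPageToken")
         if page_token is None:
             break

     for f in matches:
         print("Deleting", f["name"])
         drive.files().delete(fileId=f["id"]).execute()  # permanent; bypasses trash

     It is safer to dry-run first (comment out the delete call and just print the matches), since the query matches anything with "chunk" in its name, not only CloudDrive data.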
  4. I just can't connect to OneDrive, no matter what I try. I even removed two-step authentication, to no avail. Does anybody else have this problem, and is there a solution? Thanks.
  5. I'm trying out your software suite as I am about to replace my old WHS1 with a new home server. DrivePool & Scanner seem to run just fine, but CloudDrive seems to mess with me :-(

     Problem 1: I set up a drive on box.com. Once the drive is set up, I get a lot of error messages, in short: "I/O Error: CD drive h:\ having trouble uploading data. Error: The request was aborted: The request was canceled." The new drive gets some MB of data that is marked "To Upload". What is this? The drive is still empty?

     Problem 2: I really need some way to control bandwidth. CD basically kills my internet connection trying to upload those MB of secret data. I just called my internet provider and yelled at them, when it was actually CD crippling my upload speed. I suggest an option for scheduled syncing of CD, with some extra options:
     * Disable the schedule for X hours
     * Set max bandwidth to XXX (non-scheduled)
     * Set max bandwidth to XXX for X hours (non-scheduled)

     Problem 3: A minor thing. UAC asks every time I start the CD UI whether I want to allow CD to change my PC. Why? This doesn't happen when using the Scanner or DrivePool UI.

     And maybe a stupid question: I assumed that CD would mirror my cloud account, but it seems to reserve space for a new "virtual drive" on my PC?
  6. Hi, just testing your software. I did a standard shutdown under Win7. When I started up again, the CloudDrive software went into recovery mode, after which it began a complete re-upload of everything, even files that had been uploaded a few days ago. Sorry, I did not enable logging, and since I'm only testing it's no big deal; I just want to know whether this is normal behavior and whether anyone has encountered a similar problem. Also, will this be the default behavior after a power failure? That would be kind of bad if I had multiple terabytes of data saved.
  7. Hello,

     System: Windows 10, Intel i7-4770. CloudDrive version: 1.0.0.417 BETA, 64-bit.

     Background/Issue: I created a local CloudDrive pointing to Amazon S3 and chose full encryption. The volume/bucket was created successfully on S3, as were the formatting and drive-letter assignment for my local drive. I could see the METADATA GUID, and a few chunk files were created by the process in the S3 bucket. Next, I uploaded one file and noticed that additional "chunk" files were created in the bucket. After the StableBit GUI status indicated that all file transfers had completed, I deleted the file from my local CloudDrive. After a while, I refreshed the S3 bucket and saw that all the chunks were still there... including the new ones that were created when I transferred the file.

     Here are my newbie questions:
     1. Am I correct in stating that when I delete a local file which is on my local StableBit CloudDrive pointing to Amazon S3, it should remove the applicable "chunks" from Amazon S3?
     2. How can I be sure that when I delete a large file from my local CloudDrive, the applicable S3 bucket's size reduces accordingly? (See the sketch below for one way to measure this.)
     3. Am I off base to think that the two file systems should stay in sync? During my tests, and after waiting quite some time, the S3 bucket never decreased in size even though I deleted the large file(s). Which means Amazon boosts their bottom line at my expense.

     Thanks; I searched for this but could not find any discussion on point.
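
     As for question 2, one way to check the bucket's real size independently of either console is to total the object sizes with boto3. A minimal sketch, assuming boto3 is installed, credentials come from the standard AWS config, and 'my-clouddrive-bucket' is a placeholder name:

     # Sum the size of every object in the bucket, to watch whether it
     # actually shrinks after local deletes.
     import boto3

     s3 = boto3.client("s3")
     total_bytes = 0
     for page in s3.get_paginator("list_objects_v2").paginate(
             Bucket="my-clouddrive-bucket"):
         for obj in page.get("Contents", []):
             total_bytes += obj["Size"]

     print(f"Bucket size: {total_bytes / 2**30:.2f} GiB")

     Running this before and after the delete (and again after giving the software time to flush) shows whether the chunks are ever reclaimed or merely retained.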
  8. Avira is telling me that CloudDrive.UI.exe contains TR/Dropper.MSIL.Gen2. Is it telling me the truth? This is the installer file I am using: StableBit.CloudDrive_1.0.0.403_x64_BETA.msi
  9. To clarify what I mean here: this is for when you want to ensure that one copy of a duplicated file is on a CloudDrive disk, but you don't care where the other copy is (it can be on any local disk). This can be confusing and a bit counterintuitive. Additionally, it will cause your system to want to continually rebalance, as the balancing ratio will always be off.

     To start off with, you'll need to micromanage the pool a bit. Create a file placement rule for the folders that you want duplicated (or use "\*" for the entire pool). In this rule, select ONLY the CloudDrive disk and hit "OK".

     You may want to disable automatic balancing, or at least disable the "Allow balancing plug-ins to force immediate balancing" option. This should help rein in the balancing, so that it's not constantly attempting to run.

     We do plan to add a "duplication grouping" option to handle this entire process natively, specifically for better synergy/integration between StableBit CloudDrive and StableBit DrivePool. However, we don't have an ETA on that, as it may be very complicated to implement.