Covecube Inc.

Showing results for tags 'clouddrive'.



Found 27 results

  1. Hello everyone! I plan to integrate DrivePool with SnapRAID and CloudDrive and need your assistance with the setup itself. My end goal is to pool all data drives together under one drive letter for ease of use as a network share, and also to have the data protected from failures both locally (SnapRAID) and in the cloud (CloudDrive). I have the following disks: D1 3TB, D: drive (data); D2 3TB, E: drive (data); D3 3TB, F: drive (data); D4 3TB, G: drive (data); D5 5TB, H: drive (parity). Action plan: create LocalPool X: in DrivePool with the four data drives (D-G); configure SnapRAID with disks D-G as data drives and disk H: as parity; do an initial sync and scrub in SnapRAID; use Task Scheduler (Windows Server 2016) to perform daily syncs (SnapRAID.exe sync) and weekly scrubs (SnapRAID.exe -p 25 -o 7 scrub); create CloudPool Y: in CloudDrive with a 30-50GB local cache on an SSD, to be used with G Suite; create HybridPool Z: in DrivePool and add X: and Y:; create the network share pointing to X:. Is my thought process correct in terms of protecting my data in the event of a disk failure or disks going bad (I will also use Scanner for disk monitoring)? Please let me know if I need to improve the above setup or if there is something I am doing wrong. Looking forward to your feedback, and thank you very much for your assistance!
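The scheduling step in the plan above could be sketched roughly like this. The paths and task name are hypothetical (point them at the actual snapraid.exe and config), and the scrub flags are the same ones named in the post:

```shell
:: snapraid-jobs.cmd - command lines for the scheduled tasks
:: (hypothetical install path; adjust to your system)

:: daily sync
"C:\SnapRAID\snapraid.exe" sync

:: weekly scrub: check 25% of the array, only blocks older than 7 days
"C:\SnapRAID\snapraid.exe" -p 25 -o 7 scrub

:: registering e.g. the daily sync from an elevated prompt might look like:
:: schtasks /Create /TN "SnapRAID Sync" /TR "C:\SnapRAID\snapraid-jobs.cmd" /SC DAILY /ST 03:00
```

This is only a sketch of the Task Scheduler wiring, not a full SnapRAID setup; the config file with the data/parity disk mappings still has to exist first.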
  2. Hey, I've set up a small test with 3x physical drives in DrivePool: 1 SSD drive and 2 regular 4TB drives. I'd like to make a setup where these three drives can be filled up to the brim and any contents are duplicated only on a fourth drive: a CloudDrive. No regular writes or reads should be done from the CloudDrive; it should only function as parity for the 3 drives. Am I better off making a separate CloudDrive and scheduling an rsync to mirror the DrivePool contents to the CloudDrive, or can this be done with a DrivePool (or DrivePools) + CloudDrive combo? I'm running the latest beta of both. What I tried so far didn't work too well; some files I was moving were immediately being written to the parity drive, even though I set it to only contain duplicated content. I got that to stop by going into File Placement and unticking the parity drive for every folder (but this is an annoying thing to have to maintain whenever new folders are added).
  3. I suspect the answer is 'No', but have to ask to know: I have multiple gsuite accounts and would like to use duplication across, say, three gdrives. The first gdrive is populated already by CloudDrive. Normally you would just add two more CloudDrives, create a new DrivePool pool with all three, turn on 3X duplication and DrivePool would download from the first and reupload to the second and third CloudDrives. No problem. If I wanted to do this more quickly, and avoid the local upload/download, would it be possible to simply copy the existing CloudDrive folder from gdrive1 to gdrive2 and 3 using a tool like rclone, and then to attach gdrive2/3 to the new pool? In this case using a GCE instance at 600MB/s. Limited of course by the 750GB/day/account. And for safety I would temporarily detach the first cloud drive during the copy, to avoid mismatched chunks.
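For what it's worth, the copy step being described could be sketched with rclone along these lines. The remote names and the folder name are assumptions (the CloudDrive folder layout depends on how the drive was created), and server-side copies between accounts don't work with every setup:

```shell
# Server-side copy of the CloudDrive data folder from one Google
# account to another ("gdrive1"/"gdrive2" are hypothetical remote names).
rclone copy "gdrive1:StableBit CloudDrive" "gdrive2:StableBit CloudDrive" \
    --drive-server-side-across-configs --progress

# Compare the two copies before attaching the new drive to the pool.
rclone check "gdrive1:StableBit CloudDrive" "gdrive2:StableBit CloudDrive"
```

As the poster notes, detaching the source drive for the duration of the copy avoids chunks changing mid-transfer, and the 750GB/day/account quota still applies to server-side transfers.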
  4. I must have accidentally checked the box that stores the encryption key because now CloudDrive mounts my drive when the machine boots up (even after a full shutdown). How do I tell it to ask for my key again at startup? Thanks.
  5. Hello, I am on version 1.0.2.976 and I tried to update to the newest version (1.1.0.1030). As you can see in the screenshot, the shown version isn't updating correctly. I tried to install it again with admin rights, but that's not possible because the installer only lets me choose "repair" and "uninstall". Can I uninstall CloudDrive and install it again without losing my settings? It's not only the version that isn't updating; I'm also missing some features that are there on my 2nd PC (auto retry when the drive gets disconnected, ...). OS: Win Server 2012 (up to date). No security solution or firewall installed. Regards
  6. ericksilvas

    HybridPool confusion

    So I'm a bit confused by the HybridPool concept. I have a LocalPool with about 6 hard drives totalling 30TB. I have about 25TB populated, and with this new hybrid concept I'm trying to work out if I can use another pool (CloudPool) for duplication. The part I'm a little confused about is that when I create a HybridPool with my CloudPool and my LocalPool, I have to add all my 25TB of data to the new HybridPool drive. On the LocalPool all my data appears in the "Other" section. Do I really have to transfer all the data from the LocalPool to the HybridPool, or am I missing something? Thanks
  7. Hello! I would like to ask: what are the best settings for CloudDrive with Google Drive? I have the following setup/problems. I use CloudDrive version 1.0.2.975 on two computers, with one Google Drive account with 100TB of available space. When I upload some data to the drive, the upload speed is perfect: on the first Internet line I get an upload speed of nearly 100 MBit/sec, and on the second (slower) Internet line I also get full upload speed (20 MBit/sec). When I try to download something, the download speeds are incredibly slow. On the first line I have a download speed of 3 MB/sec; it should be 12.5 MB/sec (max speed of the line). On the second line I have a download speed of 5 MB/sec; it should be 50 MB/sec (max speed of the line). I already set the upload/download threads to 12 on both computers. I also set the minimum download size to 10.0 MB in the I/O Performance settings. What are the best settings to increase the download speed? Thank you very much in advance for any help. Best regards, Michael
  8. mushm0uth

    Stablebit Cloud Drive Support

    I just wanted to drop a note here on the community forum for folks who may be considering StableBit Cloud Drive and are researching the support forum prior to making a purchasing decision. I've had questions on two occasions since I've been a license holder that led me to open support cases for assistance. Both times I received prompt, personal and highly knowledgeable engagement from support. For such a cutting edge (in my mind) product, that is so affordable (in my opinion), I couldn't ask for a better support system. The forums here are great too. My two cents -- thanks team for a great product backed by a great staff.
  9. keithdok

    Schedule Metadata Pinning?

    I have a 10TB expandable drive hosted on Google Drive with a 2TB cache. Whenever I add a large amount of data to the drive (for example, I added 1TB yesterday), CloudDrive will occasionally stop all read and write operations to do "metadata pinning". This prevents Plex from doing anything while it does its thing, and it took over 3 hours yesterday. I don't want to disable metadata pinning, but I would like to be able to schedule it, if necessary, for 3AM or something like that. In the meantime, is there a drawback to having such a large local cache? Would it improve operations like metadata pinning and crash recovery if I decreased it?
  10. Frank1212

    Onedrive drivepool using clouddrive

    Hi, I have an Office 365 account where I get 5 1TB OneDrives. I am trying to link 3 of them together using CloudDrive and DrivePool. I have as storage drives on my PC: 4x Toshiba HDWE160 6TB, 1x Seagate ST3000DM001-9YN166 3TB. I have all the drives pooled together using DrivePool. When creating the OneDrive DrivePool, how should I create the CloudDrive cache? Should I put all 3 cache partitions on a single, dedicated cache disk? Can I put a single cache partition on each of 3 different drives in the storage DrivePool? Or do I need a dedicated cache drive for each of my OneDrive cloud connections? What are your recommendations? I've tried putting the cache partitions on the same dedicated cache disk and get a BSOD every time I write a lot of files to it. Thank you.
  11. When using DrivePool in conjunction with CloudDrive I have a problem when uploading a large amount of data. I have the following setup: 3x 3TB drives as cache for CloudDrive, with 500GB dedicated and the rest expandable; I am in an EU datacentre with 1Gb/s internet connectivity; I have a DrivePool with 3 different accounts (2x Google Drive and 1x Dropbox); and I have another DrivePool where my downloads go, consisting of the space left over from the drives listed above. When I attempt to copy/move files from the Downloads DrivePool into the CloudDrive DrivePool, one drive always drops off - randomly, never one in particular. DrivePool then marks the drive as read only and I can't move media to it. I would have thought the cache would handle this temporary outage; I would also expect the local drive cache to handle the sudden influx of files and not knock off the CloudDrives. I also would think that DrivePool would still be usable and not mark the drive as read only. What am I doing wrong, and how do I fix this behaviour?
  12. Because we've already had a couple of questions about this: in their current forms, StableBit DrivePool already works VERY well with StableBit CloudDrive. StableBit CloudDrive disks appear as normal, physical disks. This means that you can add them to a Pool without any issues or workarounds. Why is this important, and how does it affect your pool? You can use the File Placement Rules to control which files end up on which drive; this means that you can place specific files on a specific CloudDrive. You can use the "Disk Usage Limiter" to allow only duplicated data to be placed on specific drives, which means you can place only duplicated files on a specific CloudDrive disk. These are some very useful tricks for integrating the products already, and if anyone else finds some neat tips or tricks, we'll add them here as well.
  13. achok166

    GSuites: Download Quota Errors

    Hi, hoping someone can help me out with some constant errors that I'm seeing. I have unlimited storage via GSuites (5 accounts). I continuously receive the same errors from both my GDrive storage containers: --------------- I/O Error: Cloud Drive (G:/) - (Drivename) is having trouble downloading data from Google Drive. The operation is being retried. Error: The download quota for this file has been exceeded. The error has occurred 44 times. Make sure you are connected to the Internet and have sufficient bandwidth available. ------------------ Here is my setup: dual Xeon 2670s on a dedicated VPS, 1 GB up/down, tons of bandwidth available. So hardware and bandwidth should NOT be a problem. Used primarily for Plex - maybe 5-7 simultaneous users max, but generally 3-4 users on average. I use local VPS drives for Radarr/Sonarr/etc. I'm wondering if perhaps the issue is happening because of the 5 GDrive accounts I purchased; I do have both of my GDrives in the same email account (256TB x 2, 150GB+ cache, 10 threads up / 5 down, 250MB throttle up/down, upload threshold 1MB / 5 min, prefetch 10MB / 100MB / 120 sec (have tried 60 sec and 240 sec also)). Perhaps my quota error is because Google looks at the accounts in aggregate and not as individual drives, so it sees 20 threads up / 10 down? Or any other advice someone can give? 2nd question: has anyone tried using the read-only feature of CloudDrive for Plex? I'm thinking of uploading data locally, but only having my VPS have read-only access to the media that is being transferred for Plex. Thanks in advance.
  14. I've been having issues for the past few days with DrivePool showing "Statistics are Incomplete" in the GUI. The issue seems to be the CloudDrive I have in there, because it is showing "Other" where there are actually duplicated files. In this state, duplication and balancing are not working. I checked the PoolPart folder on that drive and see all the duplicated files and folders that DrivePool has placed on there, and it had been working for a couple of weeks until this week. I uninstalled CloudDrive and DrivePool and reinstalled both to see if there was just a glitch, but still no luck. I also enabled file system logging in DrivePool and had it "re-measure", but there are no errors in the logs that I can see. I just don't understand what the issue could be, especially when I can see all the files in the PoolPart folder, and it looks like it has also placed some new files onto it today and those are showing up. I'm currently using the following application versions: DrivePool: 2.2.0.651 BETA; CloudDrive: 1.0.0.870; OS: Windows 10 Pro. Here's a screenshot showing my current setup, with the CloudDrive showing mostly "Other" when all that's on it is the PoolPart folder and its contents. This screenshot shows the files and folders correctly within the PoolPart folder.
  15. Greetings, I have two NAS devices, plus a few local hard drives, and I'd like to aggregate them all together in one Pool (i.e., so that they show up as a single drive in Windows). From reading through the website and forum it seems like this may be possible, but in my initial experiments with DrivePool and CloudDrive I'm having trouble achieving this. I can create a pool of the local drives in DrivePool, and I can create a drive of a shared folder using CloudDrive. But I'm not seeing that CloudDrive location show up in DrivePool. From my initial looking I think I'd prefer to just use DrivePool if possible, as it seems to have the sorts of features I'm interested in (e.g., I know that I want to use the Ordered File Placement plugin). Ideally I'd like to be able to just enter a UNC path for a shared folder on each NAS in order to add it as a pooled data location in DrivePool. But I could be fine with mapping NAS drives as well, though that doesn't seem to do anything. I'm trying out DrivePool 2.1.1.561 x64 and CloudDrive 1.0.0.854 x64. The background on all of this is that I have CrashPlan installed on this computer, and I want to create a pooled location to point CrashPlan to for storing data that will be backed up TO this computer from another computer (using CrashPlan computer-to-computer backup). CrashPlan only supports selecting a single location for this scenario, but since I have several old NAS's, plus some local hard drives, I'd like to pool them all into one drive to use as my CrashPlan repository. For those that see the value in multiple offsite backups, you'll appreciate knowing that I also back up to CrashPlan's servers as well. Thanks in advance for any help or advice on all this!
  16. tanjer45

    Google Drive will disconnect every night

    Hi! I have had this problem for a few weeks now. I use CloudDrive as a backend to Plex; in the beginning everything went great! But now, every night CloudDrive will disconnect. The error that I receive in the console is: "<nameofdrive> is having trouble downloading data from Google Drive in a timely manner. Error: Thread was being aborted. This error can be caused by having insufficient bandwidth or bandwidth throttling." The second error message says: "<nameofdrive> is having trouble downloading data from Google Drive. Make sure that you are connected to the Internet and have sufficient bandwidth available." And every morning I wake up to find the drive has disconnected, resulting in my having to rescan some libraries from time to time. I'm sure that my network is not the problem; everything else is working great and the second Google Drive does not disconnect (I have two drives). I have tried disabling "Perform extensive media analysis during maintenance" in Plex, without effect. Any tips for my situation? Thanks for a great software! edit: I am on the latest beta release (1.0.0.854)
  17. Weirdomusic

    Moving CloudDrive to a new pc

    Hi guys, I know that migrating an existing DrivePool to a new PC is pretty simple (DrivePool "sees" that there are disks that were already part of a pool and simply re-creates that existing pool), but how does this work for CloudDrive? Is it enough to install CloudDrive, activate the license and then (re)connect to the cloud provider(s) I need? Will it then recognize that there are existing CloudDrives? Somehow I keep thinking this will create a new drive instead of linking me to the already existing one, so I hope you can put my mind at ease now that I'm dumping a large amount of data into my current drive from my old PC :-)
  18. KiLLeRRaT

    Specify a disk to always read from

    Hi, I have a drive pool with a local, and a cloud drive in it. I have duplication set to 2x, so it's basically a mirrored setup. Is there a way I can set it so that ALL reading will be done off the local disk? I want to do this, because reading off of the cloud drive, while I have all the data locally doesn't make sense, and even though it's striping, it's still slower than just reading it all off the local disk... Any thoughts about this? Thanks
  19. Mirabis

    Duplicate Later not working

    Hello, I recently started upgrading my 3TB & 4TB disks to 8TB disks and started 'removing' the smaller disks in the interface. It shows a popup: Duplicate later & force removal -> I check yes on both and continue. Two days in, it shows 46%, as it kept migrating files off to the CloudDrives (Amazon Cloud Drive & Google Drive Unlimited). I went and toggled off those disks in 'Drive Usage' -> no luck. Attempted to disable Pool Duplication -> infinite loading bar. Changed file placement rules to populate other disks first -> no luck. Google Drive uploads at 463mbps, so it goes decently fast; Amazon Cloud Drive is capped at 20mbps... and this seems to bottleneck the migration. I don't need to migrate files to the cloud at the moment, as they are only used for 'duplication'... It looks like it is migrating 'duplicated' files to the cloud instead of writing unduplicated data to the other disks for a fast removal. Any way to speed up this process? CloudDrive: 1.0.0.592 BETA; DrivePool: 2.2.0.651 BETA
  20. Mirabis

    DrivePool + CloudDrive Setup Questions

    Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with the Cloud Drive. Current: - Use SSD Optimizer to write to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offload to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB). - I've set balancing to percentage (as the disks are different sizes). - 1x 64GB SSD dedicated to local cache for the Google Drive mount (unlimited size / specified 20TB). Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep 1 file on SSD, 1 on HDD and 1 on Cloud Drive... but it is loading from Cloud Drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk and just have the CloudDrive as backup? Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it rebalances that data off? Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using Teracopy (which doesn't seem to fill the 'modified' RAM cache but is only 20% slower... = fewer crashes... but the system is still unresponsive). Problem 4: I'm having I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times). This causes the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root so they could be analysed later on (if necessary). Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I've tried it and had a lot of lockups/crashes and an average of 10mbits upload - so I removed it. Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)? Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  21. Hello, all! I have been testing out CloudDrive's latest beta and run into an issue. I had some odd errors, unrelated to CloudDrive, on my server, which resulted in my decision to destroy my 10TB cloud drive [on two separate occasions]. As part of a reinstall, I decided to delete the CloudDrive folder on my Google Drive as well, as I will reauthorize and recreate another drive later. However, now I have found an issue. My Google Drive still reports having over a TB in use, which is approximately what I had on the drive prior to deleting it. While I do not see any chunk files, I can perform a search by looking for title:chunk and get results. It looks like I still have all of the chunks from the destroyed drives, even if I do not actively see them in the Drive directory. I would like to find a method to delete them. Is this stubborn chunk data staying behind normal? Can I utilize CloudDrive to delete them? Doing so manually would be a bit impracticable. I see in my research that Google gives access to an API for deleting files. Will I need to create my own script to mass delete these pseudo-hidden chunk files? Lots of questions, I know. Let me know if I can provide any helpful data. I haven't deleted/uninstalled CloudDrive, so any logs it has created by default would still exist. Windows Server 2012 R2 Datacenter x64, 16GB RAM, 3.5GHz CPU, CloudDrive at latest version as of May 17, 2016. -Tomas Z.
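A rough sketch of the API route mentioned above, hitting the Drive v3 REST endpoints directly. The query string and token handling are assumptions, pagination is omitted, and anything whose name matches "chunk" would be deleted, so it would need care on an account holding other data:

```shell
# List files whose name contains "chunk", then delete each by id.
# $TOKEN is an OAuth 2.0 access token with Drive scope; jq extracts ids.
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://www.googleapis.com/drive/v3/files?q=name+contains+'chunk'&fields=files(id,name)" |
jq -r '.files[].id' |
while read -r id; do
  curl -s -X DELETE -H "Authorization: Bearer $TOKEN" \
    "https://www.googleapis.com/drive/v3/files/$id"
done
```

Only a sketch; a real cleanup script would need to obtain the token, loop over nextPageToken, and probably dry-run the list first.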
  22. Just can't connect to OneDrive, no matter what I tried. I even removed 2-pass authentication, to no avail. Does anybody else have this problem, and is there a solution? Thanks.
  23. PsychoCheF

    Need some kind of bandwidth control

    I'm trying out your software suite as I am about to replace my old WHS1 with a new home server. Pool & Scanner seem to run just fine, but CloudDrive seems to mess with me :-( Problem 1: I set up a drive on box.com. When the drive is set up I get a lot of error messages, in short: I/O Error: CD drive h:\ having trouble uploading data. Error: The request was aborted: The request was canceled. The new drive gets some MB of data that is marked "To Upload" - what is this? The drive is still empty? Problem 2: I really need some way to control bandwidth. CD basically kills my internet connection trying to upload those MB of secret data. I just called my internet provider and yelled some, because my up speed was actually crippled by CD. I suggest an option for scheduled syncing of CD, with some extra options for: * Disable schedule for X hours * Set max bandwidth to XXX (non-scheduled) * Set max bandwidth to XXX for X hours (non-scheduled). Problem 3: A minor thing. UAC asks every time I start the CD UI if I want to allow CD to change my PC. Why? This doesn't happen when using the Scanner or Pool UI. And maybe a stupid question: I assumed that CD would mirror my cloud account, but it seems to reserve space for a new "virtual drive" on my PC?
  24. Hi, just testing your software. I did a standard shutdown under Win7. When I started up again, the CloudDrive software went into recovery mode, after which it began a complete re-upload of everything, even files that were uploaded a few days ago. Sorry, I did not enable logging, and I'm only testing, so it's no big deal; I just want to know whether this is normal behavior and whether anyone has encountered a similar problem. Also, will this be the default behavior after a power failure? This would be kind of bad if I had multiple terabytes of data saved.
  25. Hello, System: Windows 10, intel i4770. CloudDrive version: 1.0.0.417 BETA 64bit. Background/Issue: I created a local CloudDrive pointing to Amazon S3. I chose full encryption. The volume/bucket was created successfully on S3, as were the formatting and drive assignment for my local drive. I could see the METADATA GUID, and a few chunk files were created by the process in the S3 bucket. Next, I uploaded one file and noticed that additional "chunk" files were created in the bucket. After the StableBit GUI status indicated that all file transfers were completed, I deleted the file from my local CloudDrive. After a while, I refreshed the S3 bucket and saw that all the chunks were still there... including the new ones that were created when I transferred the file. Here are my newb questions: 1. Am I correct in stating that when I delete a local file which is on my local StableBit CloudDrive pointing to Amazon S3, it should remove the applicable "chunks" from Amazon S3? 2. How can I be sure that when I delete a large file from my local CloudDrive, the applicable S3 storage bucket size reduces accordingly? 3. Am I off base to think that the two file systems should stay in sync? During my tests, and after waiting quite some time, the S3 bucket never decreased in size even though I deleted the large file(s). Which means Amazon boosts their bottom line at my expense. Thanks; I searched for this but could not find any discussion on point.
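On question 2, one way to watch whether the bucket actually shrinks after a delete is the AWS CLI (the bucket name here is a placeholder):

```shell
# Total object count and size for the bucket backing the CloudDrive;
# run before and after deleting the large file and compare the two.
aws s3 ls s3://my-clouddrive-bucket --recursive --summarize | tail -2
```

Note that the chunks are fixed-size containers for filesystem blocks, so deleting a file inside the drive won't necessarily remove whole chunks right away; freed blocks may simply be reused by later writes.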