Covecube Inc.

Showing results for tags 'google drive'.



Found 21 results

  1. Hi there, for a few weeks I've been using CloudDrive in combination with Google Drive. I have a connection that can handle 500 Mbit download and upload. I know there is a limit on how much you can upload each day, around 500GB/750GB, but does that also apply to downloading? I can't get speeds higher than 80 Mbit, and if I'm streaming 4K content that is definitely not enough. In the picture below you can see that I get throttled when I reach around 85 Mbps, and that is not enough. I also added screenshots of my settings. Hope you guys can help me out.
  2. Curious if there is a way to put the CloudDrive folder, which defaults to (root)/StableBit CloudDrive, in a subfolder? Not only for my OCD, but in a lot of instances Google Drive will start pulling down data, and you would have to manually de-select that folder per machine, after it was created, to prevent that from happening. When your CloudDrive has terabytes of data in it, this can bring a network and machine to their knees. For example, I'd love to do /NoSync/StableBit CloudDrive. That way, when I install anything that is going to touch my Google Drive storage, I can disable that folder for syncing, and any subfolder I create down the road (such as the CloudDrive folder) would automatically not sync as well. Given the nature of the product and how CloudDrive stores its files (used as a mountable storage pool, separate from the other data on the cloud service hosting it, AND not readable by any means outside of CloudDrive), it seems natural and advantageous to have a choice of where to place that CloudDrive data. Thanks, Eric
  3. Hello, I currently have a healthy 100TB Google Drive mounted via CloudDrive and I would like to move the data to two 60TB Google Drives. However, when I attempt to create a new 60TB drive I get the following error: There was an error formatting a new cloud drive. You can try to format it manually from Disk Management. Value does not fall within the expected range. at CloudDriveService.Cloud.CloudDrivesTasks.CreateCloudPart.#R5c(TaskRunState #xag, CreateCloudPartTaskState #R9f, IEnumerable`1 #8we) at CoveUtil.Tasks.Concurrent.Task`1.(TaskRunState , Object , IEnumerable`1 ) at CoveUtil.Tasks.Concurrent.TaskGroup..() at CoveUtil.ReportingAction.Run(Action TheDangerousAction, Func`2 ErrorReportExceptionFilter) Attempts to format it manually from Disk Management also fail. Interestingly, I can create a small 10GB drive without any issues (I have not tried other sizes to find the breaking point). I have verified the OS is healthy (DISM/SFC tests all pass). I am attaching the error report that was generated along with the log file. Any help would be greatly appreciated. Thank you. Alex ErrorReport_2019_03_20-11_55_05.9.saencryptedreport CloudDrive.Service-2019-03-20.log
  4. Hi all, I'm attempting to mount a 1.5GB (not a typo) Google Drive from Windows 10. I am selecting the full encryption option and choosing a 1TB HDD with 300GB free for the cache. Every time I do this I am presented with a BSOD and the following errors: Stop code: SYSTEM_THREAD_EXCEPTION_NOT_HANDLED What failed: CLASSPNP.SYS Any ideas what could be causing this and how I can resolve it?
  5. I suspect the answer is 'no', but I have to ask to know: I have multiple G Suite accounts and would like to use duplication across, say, three gdrives. The first gdrive is already populated by CloudDrive. Normally you would just add two more CloudDrives, create a new DrivePool pool with all three, turn on 3x duplication, and DrivePool would download from the first and re-upload to the second and third CloudDrives. No problem. If I wanted to do this more quickly, and avoid the local upload/download, would it be possible to simply copy the existing CloudDrive folder from gdrive1 to gdrive2 and 3 using a tool like rclone, and then attach gdrive2/3 to the new pool? In this case using a GCE instance at 600MB/s, limited of course by the 750GB/day/account cap. And for safety I would temporarily detach the first cloud drive during the copy, to avoid mismatched chunks.
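Whatever tool does the copy, the per-account daily upload cap mentioned in the post dominates the timeline, not the GCE instance's 600MB/s. A back-of-the-envelope sketch (assuming the 750GB/day figure from the post, that the cap applies per destination account, and that copies to both new drives run in parallel — none of which is an official Google number):

```python
import math

DAILY_CAP_GB = 750  # per-account daily upload cap cited in the post

def copy_days(drive_size_gb: float, daily_cap_gb: float = DAILY_CAP_GB) -> int:
    """Whole days needed to copy a drive into one destination account,
    assuming the transfer saturates the daily cap and nothing else limits it."""
    return math.ceil(drive_size_gb / daily_cap_gb)

# e.g. a 10 TB (10,240 GB) CloudDrive takes copy_days(10240) == 14 days
# per destination account, regardless of the instance's raw bandwidth.
```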
  6. I don't know how to fix this, but if I'm supposed to be using the cloud drive (through Google) as a "normal" drive, it always behaves erratically. I've mentioned it a few times here, but the software will not even BEGIN uploading until the very last byte has been written to the local cache. So for example, if I'm uploading 7 videos totaling 100GB, even with 99GB in the queue to upload, it won't start uploading until that last gigabyte is written to the local cache. This is frustrating because it causes two problems: error messages and wasted time. The first big issue is the constant barrage of notifications that "there's trouble connecting to the provider", or downloading, or uploading, but it isn't even uploading! A consequence of these constant error messages is that the drive unmounts, because the messages trigger the software to disconnect. So unless I'm babying my computer, I can't leave the files transferring as normal (which means not pressing the pause upload button). The other issue is the waste of time. Instead of uploading what's currently in the local cache and subsequently clearing that queue, it waits for EVERYTHING to write to the local cache. Settings: UL/DL Threads = 10; Upload Threshold: 1 MB or 5 Minutes. Error messages: I/O error: Cloud Drive (D:\) is having trouble uploading data to Google Drive. This operation is being retried. Error: SQLite error cannot rollback - no transaction is active. This error has occurred 2 times. Make sure that you are connected to the internet and have sufficient bandwidth available. I/O error: Cloud Drive (D:) is having trouble uploading data to Google Drive in a timely manner. Error: Thread was being aborted. This error has occurred 2 times. This error can be caused by having insufficient bandwidth or bandwidth throttling. Your data will be cached locally if this persists. If these two problems were fixed, this software would be perfect.
Now, I've recommended this program many times, because when you're monitoring it, it works beautifully. The only thing missing is rectifying these issues so it actually behaves like a normal drive that doesn't constantly throw up errors.
  7. Since a couple of days ago, I have been getting constant "is not authorized with Google Drive" errors on my drives as soon as I try to upload to them. My account is a G Suite unlimited account. I've been uploading about 50GB of data today and had to re-authorize all my drives (I have 4) 2 to 3 times each. This is a real pain to deal with, and I lose access to my drives again and again. There does not seem to be any logic to it; the drives fail with this error one after the other, with random intervals between occurrences on each drive. As a side note, I have been uploading to only one of my 4 mapped drives. Does anybody know what the issue could be? Thanks.
  8. Hello! I would like to ask: what are the best settings for CloudDrive with Google Drive? I have the following setup/problems. I use CloudDrive version 1.0.2.975 on two computers, with one Google Drive account with 100TB of available space. When I upload data to the drive, the upload speed is perfect: on the first internet line I get nearly 100 Mbit/s, and on the second (slower) internet line I also get full upload speed (20 Mbit/s). When I try to download something, the download speeds are incredibly slow: on the first line I get 3 MB/s when it should be 12.5 MB/s (the line's maximum), and on the second line I get 5 MB/s when it should be 50 MB/s (the line's maximum). I already set the upload/download threads to 12 on both computers, and I also set the minimum download size to 10.0 MB in the I/O Performance settings. What are the best settings to increase the download speed? Thank you very much in advance for any help. Best regards, Michael
  9. So I have been using Google Drive via CloudDrive for months and months, and all of a sudden today I realized, when I ran a clean on my Plex server, that loads of files are missing. Since I run this daily, they were available yesterday and earlier today. In one video folder, every folder with a name after 'p' in the alphabet is missing. In another folder inside that same folder, half the files are missing. It only seems to affect one folder and its subfolders on the drive, and I have no explanation for the disappearance. Having done a reboot earlier today, it may have happened with the mounting of the drive after that reboot, but none of the missing files are in the recycle bin, and unmounting and remounting has no effect. I have a few questions: - Is there anything I could have done wrong, and how do I keep this from happening in the future? - Is there a way to rescan the drive (??) to see if the file deletion is permanent? - I have all the files backed up on Backblaze, so that's not a big deal, but should I build another drive instead of putting the files back on this one? Thanks for the help.
  10. copying cloud 1:1 (posted by skapy): Hello, I want to copy my Amazon Cloud Drive to Google Drive. Is it possible to clone it 1:1? Copying all folders and files takes too long, because Amazon Cloud Drive gets disconnected from StableBit CloudDrive after some time and requires me to click "retry" manually. Is it possible to clone the whole ACD StableBit CloudDrive to gdrive with rclone or similar tools? Thanks!
  11. I spend almost 3 hours every day trying to keep my CloudDrive + DrivePool setup alive. For me it's a hard job, with crashes and unresponsive systems and GUIs. ReFS, NTFS, going virtual, going BETA, cluster size, adding an SSD, the SSD add-on, cache size... trying different providers: Dropbox, Azure, FTP, Google Drive, Local... Nothing keeps my CloudDrives working for more than a few hours. But my latest problem is that I sleep too long, so CloudDrive removes my hard drives after dual blue screens. When my drives are getting removed I see: [CloudDrives] Drive <ID> was previously unmountable. User is not retrying mount. Skipping. 10 drives are now 5 drives... And now I need to force it...? So... what to do now? Start all over again? OK, this part was not CloudDrive related: I lost 2 SSDs with the last BSOD... sorry! But can I copy the cache folder to a new drive if I can recover the data?
  12. Hi, I'm kind of new to CloudDrive, but I have been using DrivePool and Scanner for a long while and never had any issues at all with them. I recently set up CloudDrive to act as a backup to my DrivePool (I don't even care to access it locally, really). I have a fast internet connection (Google Fiber Gigabit), so my uploads to Google Drive hover around 200 Mbps, and I was able to successfully upload about 900GB so far. However, my cache drive is getting maxed out. I read that the cache limit is dynamic, but how can I resolve this? I don't want CloudDrive taking up all but 5 GB of this drive. If I understand correctly, all this cached data is basically data that is waiting to be uploaded? Any help would be greatly appreciated! My DrivePool settings: My CloudDrive errors: The cache size was set as Expandable by default, but when I try to change it, it is grayed out. The bar at the bottom just says "Working..." and is yellow.
  13. Hello all, I am new to StableBit CloudDrive and I just wanted to ask a quick question. I am currently bulk uploading about 1.5TB of data over 4 GDrive accounts. I have increased replication to 2 and the upload is currently in progress. Every so often I get the following error message from one of the drives: Error: Thread was being aborted. When setting up, I followed this Plex user guide from Reddit: https://www.reddit.com/r/PleX/comments/61ppfi/stablebit_clouddrive_plex_and_you_a_guide/ which talks about upload threads potentially being a problem. I had them set exactly as the guide says; I have now lowered them to Upload: 5, Download: 5. Does this look OK? Is there anything else I can do? And is this something I can put back once the bulk upload has completed, as I will not be pushing 1.5TB in one go all the time?
  14. Hi, my current setup has CloudDrive working in conjunction with Google Drive for Plex, and it is working perfectly. Couldn't be happier. However, I would now like to have redundancy for the current Google Drive. To achieve that, do I just sync or copy the StableBit CloudDrive folder on Drive to the new Drive account? If one Drive account were to fall over, do I then simply link CloudDrive to the other account, and it will pick up all the files? Thanks in advance!
  15. I'm in the middle of doing a setup on G Suite. Seeing that they don't cap space (yet), I want to go that route (I'll upgrade to the required number of users if it becomes an issue later on). For a cloud drive setup (aiming at 65TB to mirror my local setup), are there any recommended methods for storage? I saw another post here recommending various prefetch settings, but it was not clear whether new drives should be sized a certain way: various 2TB drives, or max out at 10TB for each one and then combine them into one share using DrivePool? Ultimately I'm hoping to get an optimal setup out of the gate, and it probably would not hurt to document that here for others to follow as well. Ideally I'd use it with DrivePool so that it can be treated as one massive drive.
  16. Hey guys, I was getting slow transfers with Google Drive, so I set the threads to 12 up and 12 down. This worked for a while and everything was a bit faster. For the last two days, though, I have been getting countless Rate Limit Exceeded exceptions, even running 1 up and 1 down thread. I checked the Google Drive API guides online and found a bit about exponential backoff. So a few questions/thoughts: Is exponential backoff implemented for the Google Drive provider? If I set the provider to use, say, 12 up and 12 down threads, do they all get blasted out as simultaneous requests (causing rate limit exceptions later on)? Would it work to have something like a 'request gatekeeper' where you can set your own rate limits client-side, so that no matter how many threads you run, it always obeys the limit you set, and so that there is a 'master exponential backoff' in place? Is there any possibility of looking at the provider implementation code, or is this fully baked into the product? It'd be good if there were an API to allow anyone to build their own providers. Thanks for a good product! EDIT: Also, how will all the rate limiting work if you add, say, 5 Google Drives, each with 12 threads up and down? Quite quickly you will be making a TON of requests...
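The exponential backoff the post refers to is the retry scheme Google's API guides recommend: double the wait after each rate-limit response and add random jitter. A minimal client-side sketch, not CloudDrive's actual implementation — `RateLimitError` here is a hypothetical stand-in for however your client surfaces a 403 Rate Limit Exceeded response:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a 'Rate Limit Exceeded' response from the API."""

def call_with_backoff(request, max_retries=5, base_delay=1.0, max_delay=64.0):
    """Call `request()`, retrying on rate-limit errors with exponential
    backoff plus jitter; re-raises if all retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # wait 1s, 2s, 4s, ... (capped), plus random jitter so that
            # parallel threads don't all retry at the same instant
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay))
```

Sharing one delay schedule across all upload/download threads, rather than backing off per call as above, would be one way to get the "master exponential backoff" the post asks about.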
  17. I'm using Windows 7, on the latest stable release of DrivePool. I can't find any detail other than this bug report, which appears to have been ignored: https://stablebit.com/Admin/IssueAnalysisPublic?Id=1151 I keep getting the following error from Google Drive when trying to relocate the Google Drive folder to my DrivePool drive: 'Please select a folder on an NTFS drive that is not mounted over a network.' I'm not sure why Google Drive thinks this, since the drive appears as NTFS in Windows storage management and I'm not trying to mount over a network. Another thread where this was pointed out, on a different OS, years ago: http://community.covecube.com/index.php?/topic/528-google-drive-and-stablebit-drivepool/
  18. Hello everyone, I just started testing CloudDrive and have run into a problem copying files to Google Drive. I can copy 1 or 2GB video files to the CloudDrive folder with no problems. I have trouble when I copy 5 or 10 2GB files: after the first couple of files are copied to the folder, the copy process slows down, and after a while the copy aborts - OSX displays an error that there was a read/write error. In CloudDrive's logs I noticed that Google Drive reported a throttling error: 22:26:22.1: Warning: 0 : [ApiGoogleDrive:14] Google Drive returned error: [unable to parse] 22:26:22.1: Warning: 0 : [ApiHttp:14] HTTP protocol exception (Code=ServiceUnavailable). 22:26:22.1: Warning: 0 : [ApiHttp] Server is throttling us, waiting 1,327ms and retrying. 22:26:23.5: Warning: 0 : [IoManager:14] Error performing I/O operation on provider. Retrying. The request was aborted: The request was canceled. I thought written data is cached to the local drive first and then slowly uploaded to the cloud? Why would there be a throttling error when many large files are copied? Thanks.
  19. My setup is 30TB via a single Google Drive cloud drive, broken up into two 15TB NTFS partitions, using 7TB on one and 4TB on the other. Everything was working fine... but about a week ago I started to get thousands of errors: "The requested mime type change is forbidden". This occurs when uploading. I have about 40GB sitting in my 'To Upload' queue and it hasn't moved in a week, even though the upload rate has been over 100 Mbps practically constantly. Any ideas what this error is and how to fix it? I think I have tried everything I can on my end: reauthorizing, etc... Thanks.
  20. Hello, I'm using Windows Server 2016 TP5 (upgraded from 2012R2 Datacenter... for containers...) and have been trying to convert my Storage Spaces to StableBit pools. So far so good, but I'm having a bit of an issue with the cloud drive. Current setup: - Use the SSD Optimizer to write to one of the 8 SSDs (2x 240GB / 5x 64GB) and then offload to one of my hard disks (6x WD Red 3TB / 4x WD Red 4TB). - I've set balancing to percentage (as the disks are different sizes). - 1x 64GB SSD dedicated to the local cache for the Google Drive mount (unlimited size / specified 20TB). Problem 1: I've set my Hyper-V folder to duplicate [3x] so I can keep 1 file on SSD, 1 on HDD and 1 on the cloud drive... but it is loading from the cloud drive only. This obviously doesn't work, as it tries to stream the .vhd from the cloud. Any way to have it read from the SSD/local disk, and just have the CloudDrive as backup? Problem 2: Once the cache disk fills up, everything slows down to a crawl... any way I can have it fill up an HDD after the SSD so other transfers can continue, after which it re-balances that data off? Problem 3: During large file transfers the system becomes unresponsive, and at times even crashes. I've tried using TeraCopy (which doesn't seem to fill the 'modified' RAM cache, but is only 20% slower... = fewer crashes... but the system is still unresponsive). Problem 4: I'm getting I/O Error: Trouble downloading data from Google Drive. I/O Error: Thread was being aborted. The requested mime type change is forbidden (this error has occurred 101 times). This causes the Google Drive uploads to halt from time to time. I found a solution on the forum (manually deleting the chunks that are stuck), but instead of deleting I moved them to the root, so they could be analysed later on (if necessary). Problem 5 / Question 1: I have Amazon Unlimited Cloud Drive, but it's still an experimental provider. I've tried it and had a lot of lockups/crashes and an average of 10 Mbit upload - so I removed it.
Can I re-enable it once it exits experimental status and allow the data from the Google Drive to be balanced out to Amazon Cloud Drive (essentially migrating/duplicating to the other cloud)? Question 2: Does Google Drive require upload verification? I couldn't find any best practices/guidelines on the settings per provider. Settings screenshots:
  21. Hello, all! I have been testing out CloudDrive's latest beta and run into an issue. I had some odd errors, unrelated to CloudDrive, on my server, which resulted in my decision to destroy my 10TB cloud drive [on two separate occasions]. As part of a reinstall, I decided to delete the CloudDrive folder on my Google Drive as well, since I will reauthorize and recreate another drive later. However, I have now found an issue: my Google Drive still reports over a TB in use, which is approximately what I had on the drive prior to deleting it. While I do not see any chunk files, I can perform a search for title:chunk and get results. It looks like I still have all of the chunks from the destroyed drives, even if I do not actively see them in the Drive directory, and I would like to find a way to delete them. Is this stubborn chunk data staying behind normal? Can I use CloudDrive to delete it? Doing so manually would be a bit impractical. I see in my research that Google provides an API for deleting files. Will I need to create my own script to mass delete these pseudo-hidden chunk files? Lots of questions, I know. Let me know if I can provide any helpful data. I haven't deleted/uninstalled CloudDrive, so any logs it has created by default would still exist. Windows Server 2012 R2 Datacenter x64, 16GB RAM, 3.5GHz CPU, CloudDrive at the latest version as of May 17, 2016. -Tomas Z.
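If a script does turn out to be needed, the core of it is just a paging/delete loop. In the sketch below, `list_page` and `delete_file` are hypothetical callables you would wire up to Drive API v3 `files.list` (with a query such as `name contains 'chunk'`) and `files.delete`, using whatever authorized client you have; keeping the API calls injectable makes the loop easy to dry-run:

```python
def purge_chunks(list_page, delete_file):
    """Delete every file the search returns, page by page.

    `list_page(page_token)` must return (file_ids, next_page_token),
    with next_page_token None on the last page; `delete_file(file_id)`
    deletes one file. Returns the number of files deleted."""
    deleted = 0
    token = None
    while True:
        ids, token = list_page(token)
        for fid in ids:
            delete_file(fid)
            deleted += 1
        if token is None:
            return deleted
```

With `google-api-python-client`, for example, `list_page` could wrap `service.files().list(q="name contains 'chunk'", pageToken=token).execute()` and `delete_file` could wrap `service.files().delete(fileId=fid).execute()` — but test against a handful of files (or pass a `delete_file` that only prints) before turning real deletion loose.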