
KiaraEvirm

Reputation Activity

  1. Like
    KiaraEvirm reacted to AMCross in Licence question   
decided to do away with Home Server 2011 and installed Windows 10 Pro

so much faster using an i7 and 32 GB RAM, but I forgot to unlicence DrivePool and Scanner

can I reuse the licence or do I need a new one?
  2. Like
    KiaraEvirm reacted to Christopher (Drashna) in Amazon Cloud Drive - Why is it not supported?   
    Well, as it took 2 weeks and several emails originally ..... the answer is "I don't know". Sorry.
     
    We've had much better communication with Amazon lately (eg, they're actually replying).
     
    In fact, right now, the emails are about the exact API usage and clearing up some functionality, between Alex and Amazon.  So, things are looking good.   
     
     If you check the first post, Alex has updated it with the latest responses.
  3. Like
    KiaraEvirm reacted to Christopher (Drashna) in Why isn't XYZ Provider available?   
Once we have a new public release, then after that, we may add more providers.
     
    however, we do have a request for OpenDrive already.... 
     
    But ... they're having issues today apparently ("Error establishing database connection").... Ooops.
  4. Like
    KiaraEvirm reacted to RulliDidd in Google Drive: Server is throttling downloads   
    Hi
     
I'm using version 1.0.0.725 with Google Drive. Even with prefetch disabled and only 1 download thread I constantly get "Server is throttling downloads."
     
When looking at the service log I can see a bunch of:  [ApiGoogleDrive:101] Google Drive returned error (rateLimitExceeded): Rate Limit Exceeded
     
    Is there something I'm doing wrong?
     
     
  5. Like
    KiaraEvirm reacted to Alex in Large Chunks and the I/O Manager   
    In this post I'm going to talk about the new large storage chunk support in StableBit CloudDrive 1.0.0.421 BETA, why that's important, and how StableBit CloudDrive manages provider I/O overall.
     
    Want to Download it?
     
    Currently, StableBit CloudDrive 1.0.0.421 BETA is an internal development build and like any of our internal builds, if you'd like, you can download it (or most likely a newer build) here:
    http://wiki.covecube.com/Downloads
     
    The I/O Manager
     
Before I start talking about chunks and what the current change actually means, let's talk a bit about how StableBit CloudDrive handles provider I/O. Well, first let's define what provider I/O actually is. Provider I/O is the combination of all of the read and write requests (or download and upload requests) that are serviced by your provider of choice. For example, if your cloud drive is storing data in Amazon S3, provider I/O consists of all of the download and upload requests from and to Amazon S3.
     
Now it's important to differentiate provider I/O from cloud drive I/O because provider I/O is not really the same thing as cloud drive I/O. That's because all I/O to and from the drive itself is performed directly in the kernel by our cloud drive driver (cloudfs_disk.sys). But as a result of some cloud drive I/O, provider I/O can be generated. For example, this happens when there is an incoming read request to the drive for some data that is not stored in the local cache. In this case, the kernel driver cooperatively coordinates with the StableBit CloudDrive system service in order to generate provider I/O and to complete the incoming read request in a timely manner.
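That cache-miss read path can be sketched in miniature. All names here are hypothetical illustrations; the real logic lives in the kernel driver (cloudfs_disk.sys) and the StableBit CloudDrive system service:

```python
# Toy sketch of the cache-miss read path described above (hypothetical names).

class LocalCache:
    """Byte-range cache keyed by offset (simplified: whole ranges only)."""
    def __init__(self):
        self.ranges = {}

    def get(self, offset, length):
        data = self.ranges.get(offset)
        return data if data is not None and len(data) == length else None

    def put(self, offset, data):
        self.ranges[offset] = data


def read_from_drive(offset, length, cache, provider_download):
    """Serve a drive read; generate provider I/O only on a cache miss."""
    data = cache.get(offset, length)
    if data is not None:
        return data                               # fast path: no provider I/O
    data = provider_download(offset, length)      # provider read I/O generated
    cache.put(offset, data)                       # populate cache for next time
    return data
```

A second read of the same range is then satisfied locally, with no provider request at all.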
     
    All provider I/O is serviced by the I/O Manager, which lives in the StableBit CloudDrive system service.
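One of the I/O Manager duties listed below is coalescing incoming provider requests into larger ones. A toy sketch of that idea, with illustrative names rather than CloudDrive's actual implementation:

```python
# Merge adjacent (offset, length) requests into larger ones, up to a cap.
# Purely illustrative; the real I/O Manager also parallelizes and retries.

def coalesce(requests, max_size):
    """Merge adjacent (offset, length) read requests into larger ones."""
    merged = []
    for off, ln in sorted(requests):
        prev_off, prev_ln = merged[-1] if merged else (None, None)
        if merged and off == prev_off + prev_ln and prev_ln + ln <= max_size:
            merged[-1] = (prev_off, prev_ln + ln)   # extend previous request
        else:
            merged.append((off, ln))                # start a new request
    return merged
```

For example, two back-to-back 10-byte reads become one 20-byte provider request, while a read at a distant offset stays separate.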
     
    Particularly, the I/O Manager is responsible for:
  • As an optimization, coalescing incoming provider read and write I/O requests into larger requests.
  • Parallelizing all provider read and write I/O requests using multiple threads.
  • Retrying failed provider I/O operations.
  • Error handling and error reporting logic.

Chunks
     
    Now that I've described a little bit about the I/O manager in StableBit CloudDrive, let's talk chunks. StableBit CloudDrive doesn't inherently work with any types of chunks. They are simply the format in which data is stored by your provider of choice. They are an implementation that exists completely outside of the I/O manager, and provide some convenient functions that all chunked providers can use.
     
    How do Chunks Work?
     
    When a chunked cloud provider is first initialized, it is asked about its capabilities, such as whether it can perform partial reads, whether the remote server is performing proper multi-threaded transactional synchronization, etc... In other words, the chunking system needs to know how advanced the provider is, and based on those capabilities it will construct a custom chunk I/O processing pipeline for that particular provider.
     
    The chunk I/O pipeline provides automatic services for the provider such as:
  • Whole and partial caching of chunks for performance reasons.
  • Performing checksum insertion on write, and checksum verification on read.
  • Read or write (or both) transactional locking for cloud providers that require it (for example, never try to read chunk 458 when chunk 458 is being written to).
  • Translation of I/O that would end up being a partial chunk read / write request into a whole chunk read / write request, for providers that require this. This is actually very complicated.
      • If a partial chunk needs to be read, and the provider doesn't support partial reads, the whole chunk is read (and possibly cached) and only the part needed is returned.
      • If a partial chunk needs to be written, and the provider doesn't support partial writes, then the whole chunk is downloaded (or retrieved from the cache), only the part that needs to be written to is updated, and the whole chunk is written back.
      • If, while this is happening, another partial write request comes in for the same chunk (in parallel, on a different thread), and we're still in the process of reading that whole chunk, then coalesce the [whole read -> partial write -> whole write] into [whole read -> multiple partial writes -> whole write]. This is purely done as an optimization and is also very complicated.

And in the future the chunk I/O processing pipeline can be extended to support other services as the need arises.

Large Chunk Support

Speaking of extending the chunk I/O pipeline, that's exactly what happened recently with the addition of large chunk support (> 1 MB) for most cloud providers.
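The read-modify-write translation described earlier (whole chunk read, patch the part, whole chunk write, for providers without partial writes) can be sketched as follows; the store and helper names are hypothetical:

```python
# Toy sketch of partial-write translation for a provider that only
# supports whole-chunk I/O. Illustrative only, not CloudDrive's code.

CHUNK_SIZE = 1024 * 1024  # 1 MB chunks, as in the pre-large-chunk era


class WholeChunkStore:
    """Fake provider that only supports whole-chunk download/upload."""
    def __init__(self):
        self.chunks = {}

    def download(self, chunk_id):
        return self.chunks.get(chunk_id, b"\x00" * CHUNK_SIZE)

    def upload(self, chunk_id, data):
        self.chunks[chunk_id] = data


def partial_write(store, chunk_id, offset_in_chunk, data):
    """Apply a partial write by rewriting the whole chunk."""
    chunk = bytearray(store.download(chunk_id))              # whole chunk read
    chunk[offset_in_chunk:offset_in_chunk + len(data)] = data  # patch the part
    store.upload(chunk_id, bytes(chunk))                     # whole chunk write
```

Note that a 5-byte write here still costs a full chunk download plus a full chunk upload, which is exactly why the coalescing of concurrent partial writes described above matters.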
     
Previously, most cloud providers were limited to a maximum chunk size of 1 MB. This limit was in place because:
  • Cloud drive read I/O requests, which can't be satisfied by the local cache, would generate provider read I/O that needed to be satisfied fairly quickly. For providers that didn't support partial reads, this meant that the entire chunk needed to be downloaded at all times, no matter how much data was being read.
  • Additionally, if checksumming was enabled (which would be typical), then by necessity, only whole chunks could be read and written.

This had some disadvantages, mostly for users with fast broadband connections:
  • Writing a lot of data to the provider would generate a lot of upload requests very quickly (one request per 1 MB uploaded). This wasn't optimal because each request would add some overhead.
  • Generating a lot of upload requests very quickly was also an issue for some cloud providers that were limiting their users based on the number of requests per second, rather than the total bandwidth used. Using smaller chunks with a fast broadband connection and a lot of threads would generate a lot of requests per second.

Now, with large chunk support (up to 100 MB per chunk in most cases), we don't have those disadvantages.
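As a back-of-the-envelope illustration of that request overhead (assuming one upload request per chunk, and ignoring retries and metadata):

```python
# Number of upload requests needed to store 10 GiB of data at a given
# chunk size, assuming one request per chunk.

GIB = 1024 ** 3
MIB = 1024 ** 2

def upload_requests(total_bytes, chunk_bytes):
    return -(-total_bytes // chunk_bytes)   # ceiling division

print(upload_requests(10 * GIB, 1 * MIB))     # 10240 requests at 1 MB chunks
print(upload_requests(10 * GIB, 100 * MIB))   # 103 requests at 100 MB chunks
```

Two orders of magnitude fewer requests for the same data, which matters most on providers that throttle by requests per second.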
     
What was changed to allow for large chunks?
  • In order to support the new large chunks, a provider has to support partial reads. That's because it's still very necessary to ensure that all cloud drive read I/O is serviced quickly.
  • Support for a new block based checksumming algorithm was introduced into the chunk I/O pipeline. With this new algorithm it's no longer necessary to read or write whole chunks in order to get checksumming support. This was crucial because it is very important to verify that your data in the cloud is not corrupted, and turning off checksumming wasn't a very good option.

Are there any disadvantages to large chunks?
  • If you're concerned about using the least possible amount of bandwidth (as opposed to making fewer provider calls per second), it may be advantageous to use smaller chunks. If you know for sure that you will be storing relatively small files (1-2 MB per file or less) and you will only be updating a few files at a time, there may be less overhead when you use smaller chunks.
  • For providers that don't support partial writes (most cloud providers), larger chunks are more optimal if most of your files are > 1 MB, or if you will be updating a lot of smaller files all at the same time.
  • As far as checksumming, the new algorithm completely supersedes the old one and is enabled by default on all new cloud drives. It really has no disadvantages over the old algorithm. Older cloud drives will continue to use the old checksumming algorithm, which really only has the disadvantage of not supporting large chunks.

Which providers currently support large chunks?
I don't want to post a list here because it would become obsolete as we add more providers. When you create a new cloud drive, look under Advanced Settings. If you see a storage chunk size > 1 MB available, then that provider supports large chunks.
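To illustrate the general idea of block based checksumming (a toy version, not StableBit's actual algorithm or parameters): because each fixed-size block carries its own checksum, a partial read can be verified without fetching and hashing the whole chunk.

```python
# Toy block-based checksumming: interleave a CRC32 after every block on
# write, verify per block on read. Block size and CRC32 are assumptions
# for illustration only.
import struct
import zlib

BLOCK = 64 * 1024  # hypothetical block size

def add_checksums(data):
    """On write: append a CRC32 after every block."""
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        out += block + struct.pack("<I", zlib.crc32(block))
    return bytes(out)

def verify_and_strip(stored):
    """On read: verify each block's CRC32; raise on corruption."""
    out = bytearray()
    step = BLOCK + 4
    for i in range(0, len(stored), step):
        rec = stored[i:i + step]
        block, crc = rec[:-4], rec[-4:]
        if struct.pack("<I", zlib.crc32(block)) != crc:
            raise IOError("checksum mismatch: cloud data corrupted")
        out += block
    return bytes(out)
```

With whole-chunk checksumming, verifying any byte of a 100 MB chunk means downloading all 100 MB; with per-block checksums only the blocks covering the requested range need to be fetched and checked.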
    Going Chunk-less in the Future?
     
I should mention that StableBit CloudDrive doesn't actually require chunks. If it at all makes sense for a particular provider, it really doesn't have to store its data in chunks. For example, it's entirely possible that StableBit CloudDrive will have a provider that stores its data in a VHDX file. Sure, why not. For that matter, StableBit CloudDrive providers don't even need to store their data in files at all. I can imagine writing providers that can store their data on an email server or an NNTP server (a bit of a stretch, yes, but theoretically possible).
     
    In fact, the only thing that StableBit CloudDrive really needs is some system that can save data and later retrieve it (in a timely manner). In that sense, StableBit CloudDrive can be a general purpose drive virtualization solution.
  6. Like
    KiaraEvirm reacted to pettakos in Unable to delete a second pool   
I recently created a second pool on the same hardware, under the same license, consisting of a single spare drive that I had lying around. Today I decided to delete the second pool and keep the initial one. How am I supposed to do that? Is there an option somewhere? I tried to remove the drive and nothing happens. BTW, there isn't any data in the pool or on the drive, so if a force removal is indicated I wouldn't mind at all.
    Any help would be greatly appreciated.
  7. Like
    KiaraEvirm reacted to CalvinNL in Extremely poor download performance with Google Drive   
    Hi there,
     
    I have been using StableBit CloudDrive for around a month on a dedicated server.
    The upload speed is always between 0 and 200 Mbit/s per drive with an average of 150 Mbit/s.
Since build 1.0.0.753 BETA the download speed maxes out at 10 Mbit/s, which results in stuttering videos in Plex and extremely slow copying.
     
    The number of threads doesn't seem to matter, I tried 2, but I also tried 5, 10 and 20. 
    I also tried another Google account, but that also doesn't help.
     
I tried both a drive block size of 10 and 20 MB, but it really seems to be a bug in the software, because changing settings doesn't help a bit; the download speed does not improve.
     
    - Calvin
  8. Like
    KiaraEvirm reacted to dasani3x in CloudDrive Nearly Unusable Recently (Google Drive)   
Hello all, first of all, I am a huge fan of the software and will be a paying customer on day one. However, I feel the need to point out how much the performance of my CloudDrive has slipped over the past week or two. I am using Google Drive, and it seems like every time I update to the newest beta, I get great performance for 12-24 hours: 400 mb/s up and no issues downloading and streaming over Plex. After that, it becomes nearly unusable. This is on version 1.0.0.756:
     

     
    I haven't had nearly that level of errors before, but now I find the drive unmounted after an hour or two worth of work, and data rarely actually uploads, usually 1.5 mb/s at a time. 
     
     
    After updating to 1.0.0.757, I am back to better speed with no errors.

     
     
    Is there anything I can be doing to have more consistent speeds and reliability?
  9. Like
    KiaraEvirm reacted to Christopher (Drashna) in Why isn't XYZ Provider available?   
    Not every Cloud Provider is going to be listed. And that's okay.

    For the public beta, we focused on the most prolific and popular cloud storage providers.

    If you don't see a specific provider in StableBit CloudDrive, let us know and we'll look into it.
    If you can provide a link to the SDK/API, it would be helpful, but it's not necessary.
     
Just because you see it listed here does not mean we will add the provider. Whether we add them or not depends on a number of factors, including time to develop them, stability, reliability, and functionality, among other factors.

    Providers already requested:
  • Mega https://stablebit.com/Admin/IssueAnalysis/15659
  • SharePoint https://stablebit.com/Admin/IssueAnalysis/16678
  • WebDAV
  • IceDrive
  • OwnCloud (SabreDAV) https://stablebit.com/Admin/IssueAnalysis/16679
  • OpenStack Swift https://stablebit.com/Admin/IssueAnalysis/17692
  • OpenDrive https://stablebit.com/Admin/IssueAnalysis/17732 - Added, but the free tier is not suitable for use with StableBit CloudDrive.
  • Yandex.Disk https://stablebit.com/Admin/IssueAnalysis/20833
  • EMC Atmos https://stablebit.com/Admin/IssueAnalysis/25926
  • Strato HiDrive https://stablebit.com/Admin/IssueAnalysis/25959
  • Citrix ShareFile https://stablebit.com/Admin/IssueAnalysis/27082
  • Email support (IMAP, maybe POP) https://stablebit.com/Admin/IssueAnalysis/27124 - May not be reliable, as this heavily depends on the amount of space that the provider allows. And some providers may prune messages that are too old, go over the quota, etc.
  • JottaCloud https://stablebit.com/Admin/IssueAnalysis/27327 - May not be usable, as there is no publicly documented API.
  • FileRun https://stablebit.com/Admin/IssueAnalysis/27383 - FileRun is a self-hosted, PHP based cloud storage solution. The free version is limited to 3 users; enterprise/business licensing for more users/features is available. JSON based API.
  • SpiderOak https://stablebit.com/Admin/IssueAnalysis/27532 - JSON based API. Privacy minded.
  • pCloud https://stablebit.com/Admin/IssueAnalysis/27939 - JSON based API.
  • OVH Cloud https://stablebit.com/Admin/IssueAnalysis/28204
  • StorJ https://stablebit.com/Admin/IssueAnalysis/28364 - Providers like tardigrade.io appear to use this library/API.
  • iDrive - Amazon S3 compatible API. No need for a separate provider.
  • ASUS WebStorage https://stablebit.com/Admin/IssueAnalysis/28407 - The documentation is difficult to read, making it hard to tell if support is possible.
  • Apple iCloud https://stablebit.com/Admin/IssueAnalysis/28548
  • uptobox https://stablebit.com/Admin/IssueAnalysis/28633
  • sCloud https://stablebit.com/Admin/IssueAnalysis/28650

Providers that will not be added:
  • Degoo - No publicly accessible API. Without an API that supports reading and writing files to the provider, there is no possibility of adding support for this provider.
  • Sync.com - No publicly accessible API. Without an API that supports reading and writing files to the provider, there is no possibility of adding support for this provider.
  • Amazon Glacier https://stablebit.com/Admin/IssueAnalysis/16676 - There is a 4+ hour wait time to access uploaded data. This makes Amazon Glacier completely unusable for us. We couldn't even perform upload verification on content due to this limitation.
  • HubiC https://stablebit.com/Admin/IssueAnalysis/16677 - It's OpenStack; no need for a separate provider.
  • CrashPlan? https://stablebit.com/Admin/IssueAnalysis/15664 - The API provided appears to be mostly for monitoring and maintenance. No upload/download calls, so not suitable for StableBit CloudDrive, unfortunately.
  • MediaFire - Not suitable due to stability issues.
  • Thunderdrive - No publicly accessible API.
  • LiveDrive - No publicly accessible API.
  • Zoolz - No publicly accessible (developer) API.
  • Proton Drive - No publicly accessible API.
  10. Like
    KiaraEvirm reacted to Freppa in Stablebit "overupload" data   
    This is a topic I think have been discussed before but is still an issue on my setup.
If I upload 1 GB of data, it takes three to five times as long as if I was uploading directly utilizing the same speed.
I looked at the "technical details" and noticed something peculiar - progress on some threads was over 100% in completion. Why would that be?

  11. Like
    KiaraEvirm reacted to hansolo77 in Building new server from scratch!   
Hey all!  I've been working on learning how to set up and install WSE 2012 R2 and I think I'm finally there.  However, I'm feeling very limited in my capacities as my case is overflowing with drives.  So I'm going to be building a new server from the ground up and would like some advice and suggestions as to what I should get.  I'm only a part-time worker and don't really make a lot of money, so the purchasing timeline is going to be really long.  So far, I know for certain that I'm going to buy the Norco 4224 case.  I've read reviews and it has been recommended time and time again.  The only issue with this case appears to be the fans, and the potential for the backplanes to be DOA.  But at the price, it's a steal compared to other similar cases.  Plus, I'm going to get it through Newegg, as they're the cheapest place around, and they offer really quick RMAs for exchanges if there's something wrong with it.
     
    Now I'm at the point of internals.  The first order of business is the motherboard.  Form factor isn't really an issue as the Norco case fully supports a whole range.  Individual features are where I'm struggling.  I know that I want to have room for expansion.  So I'm kinda staying away from those Mini ITX boards, since they, for the most part, all seem to have only 1 expansion slot.  I already have 1 SAS controller, and plan on getting an expander.  So that would be 2 slots.  As for the processor, I'm not sure what I need.  My usage scenario is a simple home file server for client backups and media streaming.  So I suppose I don't need anything major.  The same goes for RAM. 
     
    As it is right now, I'm thinking about getting one of these:

    SUPERMICRO MBD-X9SCL-F-O LGA 1155 Intel C202 Micro ATX Intel Xeon E3 Server Motherboard
    or
    SUPERMICRO MBD-A1SAM-2550F-O uATX Server Motherboard FCBGA 1283 DDR3 1600/1333
     
The first board is nice, in that it has plenty of expansion slots, support for Xeon processors, and is cheap.  However, it only supports ECC memory of max 32 GB and SATA 3.0 Gb/s.  The second board is nice, in that it has just enough expansion slots for my needs (controller and expander), already comes with a processor, supports (but does not require) ECC memory of max 64 GB, and has 2x SATA 6.0 Gb/s, but is a little more expensive.
     
    So which board should I get?  Integrated CPU and more RAM, or more slots, less RAM, and mandatory ECC?  Or should I look at something else?  What are your suggestions?

    EDIT ->  I just looked at and am now also adding this contender:
    SUPERMICRO MBD-A1SAM-2550F-O uATX Server Motherboard FCBGA 1283 DDR3 1600/1333
     
It's got more of everything: more SATA 6.0 Gb/s ports, TONS more RAM, support for a faster CPU, and it is more expensive.  To put it in perspective, this new board would probably take a month to save up for.
  12. Like
    KiaraEvirm reacted to Alex in SSD Optimizer Balancing Plugin   
    I've just finished coding a new balancing plugin for StableBit DrivePool, it's called the SSD Optimizer. This was actually a feature request, so here you go.
     
    I know that a lot of people use the Archive Optimizer plugin with SSD drives and I would like this plugin to replace the Archive Optimizer for that kind of balancing scenario.
     
    The idea behind the SSD Optimizer (as it was with the Archive Optimizer) is that if you have one or more SSDs in your pool, they can serve as "landing zones" for new files, and those files will later be automatically migrated to your slower spinning drives. Thus your SSDs would serve as a kind of super fast write buffer for the pool.
     
    The new functionality of the SSD Optimizer is that now it's able to fill your Archive disks one at a time, like the Ordered File Placement plugin, but with support for SSD "Feeder" disks.
     
    Check out the attached screenshot of what it looks like
     
    Notes: http://dl.covecube.com/DrivePoolBalancingPlugins/SsdOptimizer/Notes.txt
    Download: http://stablebit.com/DrivePool/Plugins
     
    Edit: Now up on stablebit.com, link updated.

  13. Like
    KiaraEvirm reacted to happydish in Stable Bit Twitter Giveaway cheating   
    So something I've noticed recently, is that some of the 'Tweets' for the stablebit giveaway seem to be from people with multiple accounts.  Most of the tweets are from accounts with 0 followers, 0 following, and only tweet once a week for the contest.
    Exhibit A
     
3 accounts, named @555Helo, @helo3212 and @123helo2.  All have 0 following and 0 followers, and no tweets other than entering the contest each week, every week, within a few minutes of each other, along with a dozen other accounts.  I wouldn't be surprised if they are all the same person or bot, but the 3 helo's are obviously tied together.
     
    Exhibit B

These same 3 accounts were all created and started tweeting on the same day.
     
     
If you look at the tweet 'spread', it's pretty evenly spaced throughout the week, normally 4-6 tweets a day spread out.  But taking a look at the tweets from this morning shows a different story: 14 tweets between 9:14AM and 9:20AM, all from 'suspicious' accounts; only two of these accounts have any following or followers (@A_steelbourne with 1 follower and @pittsa with 1 follower and 3 following).  Obviously someone is trying to game the system; I wouldn't be that surprised if it is just one person with 14 accounts, given the proximity of the tweeting times.  Is there any verification with the giveaway that the accounts are real?  I think the giveaway is a fantastic idea, to get exposure for StableBit while letting people have a chance to receive keys, but this abuse seems pretty bad.
     
    Thanks for reading this!
    -e
  14. Like
    KiaraEvirm reacted to Diablosblizz in DrivePool Prevents Write While Disk is Missing   
    Hi,
     
I've been a long time DrivePool user and love the software, but I recently noticed something that I'm not sure was intended. I have one bad disk that frequently keeps disconnecting from the machine. I have replaced the disk, but the damn thing keeps disconnecting; I just haven't looked into it further, as doing a "Scan for Hardware" in Device Manager usually brings it back.

    Anyways, I noticed that when the disk is missing I cannot write or delete files from the DrivePool disk (in this case M:\). If I try, I get an error message stating that I need permission from Administrators on the local machine. I almost went nuts trying to figure out the permissions within Windows thinking something was wrong, simply because the Administrators group on the machine was the owner of the drive and all subsequent folders. Reads from the drive still work as expected. I can replicate this by finding any drive in Disk Management on the machine, right clicking and then clicking Offline to turn off the disk (at least in Windows). Once I bring it back online, it works again.

    Is this intended? As mentioned, I almost went absolutely crazy trying to figure the permissions out. The thing that made me look at DrivePool was that if I rebooted the machine, it usually worked for a little while before disconnecting again. I tried transferring files, and at roughly the same time I got an email stating the drive was missing which made me think maybe it wasn't permissions. As soon as it was back online, I could write to it again.
     
If it is intended, is somebody able to walk me through why it is the way it is (I'm sure there is a good reason for it), and whether it's possible to disable?

    Thanks!
  15. Like
    KiaraEvirm reacted to Reptile in +1 for GoogleDrive for Work support   
Google Drive for Work means unlimited storage for about 40 dollars a month. And even normal Google Drive accounts could be pooled together with DrivePool. And nothing stops you from having multiple Google accounts, right?
Furthermore, Google has amazing speed. I get around 220 Mbit/s. Yes, on gigabit fiber Google allows syncing at up to 250 Mbit/s. It would be wonderful to have Google Drive support.
     
Fast, affordable, unlimited.
    Is there a beta release supporting this provider already?
     
    Yours sincerely 
     
    Reptile
     
     
    Edit:

  16. Like
    KiaraEvirm reacted to evulhotdog in Amazon Cloud Drive - Why is it not supported?   
I've read that you don't need the other four users to qualify. It says it's a requirement, but when actually using the service you have unlimited storage regardless.
  17. Like
    KiaraEvirm reacted to Statikk in ACD recognizing some chunks as photos   
    When I log in from the ACD web app some of my chunk files are showing up under the photos section.  I guess my concern is that at some point amazon may treat these files differently than files that it sees as misc data files and I'll end up with missing chunks.  Should I be concerned?
  18. Like
    KiaraEvirm reacted to santacruzskim in To What Degree do DrivePool and Scanner Work Together?   
    I recently had Scanner flag a disk as containing "unreadable" sectors.  I went into the UI and ran the file scan utility to identify which files, if any, had been damaged by the 48 bad sectors Scanner had identified.  Turns out all 48 sectors were part of the same (1) ~1.5GB video file, which had become corrupted.
     
    As Scanner spent the following hours scrubbing all over the platters of this fairly new WD RED spinner in an attempt to recover the data, it dawned on me that my injured file was part of a redundant pool, courtesy of DrivePool.  Meaning, a perfectly good copy of the file was sitting 1 disk over.
     
    SO...
  • Is Scanner not aware of this file?
  • What is the best way to handle this manually if the file cannot be recovered? Should I manually delete the file and let DrivePool figure out the discrepancy and re-duplicate the file onto a healthy set of sectors on another drive in the pool? Should I overwrite the bad file with the good one?

IN A PERFECT WORLD, I WOULD LOVE TO SEE...
  • Scanner identifies the bad sectors, checks to see if any files were damaged, and presents that information to the user. (Currently, I was alerted to possible issues, manually started a scan, was told there may be damaged files, manually started a file scan, and then was presented with the list of damaged files.)
  • At this point, the user can take action with a list of options which, in one way or another, allow the user to:
      • Flag the sectors in question as bad so no future data is written to them (remapped).
      • Automatically (with user authority) create a new copy of the damaged file(s) using a healthy copy found in the same pool.
      • Attempt to recover the damaged file (with a warning that this could be a very lengthy operation).

Thanks for your ears and some really great software.  Would love to see what the developers and community think about this, as I'm sure it's been discussed before, but I couldn't find anything relevant in the forums.
  19. Like
    KiaraEvirm reacted to Boody in Recent purchase, activation is   
    Wow, apologize for the topic error.  It's supposed to read "Recent purchase, activation error"
     
    I purchased the entire suite of products and now when I install and enter my activation ID, I get an invalid activation ID error and it asks me to verify it was typed correctly.
     
    Any ideas? 
     
    Thanks!
  20. Like
    KiaraEvirm reacted to Mavol in No Upload since Dec 20th. Downloading still works fine. ACD .784   
    Hello!

    So i'm at a loss of what to do here. I am using the experimental ACD provider and everything was going very well up until a couple weeks ago.

1st, whenever my server would do a normal scheduled restart for Windows updates, or a reboot done by me, the StableBit drive would always need to do recovery, and it would take about 8 hours since I have such a large cache, of course, because my lame ISP only has 6 Mbps max upload (disgusting). Any way to fix this?


2nd, no uploads have happened since December 20th. When I log in to Amazon Cloud Drive through the web browser, though, there are files in the "recent files" area from the 28th. When I go to all files, though, the last file I can see when sorted by date is from the 20th, as pictured below. Not sure what that is about. The upload graphic in the StableBit software never changes; it just sits at 0 bps forever. I can still access all files and download all files as normal. I was then using the version of StableBit previous to .784, so just two days ago I updated to .784, but no luck.

    I should also mention, that I'm still able to upload through the amazon web app and desktop app as normal as well.


  21. Like
    KiaraEvirm reacted to Joel Kåberg in Poor performance Google Drive (not rate limit)   
I'm seeing poor upload performance with GD; it goes somewhere between 0 and 150 Mbit/s (on average 50-70 Mbit/s) but very rarely above that.
    Here's my settings https://goo.gl/iXdlWc | https://goo.gl/H5T8G7
     
A regular Speedtest https://goo.gl/iRjC8o
     
    Looking over the logs I can't see any Rate limit or similar, just a bunch of Write chunk
     
     
    My workflow
    Ubuntu server with samba (main storage for now)->1gbit lan->Windows 10 PC with clouddrive->google drive
     
    So I take it this is something related to my ISP or something between me and Google?
     
I'm investigating now whether my data gets uploaded to EU or US datacenters (I'm located in the EU). Uploaded to EU/Amsterdam.
     
For comparison, I can saturate my bandwidth with the same Google Drive using rclone, so this must be a CloudDrive issue.
     
One thing that I've noticed is when I pause the transfer to the CloudDrive (and CloudDrive just sits uploading) I'm seeing a lot better bandwidth usage (nearly 100%). I've now tried several (SSD) disks on different PCs but they all suffer the same fate.

So I think I've got this down to I/O issues; it seems better without encryption, but I'd love having that. Task Manager gives a pretty good hint as to the disk being excessively used (I'm assuming due to the cache in CloudDrive). Perhaps this needs some attention; also, a RAM option would be nice here (for those of us with plenty of it). I tried creating a RAM drive to see if that helped (to use as a cache drive) but CloudDrive won't recognize it.
  22. Like
    KiaraEvirm reacted to w00ki3 in Is there any plans to support EMC Atmos   
    Hi,
     
    Are there any plans to support the EMC Atmos storage?
  23. Like
    KiaraEvirm reacted to Christopher (Drashna) in There is no OneDrive Provider?   
    In case you are wondering why we have chosen to not include a cloud provider for OneDrive, there is a very good reason for this.
     
    Namely, the service is heavily throttled, which makes it nigh unusable for CloudDrive. 
    Additionally, I believe that we have had stability issues with their API (Alex will have to confirm that). 
     
    Because of this, we have elected to not include OneDrive support at this time.
    At such a time that they lift the bandwidth limits, we will re-assess this.
     
     
    We do allow the option to enable the provider, by using the advanced settings, if you are very curious.
    http://wiki.covecube.com/StableBit_CloudDrive_Advanced_Settings#ProviderRegistry
     
  24. Like
    KiaraEvirm reacted to ConnectionProblem in Fix File Permissions Windows 10   
    First of all been using your products for 3 years and they are great. Thank you for your support of a niche capability that I loved about WHS.
     
    I recently reformatted, installed Windows 10, and restored my pool, but my file permissions are a little messed up. Writing to folders (or their child folders) that existed pre-reinstall requires User Access Control confirmation, and some programs pop an error saying I don't have access to write to them.
     
    Thank you
  25. Like
    KiaraEvirm reacted to Alex in Amazon Cloud Drive - Why is it not supported?   
    Some people have mentioned that we're not transparent enough with regards to what's going on with Amazon Cloud Drive support. That was definitely not the intention, and if that's how you guys feel, then I'm sorry about that and I want to change that. We really have nothing to gain by keeping any of this secret.
     
    Here, you will find a timeline of exactly what's happening with our ongoing communications with the Amazon Cloud Drive team, and I will keep this thread up to date as things develop.
     
    Timeline:
    On May 28th the first public BETA of StableBit CloudDrive was released without Amazon Cloud Drive production access enabled. At the time, I thought that the semi-automated whitelisting process that we went through was "Production Access". While this is similar to other providers, like Dropbox, it became apparent that for Amazon Cloud Drive, it's not. Upon closer reading of their documentation, it appears that the whitelisting process actually imposed "Developer Access" on us. To get upgraded to "Production Access", Amazon Cloud Drive requires direct email communication with the Amazon Cloud Drive team.
    On Aug. 15th we submitted our application for approval for production access over email.
    On Aug. 24th I wrote in requesting a status update, because no one had replied to me, so I had no idea whether the email had been read or not.
    On Aug. 27th I finally got an email back letting me know that our application was approved for production use. I was pleasantly surprised.
    On Sept. 1st, after some testing, I wrote another email to Amazon letting them know that we were having issues with Amazon not respecting the Content-Type of uploads, and that we were also having issues with the new permission scopes that had been changed since we initially submitted our application for approval. No one answered this particular email until...
    On Sept. 7th I received a panicked email from Amazon addressed to me (with CCs to other Amazon addresses) letting me know that Amazon was seeing unusual call patterns from one of our users.
    On Sept. 11th I replied explaining that we do not keep track of what our customers are doing, and that our software can scale very well, as long as the server permits it and the network bandwidth is sufficient. Our software does respect 429 throttling responses from the server and it does perform exponential backoff, as is standard practice in such cases. Nevertheless, I offered to limit the number of threads that we use, or to apply any other limits that Amazon deems necessary on the client side. I highlighted on this page the key question that really needs to be answered. Also, please note that obviously Amazon knows who this user is, since they have to be their customer in order to be logged into Amazon Cloud Drive. Also note that instead of banning or throttling that particular customer, Amazon chose to block the entire user base of our application.
    On Sept. 16th I received a response from Amazon.
    On Sept. 21st I hadn't heard anything back yet, so I sent them this.
    I waited until Oct. 29th and no one answered. At that point I informed them that we were going ahead with the next public BETA regardless.
    Some time in the beginning of November an Amazon employee, on the Amazon forums, started claiming that we weren't answering our emails. This was amidst their users asking why Amazon Cloud Drive is not supported with StableBit CloudDrive. So I did post a reply to that, listing a similar timeline.
    On Nov. 11th I sent another email to them.
    On Nov. 13th Amazon finally replied. I redacted the limits; I don't know if they want those public.
    On Nov. 19th I sent them another email asking them to clarify what the limits mean exactly.
    On Nov. 25th Amazon replied with some questions and concerns regarding the number of API calls per second.
    On Dec. 2nd I replied with the answers and a new build that implemented large chunk support. This was a fairly complicated change to our code, designed to minimize the number of calls per second for large file uploads. You can read my Nuts & Bolts post on large chunk support here: http://community.covecube.com/index.php?/topic/1622-large-chunks-and-the-io-manager/
    On Dec. 10th 2015, Amazon replied. I have no idea what they mean regarding the encrypted data size. AES-CBC is a 1:1 encryption scheme: some number of bytes go in, and the same exact number of bytes come out, encrypted. We do have some minor overhead for the checksum / authentication signature at the end of every 1 MB unit of data, but that's at most 64 bytes per 1 MB when using HMAC-SHA512 (which is 0.006 % overhead). You can easily verify this by creating an encrypted cloud drive of some size, filling it up to capacity, and then checking how much data is used on the server.
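    As a quick sanity check on the 0.006 % figure above, here is a small calculation. This is only a sketch based on the numbers quoted in the post (a 64-byte HMAC-SHA512 tag per 1 MB unit, with AES-CBC being length-preserving); the function name is made up for illustration:

```python
# Sanity check for the ~0.006 % encryption overhead quoted above.
# Assumptions (from the post): AES-CBC is 1:1 (length-preserving), and each
# 1 MB unit of data carries a 64-byte HMAC-SHA512 authentication tag.
MIB = 1024 * 1024          # one unit of data, in bytes
HMAC_SHA512_BYTES = 64     # HMAC-SHA512 digest length

def tag_overhead_bytes(payload_bytes: int) -> int:
    """Total authentication-tag bytes for a payload, one tag per (partial) unit."""
    units = -(-payload_bytes // MIB)  # ceiling division
    return units * HMAC_SHA512_BYTES

five_gib = 5 * 1024 * MIB
overhead = tag_overhead_bytes(five_gib)
print(overhead, "bytes")                     # 327680 bytes (320 KiB of tags)
print(f"{100 * overhead / five_gib:.4f} %")  # 0.0061 %
```

    So a full 5 GB drive adds only about 320 KiB of authentication data on the server, which is why the usage screenshot below still reads 5 GB.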

    Here's the data for a 5 GB encrypted drive:

    [screenshot showing the drive's server-side usage]

    Yes, it's 5 GB.  
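    For reference, the "respect 429 responses and perform exponential backoff" behavior described in the timeline can be sketched roughly like this. This is a generic illustration of the standard practice, not StableBit's actual code; the function and parameter names are made up:

```python
import random
import time

def call_with_backoff(request, max_retries=6, base_delay=1.0):
    """Call `request` (a zero-argument function returning (status, body)) and
    retry on HTTP 429, sleeping base_delay * 2**attempt plus jitter between tries."""
    for attempt in range(max_retries):
        status, body = request()
        if status != 429:
            return status, body
        # Exponential backoff with jitter; a real client should also honor
        # a Retry-After header when the server provides one.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay / 2))
    raise RuntimeError(f"still throttled after {max_retries} attempts")
```

    With this pattern the client backs off for roughly 1 s, 2 s, 4 s, ... after successive 429 responses, which is what lets a well-behaved client scale up threads without hammering a throttling server.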
     
    To clarify the error issue here (I'm not 100% certain about this): Amazon doesn't provide a good way to ID these files. We have to try to upload them again, and then grab the error ID to get the actual file ID so we can update the file. This is inefficient, and would be solved with a more robust API that included search functionality, or a better file list call. So, this is basically "by design" and currently required. -Christopher
     
    Unfortunately, we haven't pursued this with Amazon recently, due to a number of big bugs that we have been following up on. However, these bugs have led to a lot of performance, stability and reliability fixes, and a lot of users have reported that these fixes have significantly improved the Amazon Cloud Drive provider. That is great to hear, as it may help get the provider to a stable, reliable state.
     
    That said, once we get to a more stable state (after the next public beta build (after 1.0.0.463) or a stable/RC release), we do plan on pursuing this again. But in the meantime, we have held off on this because we want to focus on the entire product rather than a single, problematic provider. -Christopher
     
    Amazon has "snuck in" additional guidelines that don't bode well for us: https://developer.amazon.com/public/apis/experience/cloud-drive/content/developer-guide
    "Don't build apps that encrypt customer data"
    What does this mean for us? We have no idea right now.  Hopefully, this is a guideline and not a hard rule (other apps allow encryption, so that's hopeful, at least). 
     
    But if we don't get re-approved, we'll deal with that when the time comes (though, we will push hard to get approval).
     
    - Christopher (Jan 15. 2017)
     
    If you haven't seen already, we've released a "gold" version of StableBit CloudDrive. Meaning that we have an official release! 
    Unfortunately, because of increasing issues with Amazon Cloud Drive that appear to be ENTIRELY server side (drive issues due to "429" errors, odd outages, etc.), and because we are STILL not approved for production status (despite sending off additional emails a month ago requesting approval, or at least an update), we have dropped support for Amazon Cloud Drive. 

    This does not impact existing users, as you will still be able to mount and use your existing drives. However, we have blocked the ability to create new drives for Amazon Cloud Drive.   
     
    This was not a decision that we made lightly, and while we don't regret this decision, we are saddened by it. We would have loved to come to some sort of outcome that included keeping full support for Amazon Cloud Drive. 
    -Christopher (May 17, 2017)