Showing results for tags 'feature request'.

Found 10 results

  1. Is this where I should be submitting feature requests? I ask because I have just gone through the forum threads about Google Drive API limits/throttling, having bumped up against the infamous userRateLimitExceeded errors, presumably after hitting Google's 750 GB per day upload limit. What I noticed was that once this rate limit is hit there isn't really anything for the application to do except cache writes until Google lifts the ban or the quota resets, yet the write attempts just keep hammering Google, which uses bandwidth unnecessarily. I was curious about the potential to simply stop making the attempts after a while and go dormant (though continuing to write data to the cache) until the throttle is lifted.

    I would imagine the logic would be something like this: CloudDrive starts receiving userRateLimitExceeded responses, so it puts itself into a local-caching-only mode (opt-in or by default, it doesn't really matter to me). It then sends some type of small "canary" upload every few minutes to test whether the Google Drive API and/or backend are accepting writes again, and starts writing full chunks again whenever applicable. Rinse, repeat. (A rough sketch of this back-off loop appears after this list.)

    I realize people have throttled their upload speed in the settings to make it impossible to hit the 750 GB per day quota, but in my tests, for what I am using CloudDrive for, I expect to stumble on this limit maybe 10% of the time. The other 90% of the time I want to be able to use the full bandwidth, so while an Mbps throttle helps 10% of the time, it ends up being an unnecessary bottleneck for the other 90%.

    Does this sound useful to people, or am I crazy? I don't mind hitting the limit from Google every once in a while, but I don't really understand why CloudDrive cannot be more efficient once it becomes clear that the upload quota has been reached. It looks like it keeps trying to write the same chunks over and over, sending the full chunks all the way to Google only for them to be denied at the door. For bandwidth efficiency, something like this could be helpful; but maybe this is just me trying to min/max the efficiency of the application in a rare situation. Thanks
  2. I've been using DrivePool for a couple of years now and I'm very happy. Since development has somewhat slowed down, I have an idea for a feature that I would pay for again; maybe call it DrivePool NextGen or whatever. Since SnapRAID is open source, would it not be possible to integrate it into DrivePool? Imagine you set certain drives as the SnapRAID volumes, with options, and then DrivePool would take care of all the things you currently need to script SnapRAID to do: creating parity, scrubbing, etc. (A sketch of the scripting this would replace appears after this list.) Right now I use pool duplication, but I think having a SnapRAID option would be much better. I tried doing this manually in the past but wasn't very happy with the workflow. What do you guys think? Would that be possible?
  3. Would it be possible to add a wrapper around the encryption key, similar to how BitLocker does it, so we can change the "password" without needing to re-encrypt the whole thing or depend on another program like BitLocker To Go? The main benefit is that once a drive gets big enough, say a terabyte or even a petabyte, the user could easily change their password without re-encrypting or transferring everything. Along with that, if the user's password is somehow exposed, the symmetric encryption key itself may still be safe as long as they change the password in time, basically providing an additional layer of security. This could be an option in the program like "use asymmetric key" when creating the drive. From some quick research, it looks like the way to accomplish this is to store the symmetric key that encrypts and decrypts the drive in the cloud itself (or somewhere else, such as the Windows key store where it is currently kept), and encrypt that key with a wrapping key that can easily be changed based on the password. (A minimal key-wrapping sketch appears after this list.)
  4. I have a 10 TB expandable drive hosted on Google Drive with a 2 TB cache. Whenever I add a large amount of data to the drive (for example, I added 1 TB yesterday), CloudDrive will occasionally stop all read and write operations to do "metadata pinning". This prevents Plex from doing anything while it does its thing, and yesterday it took over 3 hours. I don't want to disable metadata pinning, but I would like to be able to schedule it, if necessary, for 3 AM or something like that. In the meantime, is there a drawback to having such a large local cache? Would operations like metadata pinning and crash recovery improve if I decreased it?
  5. I have a recurring need to print the list of drives, including their bays, names, and/or serial numbers, to make it much easier to identify the drives when the server is shut down. If DrivePool were linked in and the pools were shown too, that would be a big help as well; or an additional print output from DrivePool with a list of pools and their member drives. (A stopgap script for pulling serial numbers appears after this list.) Thanks, we really like your tools. Obviously, as we're using all of them.
  6. I would like to be able to pin specific folders so they are always local. My use case would be to keep current TV shows local and old TV shows only in the cloud. I could probably use DrivePool to set up mirroring (I haven't used it yet), but built into CloudDrive it would be so much easier.
  7. Chris: To kick off (I haven't done a search to see if this has come up before), an initial idea: in Scanner, the volume map option is OK but could be a whole lot more useful. For what could be different, have a look at http://www.uderzo.it/main_products/space_sniffer/index.html; it looks very similar but is deceptively simple, and it could be enhanced to add some missing features. The major difference is that the display is live and you can interact with it, the filters are very useful, and you can use colour coding (although this could be more useful with extra options). Labels for file date etc. are also very handy. It's a live interface too, in that it will update as you change files, click through to lower directories, or open separate windows. If this or something similar could be added to Scanner/DrivePool, it would make a huge difference to the whole experience, as you would have the start of an analytical add-on family. (A minimal directory-sizing sketch, the data such a view is built on, appears after this list.)
  8. lateworm

    macOS port?

    Could there be a macOS version of this at some point? I could see it being really useful to a lot of the developers who are on MacBooks. Even with Fusion Drives and SSDs, I often hear complaints about lack of space and speed, and about having to carry external storage, because people have big, complex dependency trees that use absolute paths and are very tedious to sync.
  9. Hi, taking a page from unRAID, really. Given that Windows doesn't seem to indicate whether a drive has been spun down, is it possible to add a spin-down status for each drive attached to the pool, for easy reference? (A workaround sketch using smartmontools appears after this list.) I'm not sure if this is a feature most of your users would want, but it might help those who run the Ordered File Placement plugin to gauge the energy usage of their servers. Cheers, Chris
  10. I have a few systems with a similar hardware configuration: several internal hard drives (up to 10), each with its own drive letter, plus at least one external Drobo disk array. My conundrum has always been whether to include the Drobo in the pool or leave it out. I use duplication on all my folders. Drobos are great; they're fault tolerant using either single or dual parity. However, when coupled with a DrivePool box, adding the Drobo to a pool whose contents are all duplicated wastes drive space on data that is already protected. So it would be great to be able to qualify a disk as "fault tolerant," allowing DrivePool to write files/folders that are enabled for duplication to only the fault-tolerant disk (such as a Drobo). (A sketch of this placement rule appears after this list.) I have also had RAID-6 arrays that I broke up to put each drive in DrivePool; it would be nice to be able to mix and match single drives with RAID arrays without wasting disk space.
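
Sketch for post 1, in Python: the back-off-and-canary loop the post proposes. upload_chunk, send_canary, and RateLimitExceeded are hypothetical stand-ins for whatever CloudDrive does internally when Google returns userRateLimitExceeded; this illustrates the requested behaviour, not CloudDrive's actual code.

    import time

    class RateLimitExceeded(Exception):
        """Hypothetical: raised when Google answers userRateLimitExceeded."""

    CANARY_INTERVAL = 300  # seconds between small probe uploads while throttled

    def drain(queue, upload_chunk, send_canary):
        """Upload queued chunks; on a rate limit, go dormant and probe cheaply."""
        while queue:
            try:
                upload_chunk(queue[0])   # full chunk: wasteful if denied at the door
                queue.pop(0)
            except RateLimitExceeded:
                # Quota hit: stop hammering Google. New writes keep accumulating in
                # the local cache; only a tiny canary goes out every few minutes.
                while True:
                    time.sleep(CANARY_INTERVAL)
                    try:
                        send_canary()    # small test write, cheap even if rejected
                        break            # accepted again: resume full chunks
                    except RateLimitExceeded:
                        pass             # still throttled; stay dormant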
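
Sketch for post 2: the kind of glue script the proposed integration would absorb. snapraid sync and snapraid scrub are real SnapRAID commands (the data and parity drives themselves are defined in snapraid.conf); the scrub percentage is illustrative.

    import subprocess

    def snapraid(*args):
        """Run a SnapRAID subcommand, failing loudly on a non-zero exit."""
        subprocess.run(["snapraid", *args], check=True)

    def nightly_maintenance():
        snapraid("sync")              # update parity to cover new/changed files
        snapraid("scrub", "-p", "5")  # verify 5% of the array per run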
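
Sketch for post 3, using the pyca/cryptography package: BitLocker-style key wrapping. The drive's data key never changes; only the wrapping key derived from the password does, so a password change rewrites a few dozen bytes instead of the whole drive. (The post suggests an asymmetric wrapper; a symmetric AES key wrap is shown for brevity, but the structure is the same.)

    import os
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    def kek(password: bytes, salt: bytes) -> bytes:
        """Derive a 256-bit key-encryption key from the user's password."""
        return Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password)

    # Drive creation: a fixed data key encrypts every chunk, forever.
    data_key = os.urandom(32)
    salt = os.urandom(16)
    wrapped = aes_key_wrap(kek(b"old password", salt), data_key)
    # Only (salt, wrapped) is stored; chunks are encrypted with data_key.

    # Password change: unwrap with the old password, re-wrap with the new one.
    # No chunk is touched, so even a petabyte drive re-keys in milliseconds.
    recovered = aes_key_unwrap(kek(b"old password", salt), wrapped)
    salt = os.urandom(16)
    wrapped = aes_key_wrap(kek(b"new password", salt), recovered)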
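
Stopgap for post 5, assuming a Windows machine: PowerShell's Get-PhysicalDisk exposes model and serial number, so a printable table is one call away. Bay/slot positions are generally not exposed, so those would still be annotated by hand.

    import subprocess

    ps = ("Get-PhysicalDisk | "
          "Select-Object DeviceId, FriendlyName, SerialNumber, Size | "
          "Format-Table -AutoSize | Out-String")
    table = subprocess.run(["powershell", "-NoProfile", "-Command", ps],
                           capture_output=True, text=True).stdout
    print(table)  # print or save alongside the server's bay diagram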
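
Sketch for post 7: the recursive directory-size aggregation a SpaceSniffer-style map is built on; the live updates, filters, and colour coding would sit on top of data like this.

    import os

    def dir_size(path: str) -> int:
        """Total bytes of all files under path (symlinks skipped, errors ignored)."""
        try:
            entries = list(os.scandir(path))
        except OSError:
            return 0  # unreadable directory (permissions, etc.)
        total = 0
        for e in entries:
            if e.is_symlink():
                continue
            if e.is_file():
                total += e.stat().st_size
            elif e.is_dir():
                total += dir_size(e.path)
        return total

    # Print each top-level folder of a volume, largest first.
    root = "C:\\"
    sizes = {e.path: dir_size(e.path) for e in os.scandir(root) if e.is_dir()}
    for p, s in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print(f"{s / 2**30:8.2f} GiB  {p}")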
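
Workaround sketch for post 9, assuming smartmontools is installed: smartctl's -n standby option exits with status 2 when the drive is in a low-power state, and skips the query so the check itself does not spin the drive up.

    import subprocess

    def is_spun_down(device: str) -> bool:
        """True if the drive reports standby/sleep, without waking it."""
        r = subprocess.run(["smartctl", "-i", "-n", "standby", device],
                           capture_output=True, text=True)
        return r.returncode == 2

    for dev in ["/dev/sda", "/dev/sdb"]:  # smartmontools accepts this naming on Windows too
        print(dev, "standby" if is_spun_down(dev) else "active/idle")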
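
Sketch for post 10: the proposed placement rule, where one copy on a disk flagged fault tolerant satisfies x2 duplication. Names and structure are illustrative, not DrivePool's balancer API.

    from dataclasses import dataclass

    @dataclass
    class Disk:
        name: str
        free: int
        fault_tolerant: bool = False  # e.g. a Drobo or a RAID-6 array

    def choose_targets(disks, file_size, copies=2):
        """Pick target disks for a file that requests `copies`-way duplication."""
        usable = [d for d in disks if d.free >= file_size]
        # A self-protecting disk already survives a drive failure, so one copy
        # there is treated as equivalent to `copies` plain copies.
        for d in usable:
            if d.fault_tolerant:
                return [d]
        plain = sorted(usable, key=lambda d: -d.free)[:copies]
        return plain if len(plain) == copies else None  # None: cannot satisfy

    disks = [Disk("Drobo", 8_000_000_000_000, fault_tolerant=True),
             Disk("D:", 2_000_000_000_000), Disk("E:", 2_000_000_000_000)]
    print([d.name for d in choose_targets(disks, 4_000_000_000)])  # -> ['Drobo']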