
steffenmand (Members) · 418 posts · 20 days won
Everything posted by steffenmand

  1. It is only possible on drives made after it was implemented. If the drive was made after it was implemented, then you can simply detach and reattach to be able to change the settings
  2. I think you forgot to change the minimal download while creating/attaching. You can't change it before the stored chunk size is increased - you most likely left it at 1 MB by mistake, as that is the default. It's one of the last values you can change at the bottom, but remember to change it last :-) The minimal download works just fine, you just set it up wrong
  3. It is not bugged. It is because you made the drive with an old version of CloudDrive. Unfortunately the increased prefetch amount was only added in later versions, so the drives need to be remade to utilize this feature. I've been reading 20 MB chunks with prefetching with no issue at all since version .463 or something close to it. Recreate the drive using the newest version found here: http://dl.covecube.com/CloudDriveWindows/beta/download/
  4. You can only do it if you created a drive using one of the newer versions. If the drive was created with < .463 or something like that, it won't be able to increase the prefetch amount
  5. If you are copying from the drive, you should be reading fast enough that it kicks in with multiple threads. Are you on the latest version? And did you try rebooting and detaching/reattaching?
  6. After the initial prefetch, it will only grab a new chunk as you read a chunk - so after that initial prefetch you will most likely always see just one thread
  7. Will try to get logs. After I reboot it goes back to a normal 500-2000 ms, but it builds up over time. Will come back when I have logs!
  8. Experiencing the same. This is my bottleneck: http://imgur.com/UlWpiw9 HUGE HTTP delays - only happening through StableBit CloudDrive. 1 gbit full-duplex line; going to Google directly I can get files at full speed with no delay
  9. The high memory usage is due to NTFS storing data in memory about each file. Because of the high volume of files created by the chunk system, it will use more and more memory over time as files get indexed. It is down to the way NTFS works and not something to do with CloudDrive. Try adding a couple of million files to your own drive and you will see the same result over time
  10. Trust me... getting them to rush a release won't make things any better. But thank god they prefer to keep it in beta until the product really is stable enough. However, I must say that I have had data in the cloud with StableBit for more than a year now and I haven't lost any of it yet
  11. You have to manually edit the config file - you can't do it through the UI.
  12. Just disable the auto dismount thing in the advanced settings - that did it for me :-)
  13. I bet I could barely power my bed lamp with solar power here. Guess that is why we went with wind power - it makes a lot more sense here with our shitty weather. I've seen those videos of people doing barbecue in the sun and using their car as an oven for cookies - I hope to get to see San Diego within the next few years myself, though. I know the US has a whole different idea of what a king-size meal is compared to here, so pretty much food heaven!
  14. But at least it is green energy :-D But yeah, the internet speeds are nice - and they are cheap and uncapped
  15. I highly doubt it. I hate Sidse Babett Knudsen in Westworld because of her Danish accent shining through. I would rather highlight Nicolaj Coster-Waldau and Mads Mikkelsen, or the nice roots that Viggo Mortensen has
  16. You're welcome here. Being a small country has its benefits regarding infrastructure. You would hate our high taxes though! But we are the happiest country in the world for a reason
  17. Maybe it is the encryption they don't like. Most likely Amazon wants to be able to monitor all the content they host
  18. Aarhus, Denmark here. 1 gbit/1 gbit. Currently getting ~300-400 mbit upload and ~200-300 mbit download with Google Drive at 20 MB chunks. Previously we had 100 MB chunks for a very short time, where I could utilize ~950 mbit upload and ~600-700 mbit download (miss that). Amazon is giving me similar results. The current speed issues are caused by the HTTP response time, which is often between 5000-15000 ms, so even though the actual transfer is done in a fraction of a second, the HTTP response wait time makes it look like a much slower transfer (see the latency sketch after this list). The User Rate Exceeded error is also quite common in my logs
  19. It really sounds great - I'm pretty sure all the high-bandwidth users are pushing their limits. Using it for cold storage I would only benefit from large chunks, as I would rarely need to do much more than open a folder. Really looking forward to these things getting in - the rate exceeded issue also seems to have gotten better
  20. Why limit per app and not per user? So they want to punish an app for getting popular? Regarding lowering API calls, I say bigger chunks :-D (see the chunk-size arithmetic after this list)
  21. It does really basic encryption by default to ensure Google and Amazon don't index the files as videos/pictures/whatever. That encryption is easily cracked. The full disk encryption is proper encryption with a good key involved.
  22. Nope, not with CloudDrive, as everything is chunked anyway and each chunk gets its own hash value.
  23. After everything has been pinned and indexed it is decent. Still expect 30-60 min. in big libraries. The fewer files in a folder, the quicker it gets, so try to sort items into subfolders (I have a folder for each letter - see the sorting sketch after this list)
  24. Took me 5 reboots and 3 hours to mount my drives. When I finally got them mounted I had 83 red errors at the top. Even after they are mounted I'm getting Rate Limit Exceeded like every 20 seconds (see the backoff sketch after this list)
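
A few of the posts above (8 and 18) blame HTTP response time rather than raw transfer speed. Below is a minimal latency probe, standard-library Python only, that separates the wait for a response from the body transfer itself; the URL is just a placeholder to swap for whatever endpoint you want to test, not anything CloudDrive uses internally.

```python
# Hedged sketch: time how long an HTTPS request waits for a response versus
# how long the body transfer itself takes. Standard library only.
import time
import urllib.request

URL = "https://www.googleapis.com/discovery/v1/apis"  # placeholder endpoint; swap in your own

for attempt in range(5):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            wait_ms = (time.monotonic() - start) * 1000   # time until response headers arrive
            resp.read()
            total_ms = (time.monotonic() - start) * 1000  # time including the body transfer
        print(f"attempt {attempt}: response after {wait_ms:.0f} ms, done after {total_ms:.0f} ms")
    except Exception as exc:
        print(f"attempt {attempt}: failed after {(time.monotonic() - start) * 1000:.0f} ms ({exc})")
```

If the first number sits in the 5000-15000 ms range described in post 18 while the gap between the two numbers stays small, the bottleneck is the response wait, not the transfer.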
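On the "bigger chunks" point in post 20, the arithmetic is simple: with roughly one request per chunk, increasing the chunk size divides the number of API calls needed for the same data. The numbers below are only an illustration, not a description of CloudDrive internals.

```python
# Rough arithmetic: about one request per chunk, so larger chunks mean fewer API calls.
FILE_SIZE_GB = 50  # example size only

for chunk_mb in (1, 10, 20, 100):
    calls = FILE_SIZE_GB * 1024 // chunk_mb
    print(f"{chunk_mb:>3} MB chunks -> ~{calls:,} requests for {FILE_SIZE_GB} GB")
```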
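The "folder per letter" layout from post 23 can be scripted. This is a hypothetical sketch: the X:\Movies path is made up, the "#" bucket for non-letter names is my own choice, and since it moves folders around you should test it on a copy first.

```python
# Hedged sketch: move each top-level folder into a single-letter parent,
# e.g. "Avatar (2009)" -> "A\Avatar (2009)". Test on a copy of the data first.
import shutil
from pathlib import Path

LIBRARY = Path(r"X:\Movies")  # hypothetical library root on the CloudDrive volume

for entry in sorted(LIBRARY.iterdir()):       # materialize the listing before moving anything
    if not entry.is_dir() or len(entry.name) == 1:
        continue                              # skip files and existing letter folders
    letter = entry.name[0].upper()
    if not letter.isalpha():
        letter = "#"                          # bucket for titles starting with digits or symbols
    target = LIBRARY / letter
    target.mkdir(exist_ok=True)
    shutil.move(str(entry), str(target / entry.name))
```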
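The Rate Limit Exceeded errors in post 24 come back from the provider's API, and the usual client-side answer is exponential backoff with jitter. The sketch below shows only the general pattern; RateLimitError is a placeholder for whatever your client library raises, and this is not a claim about how CloudDrive retries internally.

```python
# Generic retry-with-backoff pattern for rate-limit style errors.
import random
import time


class RateLimitError(Exception):
    """Placeholder for whatever your client library raises on a 403/429 rate limit."""


def with_backoff(request, max_tries=6):
    """Call request(); on a rate limit, wait 1 s, 2 s, 4 s, ... plus jitter, then retry."""
    for attempt in range(max_tries):
        try:
            return request()
        except RateLimitError:
            if attempt == max_tries - 1:
                raise                                   # give up after the last attempt
            time.sleep(2 ** attempt + random.random())  # exponential backoff with jitter
```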