Everything posted by steffenmand

  1. Would be great if dynamic disks could be used as the cache drive - I use them for mirroring to avoid corruption, but am unable to use them for CloudDrive. I understand why USB drives shouldn't be used, but a dynamic disk is pretty much just a group of normal disks!
  2. Recuva can recover it. It takes time, but most, if not all, of it should be recoverable.
  3. The more threads, the more chunks it downloads concurrently, which increases speeds - but then we just end up getting throttled. I'm almost certain that minimal download size is meant for partial downloads, so setting it above the chunk size doesn't seem to have any impact (at least on my drives). This is because other providers offer up to 100 MB chunks, so people might want to download only part of a chunk instead of the full 100 MB. Logically, a 40 MB read would also mean two downloads, and thus two threads anyway - see the sketch below.
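
    To illustrate the "two downloads" point, here is a rough sketch (my own illustration, not CloudDrive's actual code), assuming reads are served by fetching whole chunks:

        import math

        def chunk_requests(read_size_mb: float, chunk_size_mb: float) -> int:
            """Number of whole-chunk downloads needed to satisfy a read."""
            return math.ceil(read_size_mb / chunk_size_mb)

        # A 40 MB read against 20 MB chunks needs two downloads,
        # which can run on two threads in parallel anyway.
        print(chunk_requests(40, 20))  # -> 2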
  4. Just remember the hard upload cap of 750 GB/day on GDrive. So if upload is your concern, you won't be able to do more than that on a single account per day.
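
    To put that cap in perspective, a quick back-of-the-envelope calculation (my own numbers, assuming the cap is spread evenly over 24 hours):

        # GDrive's 750 GB/day upload cap expressed as a sustained rate
        cap_gb_per_day = 750
        seconds_per_day = 24 * 60 * 60

        mb_per_s = cap_gb_per_day * 1000 / seconds_per_day  # ~8.7 MB/s
        mbit_per_s = mb_per_s * 8                           # ~69 Mbit/s
        print(f"{mb_per_s:.1f} MB/s (~{mbit_per_s:.0f} Mbit/s sustained)")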
  5. My guess is that you have the same "issue" as me, where the response time per download thread is long, increasing the time per thread and thus slowing down speeds. As GDrive works with chunks of at most 20 MB, it has to request a chunk and await the reply from Google; in my case this takes 2300-2800 ms per chunk. You can see the details under "Technical details" (found top right on the settings wheel) - select your drive and the details are shown at the bottom. I have requested 100 MB chunks for GDrive with a minimum required download size of 20 MB or more to help speed things up for us high-speed users.
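
    To show why the response time dominates, a rough model (a sketch under my own assumptions: each thread waits the full response time per chunk, and the transfer itself is nearly instant on a fast line):

        def throughput_mb_s(chunk_mb: float, response_s: float, threads: int) -> float:
            """Rough drive throughput when each thread spends `response_s`
            waiting per chunk and the transfer time is negligible."""
            return threads * chunk_mb / response_s

        # 20 MB chunks at ~2.5 s response per request:
        print(throughput_mb_s(20, 2.5, 10))   # 80 MB/s with 10 threads
        # 100 MB chunks at the same response time would be 5x faster:
        print(throughput_mb_s(100, 2.5, 10))  # 400 MB/s with 10 threads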
  6. Minimal download size above 20 MB on Google Drive makes no difference on my system.
  7. srcrist, nobody said it was CPU threads. HOWEVER, your CPU has a limit on how many threads it can run system-wide at a time - so setting it higher than the CPU's capacity will only build queues. Even though they are not the same type of threads, it still means a limit on concurrent operations. Also, I'm fairly certain that "minimal download size" will never download anything larger than the chunk size. So if the chunk size is set to 1 MB, then 1 MB is the max download. I'm pretty sure the feature was made for partial downloads of data from larger chunks - feel free to correct me here.
  8. You can do several drives with no problem. Just note that the limits on upload, API calls, etc. will be shared between the drives.
  9. My guess is that you made the drive with 1 MB chunks. Your CPU has 16 cores and can run up to 32 threads total, so 14 threads should be fine. For high speeds, always remember to create drives with large chunks - otherwise you can't get proper speeds.
  10. The more threads, the more it downloads at the same time, and thus the faster the speed. However, you should of course not run more threads than your CPU can handle, and the cloud provider can also impose an upper limit on API calls. I use 14 threads.
  11. Did you choose big chunks when creating the drive? Also remember to increase the thread count under I/O operations.
  12. Hmm, it seems StableBit is accessing it for some reason. I would try taking it offline again and back online. Chkdsk /f should fix it (or at least tell you if files couldn't be recovered). If anything has to be recovered, you can use Recuva's deep scan to find and recover the files - it is a bigger task, though.
  13. Did you try chkdsk /f? I use Recuva for recovery.
  14. Been using this product for several years. Usually a drive can be fixed by either: 1. taking the disk offline and removing the cache - put it back online and it will try to re-download the filesystem files; or 2. running chkdsk DRIVELETTER: /f to fix the filesystem. I have only lost data once, and that was way, way back. The last incident was due to a Google Drive issue causing people to get corrupted files locally (I suspect). But as others mention, remember that this is not a backup solution.
  15. As long as it is only attached in one place at a time, you can move it around as you want.
  16. Detach and then reattach, choosing the new cache drive.
  17. Thanks for your thorough explanation, but no, it's not a hardware issue - we're running enterprise-level gear in a top-end datacenter. The latency comes from Google's response times, not the internet itself.
  18. You are not counting the thread response time into the equation. The delay you get there ends up giving you slower responses. In my case that is 2500-2800 ms, i.e. I lose 2.5 seconds per verification. Of course, with very few threads that's manageable, but with a high thread count you can't really increase it further without getting throttled - see the sketch below.
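
    Sketching out the cost I mean (assuming the upload thread blocks on verification before starting the next chunk, which is how it looks to me; the numbers are examples):

        def upload_rate_mb_s(chunk_mb: float, upload_s: float, verify_s: float) -> float:
            """Effective per-thread upload rate when every chunk upload is
            followed by a blocking verification round-trip."""
            return chunk_mb / (upload_s + verify_s)

        # 20 MB chunk, ~1 s to push on a fast line, ~2.5 s verification:
        print(upload_rate_mb_s(20, 1.0, 2.5))  # ~5.7 MB/s instead of 20 MB/s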
  19. But as far as I saw, a new upload thread doesn't start before the old one has been verified - am I wrong here?
  20. If data is lost by the provider, it can fuck things up - but most of it can then be recovered using chkdsk; it just takes a long time when you have a lot stored.
  21. I think you mean Mbit :-P Yes, it all depends on the response time you have. Speed is not the issue; it's my response time to Google's servers - you're just lucky to be closer. Plus I have upload verification on, which also cuts upload speeds. I get around 2500-2800 ms response time per thread and then an instant download, so fewer calls with bigger downloads would do wonders for me.
  22. And this is what would fix it! People using 100 MB chunks are usually high-bandwidth users, so forcing a minimum download of 20 MB or more should be fine! Right now we just have a bottleneck where the response times slow down downloads. I would go for 100 MB downloads anyway.
  23. Look at my edit - I did acknowledge it was there (I remember it as well), but they removed it an update or two later (updates came almost every other day back then). Yes, but that was with a use case of partial downloads in mind. I do remember them highlighting responsiveness issues as the reason we couldn't go above 20 MB, but the limit on the max number of reads per day per file is of course a valid argument as well. I was unaware that they kept it for the remaining providers. I remember saying that those block sizes should be intended for high-bandwidth use, i.e. a minimal download of 50-100 MB, and I still think that is the case today. In general, the desire is to let high-bandwidth users utilize their speeds. The bottleneck today is the response time per thread, which causes speed to drop massively! A good solution is 100 MB chunks with a minimal download of at least 20 MB - again, a lot of "if this then that". I will reach out to Christopher about getting my old feature request back on track.
  24. steffenmand

    Backup system

    Would also love a feature to make a "RAID" across cloud accounts. It would also make it possible to utilize multiple GDrive accounts to increase the thread count, or just to avoid throttling in general. Plus, it's great protection against data corruption. Of course it would require more bandwidth, but for high-end 1 Gbit or 10 Gbit users that shouldn't make a big difference - see the sketch below.
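
    Just to sketch what I mean by a "RAID" over accounts (purely hypothetical - the account names and provider calls are made up for illustration): every chunk gets written to all accounts, and reads race the copies and take whichever answers first.

        from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

        ACCOUNTS = ["gdrive-account-1", "gdrive-account-2"]  # made-up names

        def upload(account: str, chunk_id: int, data: bytes) -> None:
            ...  # hypothetical provider call

        def download(account: str, chunk_id: int) -> bytes:
            ...  # hypothetical provider call

        def mirrored_write(chunk_id: int, data: bytes) -> None:
            # Write the same chunk to every account in parallel.
            with ThreadPoolExecutor(len(ACCOUNTS)) as pool:
                list(pool.map(lambda acc: upload(acc, chunk_id, data), ACCOUNTS))

        def fastest_read(chunk_id: int) -> bytes:
            # Race the accounts; return the first copy that arrives.
            with ThreadPoolExecutor(len(ACCOUNTS)) as pool:
                futures = [pool.submit(download, acc, chunk_id) for acc in ACCOUNTS]
                done, _ = wait(futures, return_when=FIRST_COMPLETED)
                return next(iter(done)).result()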
  25. We never had bigger chunks than 20 MB (and I was the one pushing early on for blocks larger than 1 MB). EDIT: Or actually we did, but they removed it in no time. If it really was a provider issue, it should be an easy "if this then that" solution, as they know which provider we are building drives on. Their worry was the drive becoming unresponsive because chunks would take time to download - which wouldn't be the case with the right setup. We are only now seeing the limitations of that block size, as both servers and internet connections get faster and the data keeps getting bigger.