Covecube Inc.

steffenmand

Members - Content Count: 418 - Days Won: 20
Everything posted by steffenmand

  1. Maybe this is related to the Internal Error issues we see once we are already above the 10 TB mark - writing new chunks then fails because the drive is already past that limit.
  2. Have to note that it downloads and reads just fine - it's only the writes that fail.
  3. I know the reason - I was here as well, you know, all back in the days when the system was single-threaded and only had 1 MB chunks. Also, I never said the issue was bandwidth - I just said that internet speeds have increased and that more people could potentially download bigger chunks at a time. EDIT: with the chunks being the bottleneck as my HDDs hit their max IOPS. Thus 100 MB chunks could result in less I/O and thus higher speeds. I completely agree that partial reads are a no-go - which is why I mentioned increasing the minimal download size (this could be set to the chunk size, you know). So I ag…
  4. I can't see how it's different from 20 MB vs 100 MB. The content needed is usually within a chunk download anyway. Your problem is more if you do PARTIAL reads of a chunk, in which case you end up having too many actions on a file - however, I always download full chunks. Thus I agree that it shouldn't be possible to download partials on such a chunk size, but if I can finish 100 MB in 1 second anyway, then it doesn't really matter for me.
  5. I actually think this might be a bug!

     [IoManager:214] HTTP error (InternalServerError) performing Write I/O operation on provider.
     20:02:40.7: Warning: 0 : [IoManager:214] Error performing Write I/O operation on provider. Failed. Internal Error
     20:02:40.8: Warning: 0 : [ApiGoogleDrive:244] Google Drive returned error (internalError): Internal Error
     20:02:40.8: Warning: 0 : [ApiHttp:244] HTTP protocol exception (Code=InternalServerError).
     20:02:40.9: Warning: 0 : [IoManager:244] HTTP error (InternalServerError) performing Write I/O operation on provider.
     20:02:40.9: Warnin…
  6. An option could also be to expose it in the settings file. Then only people who are tech-savvy will know what to do - and will also be aware of the requirements for the change!
  7. 20:09:24.7: Warning: 0 : [ReadModifyWriteRecoveryImplementation:371] [W] Failed write (Chunk:133, Offset:0x00000000 Length:0x01400500). Internal Error
     20:09:24.9: Warning: 0 : [TemporaryWritesImplementation:371] Error performing read-modify-write, marking as failed (Chunk=133, Offset=0x00000000, Length=0x01400500). Internal Error
     20:09:24.9: Warning: 0 : [WholeChunkIoImplementation:371] Error on write when performing master partial write. Internal Error
     20:09:24.9: Warning: 0 : [WholeChunkIoImplementation:371] Error when performing master partial write. Internal Error
     20:09:24.9: Warni…
  8. But this would be avoided by having a large minimum required download size. The purpose is to download 100 MB at a time instead of 20 MB.
  9. As internet speeds have increased over the last couple of years, would it maybe be possible to increase the chunk size up to 100 MB again? With a 10 Gbit line on my server, my bottleneck is pretty much the 20 MB chunks that are slowing down speeds. An increase to 100 MB could really make an impact here speed-wise, especially because the current size builds up the disk queue, which is causing the bottleneck. I'm fully aware that this isn't for the average user - but it could be added as an "advanced" feature with a warning about high speeds being required! Besides that - I still do love…
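A rough sketch of the arithmetic behind this request (the 10 Gbit/s line speed is the figure from the post; full line utilization is an optimistic, illustrative assumption):

```python
# Transfer-time math for different chunk sizes on a 10 Gbit/s line.
LINE_GBIT = 10                        # assumed line speed, gigabits per second
BYTES_PER_SEC = LINE_GBIT * 1e9 / 8   # usable bytes per second at full utilization

def chunk_time_ms(chunk_mb: float) -> float:
    """Time to move one chunk at full line speed, in milliseconds."""
    return chunk_mb * 1024 * 1024 / BYTES_PER_SEC * 1000

print(f"20 MB chunk:  {chunk_time_ms(20):.1f} ms")    # ~16.8 ms
print(f"100 MB chunk: {chunk_time_ms(100):.1f} ms")   # ~83.9 ms
# 5x larger chunks mean 5x fewer requests (and disk-queue entries) per GB:
print(f"requests per 1 GB: {1024 / 20:.0f} vs {1024 / 100:.0f}")
```

Even at full line speed a 100 MB chunk completes in well under 100 ms, so fewer, larger requests trade a negligible per-chunk latency increase for a fifth of the request and disk-queue overhead.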
  10. Yes - it just creates a separate folder to save in.
  11. You can always follow the changes in the beta here: http://dl.covecube.com/CloudDriveWindows/beta/download/changes.txt - I always use it to see if an update was needed on my side. Betas can ofc. be found at http://dl.covecube.com/CloudDriveWindows/beta/download/
  12. Try the newest version: http://dl.covecube.com/CloudDriveWindows/beta/download/StableBit.CloudDrive_1.1.2.1174_x64_BETA.exe - it should fix the issues.
  13. Depending on the sector size chosen when creating the drive, you should be able to expand with no issues. Try going into the hard drive partition settings in Windows and expanding the drive there - maybe it's just sitting as an unpartitioned part of the drive which needs to be extended to the current size. You will never be able to go above your sector size limit, though. I recommend never going above 50 TB, so chkdsk will always work - above that it will fail. Then it's better to do two different drives.
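As a side note on the size ceiling mentioned above: assuming the drive is formatted NTFS, the hard limit comes from the cluster (allocation unit) size chosen at format time, since an NTFS volume can hold at most about 2^32 clusters. A quick sketch of the resulting ceilings:

```python
# NTFS caps a volume at roughly 2**32 clusters, so the cluster
# (allocation unit) size chosen at format time fixes the maximum
# size the volume can ever be expanded to.
MAX_CLUSTERS = 2 ** 32

def max_volume_tb(cluster_kb: int) -> float:
    """Approximate NTFS volume-size ceiling for a given cluster size, in TB."""
    return MAX_CLUSTERS * cluster_kb * 1024 / 1024 ** 4

for kb in (4, 8, 16, 32, 64):
    print(f"{kb:>2} KB clusters -> ~{max_volume_tb(kb):.0f} TB max volume")
```

So a drive formatted with the common 4 KB cluster size tops out around 16 TB, no matter how much cloud storage sits behind it, which matches the "you will never be able to go above your sector size limit" point.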
  14. steffenmand

    Drive Duplication

    Maybe this could be a slow process taking a maximum of maybe 100 GB per day or something (maybe a limit you set yourself, where it shows it would take ~X days to finish), with a progress overview. Migrating maybe 50 TB manually would be a huge pain, while I would be OK with just waiting months for it to finish properly - ofc. knowing the risk that data could be lost in the meantime, as it's not duplicated yet. It would also make it possible to decide to switch to duplication later on instead of having to choose early on. With Google Drive it does limit the upload per day, so could be…
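The waiting-time trade-off above is easy to quantify (the 100 GB/day cap and 50 TB drive size are the figures from the post; the 750 GB/day figure is Google Drive's documented daily upload limit):

```python
# How long a rate-limited background duplication pass would take.
def days_to_duplicate(data_tb: float, gb_per_day: float) -> float:
    """Days needed to duplicate data_tb terabytes at gb_per_day gigabytes/day."""
    return data_tb * 1024 / gb_per_day

print(f"{days_to_duplicate(50, 100):.0f} days")   # at the suggested 100 GB/day cap
print(f"{days_to_duplicate(50, 750):.1f} days")   # at Google Drive's ~750 GB/day upload cap
```

At 100 GB/day a 50 TB drive takes well over a year to duplicate, which is why a progress overview and a user-adjustable limit would matter for this feature.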
  15. Save as PDF and just cancel
  16. I got a dedicated SSD for my cache; maybe that is why I never encountered it. Maybe your SSDs hit their max IOPS due to the OS and other stuff also running there?
  17. Usually it would continue or throw an actual error. I always see the one you see in the middle.
  18. Does it throw an error when it ends? Insufficient space is expected.
  19. Try chkdsk e: /f instead. /r locates bad sectors and recovers readable information, which takes far longer.
  20. I got 19 drives on 5 accounts and no issues at all - I'm at 10 download threads and 12 upload threads with 20 MB chunks and limited throttling. My guess is something in your system is causing it.
  21. No, but using multiple accounts you could in theory increase speeds, while also ensuring stability by using multiple providers.
  22. It's not because it is marked as Offline? Did CloudDrive give a notice that it couldn't assign the drive letter?