steffenmand

Members
  • Posts: 418
  • Joined
  • Last visited
  • Days Won: 20

Everything posted by steffenmand

  1. Did you create your drive with 20 MB chunks? If not, you should do that and also increase the minimal download size to 20 MB. If it's currently at 1 MB, you get limited by the thread count and the response time for retrieving 1 MB per thread (see the throughput sketch after this list).
  2. My disk only lost the references to the files - recovery programs could find them all. So I think your first fix should've solved it, as I'm certain it happened because an old chunk with data indexing was retrieved! Nice with the data duplication though. Quick question - does DrivePool make it possible to "RAID" different Google Drive accounts into one for less throttling and better speeds? If not, this could be a nice request for either StableBit CloudDrive or DrivePool, as it could help in many ways, also regarding data safety. So pretty much building a RAID across drive accounts.
  3. This is because it will initially prefetch multiple chunks with multiple threads. After the initial prefetch is retrieved, it only downloads as required to keep the prefetched amount topped up (see the prefetch sketch after this list).
  4. Latest version doesn't do it! So now I'm starting the loooong process of moving files to new drives!
  5. I think you are on the right track - my drives lost the references to the latest files! The drives that were not affected were the ones I didn't touch for a while.
  6. Simply not true - the issue is due to your provider having issues. If it was Google Drive, it's because Google sends old data back during outages - at least that's what they think. They have listened so much to their customers, and the product is what it is today because of that. You just can't blame them for issues caused by someone else :-)
  7. Force attach just makes it iterate the chunks - the drive still works! It just takes a long time as it has to index all the chunks.
  8. Well, it's different based on the use case. I would have no need for a feature like that, but I guess it's because not everyone has flat-rate internet, or their internet gets saturated when it's running. What about limiting the upload speed?
  9. You could just pause uploads and manually start them when you want! Pausing downloads is easily done by not accessing the data.
  10. Think you got the same issue that I get on all new drives... the NTFS is formatted wrong and the partition volume can't be read correctly. Thus neither chkdsk, TestDrive nor anything else can read it!
  11. Got a notice on my support ticket that they might have found a way to avoid this in the future! Hopefully a fix is coming soon. It only fixes it for future outages, but better than nothing! I'm almost used to constant recovery on drives anyway :-P
  12. TestDrive says bad GPT data - are the drives being formatted wrong?
  13. I'm also a bit in the dark as to how it can happen - but I guess it must be Google throwing wrong messages back or something. Usually I can get mine fixed with chkdsk /f - however, all my new drives seem to get an error, so chkdsk doesn't work. My ReFS drives never had any issues though - so maybe that's the way to go.
  14. The issue is when Google Drive has outages. Then data gets corrupted.
  15. Hi, all new drives created on the newest version seem to give me: "Unable to determine volume version and state. CHKDSK aborted." whenever I run chkdsk on any of them. Is the drive somehow being created in a wrong way?
  16. Do a chkdsk /f on the drive. Google had an outage yesterday, so that could have caused some data corruption.
  17. Someone had a bad day for sure. It's a forum and made for discussion.
  18. Calm down. I just said I don't see the need for such a feature.
  19. Don't see the need - it ain't illegal to use StableBit CloudDrive.
  20. You can detach and reattach on another machine at any time - the key is just needed to attach.
  21. You can't recover the key - that's why they ask you to print it when making the drive. If you prefer, try to make your drives with your own passphrase; then it might be easier to remember, but most likely also easier to crack.
  22. You should never go above 60 TB for a drive, as chkdsk won't work.
  23. You are thinking the RAID controller is the issue? This happens both on a RAID and on a standalone PCIe NVMe card though.
  24. What is StableBit Cloud? Sounds interesting :-)
  25. Hi, I was wondering what impacts the write speed to the disks. I've been upgrading some cache servers lately (although I couldn't software-RAID them) and am able to transfer at around 500 MB/s normally to the disk (an NVMe PCIe disk). However, when mounting a drive and transferring to that drive, I get a maximum of 50 MB/s. Could it be the encryption slowing it down, and how do I figure out the bottleneck (see the encryption benchmark after this list)? The HDD is far from fully used and the CPU doesn't seem to be anywhere near 100%. I want to upgrade the hardware appropriately, but need to make sure what the issue is so I purchase correctly! I know the CPUs are a bit old at the moment (2 x X5570) and I will be upgrading soon to 4 x E7-4860v2. Anything else I can do to speed it up?
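
Throughput sketch for the minimal download size point in item 1. This is only a back-of-the-envelope model in Python: the 10 threads and ~1 s response time are illustrative assumptions, not measured values, and real transfers are also capped by line speed and provider throttling.

```python
# Rough model: each download thread retrieves roughly one "minimal download"
# worth of data per request round trip, so tiny requests waste most of the
# round trip on latency. All numbers below are illustrative assumptions.
def effective_throughput_mb_s(min_download_mb, threads, round_trip_s):
    return threads * min_download_mb / round_trip_s

print(effective_throughput_mb_s(1, 10, 1.0))   # ~10 MB/s ceiling with 1 MB requests
print(effective_throughput_mb_s(20, 10, 1.0))  # ~200 MB/s ceiling with 20 MB requests
```

In practice a 20 MB request takes somewhat longer than a 1 MB one, but the per-request overhead stops dominating, which is why larger chunks and a larger minimal download size help.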
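Prefetch sketch for the behaviour described in item 3: a minimal "keep the window full" prefetcher. The fetch_chunk callable, window size and thread count are hypothetical stand-ins for illustration, not CloudDrive's actual API or defaults.

```python
from concurrent.futures import ThreadPoolExecutor

def read_sequentially(fetch_chunk, total_chunks, window=8, threads=4):
    """Yield chunks in order while keeping `window` chunks requested ahead."""
    with ThreadPoolExecutor(max_workers=threads) as pool:
        # Initial burst: several chunks are requested in parallel up front.
        pending = {i: pool.submit(fetch_chunk, i)
                   for i in range(min(window, total_chunks))}
        next_chunk = len(pending)
        for i in range(total_chunks):
            yield pending.pop(i).result()   # wait for the chunk the reader needs now
            if next_chunk < total_chunks:   # top the window back up: one new request
                pending[next_chunk] = pool.submit(fetch_chunk, next_chunk)
                next_chunk += 1             # per chunk consumed
```

After the initial burst, the number of outstanding requests stays constant, matching the description above: it only downloads as fast as the data is actually consumed.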
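For the write-speed question in item 25: one way to rule encryption in or out is to benchmark raw AES throughput on a single core. This sketch uses the Python 'cryptography' package; the AES-CTR mode and buffer size are assumptions for the test only, not necessarily what CloudDrive uses internally.

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_throughput_mb_s(total_mb=512, buf_mb=4):
    """Encrypt `total_mb` of random data and return MB/s on one core."""
    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    buf = os.urandom(buf_mb * 1024 * 1024)
    start = time.perf_counter()
    for _ in range(total_mb // buf_mb):
        encryptor.update(buf)   # ciphertext is discarded; we only time the work
    return total_mb / (time.perf_counter() - start)

print(f"Single-core AES-CTR: {aes_throughput_mb_s():.0f} MB/s")
```

If the result is far above 50 MB/s, encryption alone is unlikely to be the cap and the cache disk or chunked upload path is a better place to look; if it is in the same range, the CPU (for example an older Xeon without hardware AES instructions) is a plausible suspect.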