Posts posted by steffenmand

  1. Recently, pretty much all files on some of my drives have started coming back corrupt.

     

    At first I thought, OK, beta errors happen, but now this is starting to happen on multiple drives.

     

    StableBit CloudDrive seems to prefetch fine and throws no errors, but what arrives is just corrupt data.

     

    The data in question has all been at rest, so we have not uploaded new corrupt data on top.

     

    This is the second drive this has happened to.

     

    The funny part is, if I try to copy one of the corrupt files from the cloud drive to my own disk, it shows copy speeds of 500+ MB/s and finishes in a few seconds, as if it didn't even have to download anything first. In the technical overview I can see the prefetch showing "Transfer Completed Successfully" with 0 kbps and 0% completion (attached a snippet of one so you can see).

     

    [Attached screenshot: lSh7nMx.jpg]

     

    Something is seriously wrong here: it seems the service can somehow skip downloading the actual chunk and instead just fills in empty data.

     

    I have verified that the chunks do exist on the Google Drive, and I am able to download them myself.

     

    I've been talking to other users who have experienced the same thing!

  2. Nice, I think I will try to combine my current setup with that, as I have a lot more bandwidth to utilize and was just waiting for 100 MB chunks. This way I can effectively increase it to 100 MB :)

    Seems this only works up to the max chunk size, i.e. you can't aggregate multiple chunks, only multiple parts of a single chunk.

  3. Alternatively, you can manually increase IoManager_DefaultMaximumReadAggregation in the service config file - by default it's set to 1 MB. This will combine consecutive 1 MB downloads into larger requests (i.e. while prefetching), while leaving smaller reads at 1 MB. That way you get the benefit of fewer API calls and larger downloads, without suffering the extra overhead of increasing the minimum download size for smaller reads.

     

    Here's some clarification on what it does: https://stablebit.com/Admin/IssueAnalysis/27309

     

    Nice, I think I will try to combine my current setup with that, as I have a lot more bandwidth to utilize and was just waiting for 100 MB chunks. This way I can effectively increase it to 100 MB :)
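As an aside, the advanced setting mentioned above lives in the StableBit CloudDrive service's configuration file. The file name, XML shape, and byte units below are assumptions based on a typical .NET service config and may differ between versions - treat this as an illustrative sketch, not the exact schema:

```xml
<!-- Hypothetical sketch of an advanced-settings override (file name and
     schema are assumptions; check your installed version before editing). -->
<configuration>
  <appSettings>
    <!-- Default is 1 MB; raising it lets consecutive 1 MB reads be
         combined into a single larger request while prefetching.
         Value assumed to be in bytes: 10 * 1024 * 1024 = 10 MB. -->
    <add key="IoManager_DefaultMaximumReadAggregation" value="10485760" />
  </appSettings>
</configuration>
```

Smaller one-off reads stay at 1 MB; only consecutive reads get aggregated, which is what keeps the API call count down during prefetch.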

  4. Gonna try these settings; at the moment it has worked fine, but it pauses to essentially buffer, which is what I am trying to fix. Have you tried to access two videos at once via Plex? I am curious what the performance is like when accessing multiple videos.

     

    I can play them fine without buffering. They only take 6 seconds to load.

     

    But the 20 MB minimal download was important for me, as 1 MB chunks would make the download too slow.

  5. Thanks, I always thought anything that was prefetched ends up being cached, but clearly that isn't the case.

     

    Your cache will save it too, but you don't have unlimited space locally, so the cache tries to keep what is used most regularly. I don't use a cache myself; as I tend to access different stuff all the time, caching gives me no benefit and just uses extra space.

  6. For fast connections it is also recommended to increase the minimal download size, so you can utilize your speed to get more data per thread. To change this you have to detach and reattach your drive.

     

    These are my settings, with no lag in Plex:

     

    20 MB chunks and 20 MB minimal download

     

    1 MB prefetch trigger

    400 MB forward (20 × 20 MB)

    1000 time window

     

    20 Download Threads

    10 Upload Threads

     

    1 Gbit/1 Gbit line, so these might not be right for you, but they give you an idea of how to increase speed on high-speed lines.
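The settings above fit together arithmetically: the 400 MB prefetch forward equals the 20 download threads times the 20 MB minimal download, so each thread can keep one full read in flight. A quick sketch of that sizing logic (plain arithmetic using the numbers from this post, not an official StableBit formula):

```python
# Prefetch sizing sanity check, using the settings quoted above.
chunk_mb = 20        # chunk size / minimal download per request
threads = 20         # download threads
line_mbit = 1000     # 1 Gbit/s downstream

# One "round" of fully parallel downloads:
forward_mb = chunk_mb * threads
print(forward_mb)            # 400 -> matches the 400 MB forward setting

# Time to fill that window at full line rate (ignoring overhead):
fill_seconds = forward_mb * 8 / line_mbit
print(fill_seconds)          # 3.2
```

On slower lines you would scale the forward amount down accordingly, since a prefetch window that takes minutes to fill just delays playback.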

  7. The time window is how long the prefetched data will be kept on the drive. For instance, in Plex this is nice if you want to pause or rewind; otherwise it would have to prefetch again.

     

    I use 20 MB chunks and have a 20 MB minimal download.

     

    20 download threads and 400 MB prefetch.

     

    My time window is set to 1000

  8. Yeah, due to the API pool change, there was a version or two that could cause multiple "StableBit CloudDrive" folders to appear.

     

    Since the different "accounts" (basically) had different permission scopes, it could cause multiple drives to show up. 

     

    Deleting the older one is the "simple" fix. But if you have multiple drives, this can be a bit more tricky.

     

    90% of the files ended up corrupt though :-( Will have to reupload.

  9. In .784 I'm getting the issue.

     

    Seems that the more I download, the faster it happens, so it could be related to how the chunks are handled. I did a stress test with constant downloading during the night and memory usage went to 99%. If I don't download non-stop it takes some days, which could be because it takes some days to reach the same chunk count.

  10. I have gigabit and I can't pull 300-400 on Google Drive. Using a student unlimited account. And have you tried watching while running uploads?

     

    Missed this one, but might as well reply!

     

    I can upload fine while watching.

     

    I have upload verification on, but have only set upload threads to 10, while download threads are set to 20. This way there are always 10 threads dedicated to watching - works for me :)

  11. Well, if you read my previous posts, I did mention having already sent a ticket about my problem, with no response either. Maybe it's because of the holidays, but come on, a paid service should have fast replies even during holidays.

     

    I have a question: have you even tried rclone, your competition? Your response says a clear no. rclone is able to encrypt both the files and the filenames.

     

    So anything that's encrypted will be a jumbled mess without any extension whatsoever,

    e.g. tvshow.mkv > hydgvd76473jfifhhfrj

     

    I get that block-level encryption might be much more secure, but at what cost? The speed difference is night and day.

     

    I'm getting 80/40 with StableBit

     

    versus 800/400 with rclone.

     

    I just want to share what I learnt, so unless you get it up and running at 800/400, I don't see any real benefit in using your product at all. It's just the way it is.

    I prefer CloudDrive over rclone every day :) Don't like the product? Then just move along; no reason to try to convince other parties to switch.

     

    I'm getting 400/400 Mbit with StableBit on a Google Drive. I'm pretty sure some of the overhead comes from the fact that it has to do all the encryption/decryption on the fly, which causes delays.

     

    400 Mbit is fine for me, and this way I know my files are properly protected.

     

    StableBit CloudDrive is also in beta, so bugs are to be expected. Trust me, a lot of us have followed the application for a long time, and it really has improved on a massive scale.

  12. Steffenmand,

     

    Can you give us an update on any progress you have made in getting these drives working again?  I am holding off on any update until I hear back from you.

     

    No progress - I gave StableBit access to my Google Drive accounts so they can look at things, so I'm waiting for them to get back. They most likely took some days off between Christmas and New Year's, so I think I will hear something soon :)

  13. I'm sorry if I came across aggressively. Trust me, I really want this to work.

     

    What are your settings? I would like to copy yours and test again. 300-400 download will make me buy it instantly, tbh.

     

    20 MB chunks,

     

    20 MB minimal download

     

    1 MB prefetch trigger

    400 MB prefetch forward

    1500 prefetch time window

     

    10 upload threads

    20 download threads

     

    To set the minimal download you have to detach and reattach the drive - it is set in the attach screen! If the drive was made in .463, you may have to remake the drive, as they added the feature afterwards.

  14. What speeds are you getting?

     

    I have no choice but to stop using your product if this goes on. No support to resolve my issue. Sent a ticket 2 days ago = no reply. Posted on the forum = no reply.

     

    TBH this is a dead service.

     

    If you could stop being a prick and actually look at what date it is! A lot of companies close down between Christmas and New Year's!

     

    I get around 300-400 Mbit download on my 1 Gbit line!

     

    Trust me, hostility doesn't result in good customer service.

  15. [SOLVED]

     

    Hi,

     

    After upgrading from .770 to the latest version, I suddenly got two corrupt drives that wouldn't load on mounting.

     

    As I wanted to see if a detach and reattach would fix it, I decided to detach both - with success.

     

    However, they are now registered as "Destroyed Drives".

     

    I've got 40 TB of data total on these drives and would really be sad if it was suddenly gone!

     

    What to do?! Do you have a way to recover these drives?

  16. The technical reason for this is that there is no way to ensure that the cache is synced between systems. That means there is no way to properly do this without causing corruption.

     

    Period. 

     

    So, the answer is always going to be "no".  Sorry. 

     

     

     

    The reason for this is that StableBit CloudDrive doesn't deal with files. It deals with raw disk data, e.g. sectors/clusters of data on the drive.

    When you write a file, NTFS or whatever file system you're using writes the data to the disk as an NTFS entry and as raw data on the drive.  

     

    However, this takes time to be uploaded to the cloud provider. 

    But in the meantime, what happens when another system adds data and uses the same blocks? The cloud provider doesn't care, and will happily replace the first system's data - and even its NTFS entry, if needed. Meaning that you lose that data.

     

     

    Since this is completely unacceptable, that is why there is the one-system limit for mounting the drive.

     

    The only way around this issue is if we offered some sort of intermediary service. That means a LOT of traffic, a lot of storage, and violates the "trust no one" encryption goal. 

     

     

    However, using a server of some sort (e.g. a file share server or web server) or another utility (such as BT Sync, SeaFile, etc.) would allow you to access the data from multiple locations without the potential corruption issue mentioned above.

     

    Couldn't "read only" connections be possible? It shouldn't be necessary to re-upload new data all the time, should it?
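The block-reuse hazard described above can be illustrated with a toy model. Everything here (the per-machine free list, the dict standing in for the cloud store) is a made-up simplification for illustration, not CloudDrive's actual on-disk format:

```python
# Toy illustration of why two machines mounting the same block store
# corrupts data: each machine keeps its own stale idea of which blocks
# are free, so the second machine happily reuses blocks the first
# machine already wrote.

store = {}  # shared cloud store: block number -> bytes

def write_file(free_blocks, data_chunks):
    """Allocate from this machine's *local* free list and upload."""
    placed = []
    for chunk in data_chunks:
        block = free_blocks.pop(0)   # this machine thinks the block is free
        store[block] = chunk         # upload clobbers whatever was there
        placed.append(block)
    return placed

# Both machines start with the same (stale) view: blocks 0..3 are free.
machine_a_free = [0, 1, 2, 3]
machine_b_free = [0, 1, 2, 3]

a_blocks = write_file(machine_a_free, [b"A0", b"A1"])  # A writes blocks 0, 1
b_blocks = write_file(machine_b_free, [b"B0", b"B1"])  # B also writes 0, 1!

# Machine A's file is now silently corrupted in the cloud:
print([store[b] for b in a_blocks])  # [b'B0', b'B1'] -- A's data is gone
```

This is exactly the failure the one-system mount limit prevents: neither machine can see the other's allocations until the upload lands, and by then the damage is done.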

  17. I got 20 MB chunks (Google Drive limit, would have wanted 100 MB) and 20 download threads.

     

    1 MB trigger and 400 MB forward (20 × 20 MB), with a long window of 1500 to allow for pausing and rewinding :)

     

    Got a 500 GB SSD as cache.

     

    In general, the optimal settings depend on your connection speed. In my case it is 1 Gbit/1 Gbit.
