Everything posted by steffenmand

  1. I noticed! Really looking forward to this advanced pinning engine. Now the only thing I'm missing is somewhat larger chunks (maybe 50 MB if you find 100 MB too large). In general, StableBit CloudDrive has become a really awesome app. When the advanced pinning feature comes online I will post about it on the Plex forums, as it will then be a perfect solution for Plex users!
  2. Ahh, I forgot to think in chunks, so we end up with a huge number of files. Would a ReFS drive perform better in this regard?
  3. NTFS metadata can't be that much. But glad to know I'm not the only one experiencing it.
  4. Using 0.629 I have been experiencing a slow RAM usage build-up, making my RAM hit 100% after a week or a week and a half. My server has 32 GB RAM and is only running CloudDrive and Plex Media Server. Could the memory usage during caching have a memory leak?
  5. Hi Christopher! Was this added as a request? It really does cause the drive to go into overload in folders with lots of files. It will just keep loading data from the drive for ages, often with prefetching kicking in on each file if the trigger is set at 1 MB. It really would be a feature that makes everything feel a lot smoother. Currently my usage is: open folder -> find the file I want as quickly as possible -> open it -> hurry and close the main folder, so I can stop all the downloading of MFT headers.
  6. I will happily be a test bunny for you with various chunk sizes on a 1 Gbit line.
  7. So for larger chunks, just warn about the high bandwidth requirement :-) Not all settings will work for everyone in every application.
  8. Possibly the MFT records, yeah - not sure if that is what Windows uses to "CheckIfFileExists". Like mentioned before, Plex does this to all video files when trying to access the media in the web player or a media player = massive load time per title. The file headers would also be great, as you have to load data from all files when entering a folder. Having a folder with thousands of images means a massive load, which seems to just give up at some point, showing only data for some of the files. Letting both these types of data get cached (by user choice) would be an awesome feature which could speed things up in several places! If you could add it as a request, then Alex can look into whether it is possible and whether it is worthwhile versus the time it takes.
  9. Come on, just a quick yes/no and make a man happy
  10. 100 MB chunks were on legacy drives... I think you have to recreate! Hoping for the 100 MB chunks to return though - tons of throttling and constant retries with the current 20 MB.
  11. The drive only changed due to the implementation of the minimum chunk size, which required changes to the drive. Almost all updates are client-side only, which won't require a new drive. Regarding your copy of data, you could use Ultracopier to limit the copy speed so that CloudDrive can keep up with the upload. I seriously doubt they will close their unlimited space, but I'm sure they will enforce closure of accounts with maybe 50 TB or more stored. A good safe bet if you want to store huge amounts of data is to have several accounts, log into them all, and keep maybe a maximum of 10 TB per account. It will cost more, but it gets you noticed less by Amazon/Google. Currently using Google myself.
  12. No, the change to drives is due to the implementation of the minimum download size, which makes it possible to download larger pieces at a time. If 1-2 MB pieces are fine for you, just keep your existing drives.
  13. A LOT of speed and stability! You should update! You have to make new drives to utilize things fully though.
  14. Weird! I have no issues with prefetching at all - besides a block when uploading, due to prefetching threads getting stuck in a ChecksumBlocksReadCopy state until the prefetch time window is exceeded.
  15. Two numbered versions behind the most recent is what they said before, though .606 is running perfectly for me and prefetching was improved recently.
  16. You have 105 MB pinned + 919 MB cached = 1024 MB. Pinned data is data about folders etc. to make your browsing better. Cached data is partial file data kept on your local drive to help speed up load times - this value can be changed at any point to none or a lot more. The left pie chart shows the data on your cache drive and which parts are what. Mostly this chart will be dominated by "to upload" when you have copied data to your drive. The right side shows how much of your data is located locally vs. in the cloud. You can resize your drive in the drive options located under the left pie chart.
  17. Try taking a memory dump and sending it to them. After that, reboot and then uninstall.
  18. Are you able to cancel the format in Disk Management? If so, try that and do a new format.
  19. Try going into Disk Management in Windows and see if formatting even began; if not, just format it from there.
  20. I don't know if it is possible, but it would be nice if you somehow could cache the header/file-attribute data of files locally so Plex could index nice and fast. Loading in some videos takes some time, as does the constant load to check that the info is the same and that the file still exists. Just tried the drive with Plex and everything runs fine, but indexing is slow, and you also get lag when entering a title because Plex tries to read the header/file attributes of each file and asks the OS to confirm the file exists. If it is possible, it would be a nice feature to be able to activate caching of the headers and/or file attributes of each file :-) Besides helping in Plex, I also believe it would make folders load faster while browsing in Windows, as they use the headers as well - especially with lots of files. Perhaps using the GetFileAttributesEx function in the WinAPI (a quick sketch of the call is below this post).
      8# I have seen threads being stuck in the state "ChecksumBlocksReadCopy", spending CPU and blocking new threads. I see this issue happening if you have an upload pending with upload threads running and you start prefetching. It will then block both upload and download and have threads sitting stuck in the "ChecksumBlocksReadCopy" state until the prefetch time window runs out.
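      Roughly the kind of call I mean with GetFileAttributesEx - just a sketch, the path is made up:

      #include <windows.h>
      #include <iostream>

      int main() {
          // One call both confirms the file exists and returns its basic
          // attributes (size, timestamps) without opening it - this is the
          // metadata that would be nice to have cached locally.
          WIN32_FILE_ATTRIBUTE_DATA info;
          if (GetFileAttributesExW(L"D:\\Media\\Movies\\Example (2016).mkv",
                                   GetFileExInfoStandard, &info)) {
              ULARGE_INTEGER size;
              size.LowPart  = info.nFileSizeLow;
              size.HighPart = info.nFileSizeHigh;
              std::wcout << L"Exists, " << size.QuadPart << L" bytes" << std::endl;
          } else {
              std::wcout << L"Missing or inaccessible, error " << GetLastError() << std::endl;
          }
          return 0;
      }

      If that data were pinned, the existence check and size/timestamp reads would never have to hit the cloud.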
  21. #1 happens as soon as I copy maybe 100 GB, while the cache drive still has plenty of room.
      #2 I will try to log it next time I experience it.
      #3 I doubt this low prefetching will work, but I will try. It seems to work a lot better! I can now get 3 files running at the same time with .606. I think larger chunks will be the final fix to get data down faster for me, but besides that everything is pretty much perfect. Awesome work!!
      #4 I'm sad that it seems you won't prioritize optimizing for high speeds, when it won't affect users with lower speeds. Remember that in Europe it is normal to have 100 Mbit+. Over the next years speeds will get higher and higher for everyone, and the increased chunk size will matter more. I'm also pretty sure that the throttling might be one of the issues making some of the other issues worse, due to the extra wait time for a chunk. 100 MB chunks also showed perfect results earlier, but back then we just didn't have the minimum download size and therefore got issues with too many downloads of the same file - that issue is gone now. (Some rough numbers on chunk size follow below this post.)
      #5 Sounds great.
      #6 Will try that when/if I encounter the issue again.
      Hopefully you don't get annoyed about my requests - I love CloudDrive, just trying to make it better.
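      To put rough numbers on the chunk-size point in #4 (the per-request latency is just an assumption; ~600 Mbit/s per thread is what I saw with the old 100 MB chunks):

      #include <cstdio>

      int main() {
          // Back-of-the-envelope only: time per chunk = fixed API latency +
          // transfer time. Larger chunks amortize the latency and need far
          // fewer API requests per GB, which is what gets throttled.
          const double apiLatencySec = 0.3;           // assumption, not a measurement
          const double throughputMBs = 600.0 / 8.0;   // 600 Mbit/s per thread -> 75 MB/s
          const double chunkSizesMB[] = {1, 20, 50, 100};

          for (double mb : chunkSizesMB) {
              double perChunk  = apiLatencySec + mb / throughputMBs;  // seconds per chunk
              double effective = mb / perChunk;                       // MB/s including latency
              std::printf("%6.0f MB chunk: %.2f s each, ~%5.1f MB/s effective, %5.1f requests per GB\n",
                          mb, perChunk, effective, 1024.0 / mb);
          }
          return 0;
      }

      With those assumptions a 20 MB chunk spends over half its time on request overhead, while a 100 MB chunk needs roughly a fifth of the API calls per GB.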
  22. I have used Ultracopier, but I hate having a program running as a service. I like to keep my server running with as little software installed as possible. But of course, for now that is a workaround. Regarding the files, he mentioned it was complicated, but never said it couldn't be done. The system must somehow know which chunks belong to which file - otherwise it wouldn't know where to begin and where to stop. My guess is that they keep info on this in the first chunks on the drive. If prefetching on the first file is possible, then they must have gotten the starting location from somewhere - when another file is opened they have a second location and should be able to run them individually. (A small sketch of how I picture offsets mapping to chunks follows below this post.)
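      This is only my assumption of how a block-based drive like this works: the filesystem maps a file to byte offsets on the virtual disk, and an offset presumably maps to a chunk by simple division. The chunk size and offsets below are made up.

      #include <cstdint>
      #include <cstdio>

      int main() {
          const uint64_t chunkSize = 20ull * 1024 * 1024;            // 20 MB chunks (example)

          // Two files being read at different offsets on the virtual disk.
          const uint64_t fileAOffset = 137ull * 1024 * 1024 * 1024;  // file A starts ~137 GB in
          const uint64_t fileBOffset = 512ull * 1024 * 1024 * 1024;  // file B starts ~512 GB in

          std::printf("File A read -> chunk #%llu\n",
                      (unsigned long long)(fileAOffset / chunkSize));
          std::printf("File B read -> chunk #%llu\n",
                      (unsigned long long)(fileBOffset / chunkSize));
          // Each open file has its own current position on the disk, so in
          // principle each could get its own prefetch starting point.
          return 0;
      }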
  23. Thought I would give an update on .605 and the issues I still have:
      1# Like mentioned above, when copying to the drive (dedicated 512 GB SSD, with nothing else running) the drive gets locked to running none or close to no threads. An option to throttle copies to the drive would be great, so we can limit the copy speed depending on our cache drive and still have an available drive meanwhile.
      2# When prefetching, it sometimes seems to stop in the middle, then resume. Example: 80 MB prefetch with 20 MB chunks, #1, #2, #3, #4. Initially, as it reads #1 it starts on #5, but suddenly it can reach #9 and "forget" to start on #13, waiting until #12 is read before starting on #13. This gives issues with data not being available sometimes and requires me to load my data package all over. I could imagine that if it was a video, it would cause a lag.
      3# Opening multiple files has huge issues. Prefetching is currently set up to run on a total MB amount, which means that the first file in theory could have prefetched the total amount, meaning that any new files are unable to prefetch forward. To me the ideal solution would be to make the system work with a prefetch value per file (as you can identify files in the drive, you should know which chunks relate to which file), so that each file would prefetch the value set. So an 80 MB prefetch would be 160 MB with 2 files running, 240 MB with 3, and so on. For me at least, the issue is not the bandwidth, as the prefetch is running like 1 thread at a time when it reaches its max, so there should be plenty of bandwidth available for most people (if not, it is because they are saturating their line anyway, which is not the fault of CloudDrive). (A small sketch of the per-file idea follows below this post.)
      4# CONSTANT throttling from Google due to userRateLimitExceeded (too many API calls, I guess). High-speed connections need way bigger chunks. I could imagine that some of these throttles are making some of the above issues worse, as they cause wait times for threads because you put a delay on the retry. The download time for a 20 MB vs. a 100 MB chunk for me on a 1 Gbit line really doesn't differ much (previously, when we had 100 MB chunks, Google gave me around 600 Mbit download on a thread, making it possible to utilize my entire 1 Gbit line without throttling) and therefore wouldn't increase the latency - all the latency I currently have is the API request time from Google, which is the same whether it is 1 MB or 100 MB chunks.
      5# You are not able to see your attached drives in the provider overview. Running with multiple accounts, it would be great to see which accounts I have drives mounted from (perhaps marked with a green color?) - you currently show attached drives in other places; marking the currently attached would also be nice.
      6# Had the issue with chunks being marked as success at 0% (see the last page). I don't know if it was fixed, but if not, it might still be an issue! Nothing would get uploaded meanwhile, and it required a reboot to get fixed.
      7# Chunks can sometimes sit stuck at 100% in technical details for 5-10 seconds before moving to the completed stage.
      8# I have seen threads being stuck in the state "ChecksumBlocksReadCopy", spending CPU and blocking new threads.
      These are the current issues I am seeing, but as you know, I will most likely find more. But only in the spirit of making the application better.
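      Just to make the per-file prefetch idea in 3# concrete - this is only a sketch of the behaviour I'm asking for, not how CloudDrive actually works internally; all names and numbers are made up:

      #include <cstdint>
      #include <cstdio>
      #include <map>

      // Sketch: each open file gets its own prefetch window instead of all
      // files sharing one global budget, so one fast reader cannot starve
      // the others.
      struct PrefetchState {
          uint64_t nextOffset  = 0;   // where the next prefetch read would start
          uint64_t windowBytes = 0;   // how much is currently prefetched ahead
      };

      int main() {
          const uint64_t prefetchPerFile = 80ull * 1024 * 1024;  // 80 MB per open file
          std::map<int, PrefetchState> openFiles;                // keyed by file handle

          // Three files are open: the total allowed prefetch scales with the count.
          for (int handle : {1, 2, 3})
              openFiles[handle] = PrefetchState{};

          uint64_t totalBudget = prefetchPerFile * openFiles.size();
          std::printf("Open files: %zu, total prefetch budget: %llu MB\n",
                      openFiles.size(),
                      (unsigned long long)(totalBudget / (1024 * 1024)));
          return 0;
      }

      So with an 80 MB setting, 2 open files would be allowed 160 MB in total and 3 files 240 MB, exactly as described in 3#.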