Covecube Inc.

All Activity


  1. Today
  2. While it claims the background service isn't working, something does seem to work. The drive letter that contains the pooled disk still exists. It shows filenames and it shows the correct size of all the pooled drives' capacities added up. But the capacity isn't correct, because a drive crashed and is no longer part of the pool. I also removed files from the remaining pool disks (manually). So it looks like what it shows is just a memory of what it knew before the crash.
  3. I was able to get past this by uninstalling again and then reinstalling. Not sure if this will help you.
  4. Yesterday
  5. I'd deleted part of what I'd written... but obviously not quickly enough.
  6. If there is x2 duplication then you need two SSDs for the SSD Optimizer/write cache. I would advise against using an OS drive for data. But certainly you could do what you propose with 2 SSDs. It can even be done with one if you use hierarchical Pools, but I would advise against that too. What I do not know is whether you can use two SSDs as cache _and_ designate the same ones as targets for, say, *.db files. Chris would know, I guess.
  7. Having just had a quick test, in terms of setting up a rule it's dead simple, & you then get the same options for each rule as you do for directories in terms of drive usage & %ages. However, having checked, what you don't get is anything additional in the duplication options - so each file clearly still inherits the duplication from its parent directory. So that means that with 2x duplication on the parent folders you'd have to limit the rule to using the SSD + one other drive to get the read boost on all of the db files... (& then disable the SSD as a storage option for all of the directories) ...I believe we know for certain from other threads that DP will prioritise reads from the fastest drive(s) with Read Striping enabled.
  8. OK, I will try to change the number of upload threads. I will keep you updated, because these warnings occur every time I send something to the drive. Thank you very much.
  9. That's just a warning. Your thread count is a bit too high, and you're probably getting throttled. Google only allows around 15 simultaneous threads at a time. Try dropping your upload threads to 5 and keeping your download threads where they are. That warning will probably go away. Ultimately, though, even temporary network hiccups can occasionally cause those warnings. So it might also be nothing. It's only something to worry about if it happens regularly and frequently.
  10. srcrist

    Data corrupted..?

    Right. That is how chkdsk works. It repairs the corrupted volume information and will discard entries if it needs to. Now you have a healthy volume, but you need to recover the files if you can. That is a separate process. It's important to understand how your file system works if you're going to be managing terabytes of data in the cloud. The alternative would have been operating with an unhealthy volume and continuing to corrupt data every time you wrote to the drive. Here is some additional information that may help you: https://www.minitool.com/data-recovery/recover-data-after-chkdsk.html
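    For reference, a minimal sketch of the usual sequence from an elevated command prompt (X: is just a placeholder for the affected volume):

        chkdsk X:          (read-only scan; reports problems but changes nothing)
        chkdsk X: /f       (fixes file system errors; orphaned entries may be discarded)
        chkdsk X: /f /r    (also scans for bad sectors and recovers readable data; much slower)

    Anything chkdsk salvages as orphaned fragments usually lands as .CHK files in a FOUND.000 folder on that volume; recovering complete files beyond that is the separate step where dedicated recovery software comes in.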
  11. I am not certain, but to the extent a program postpones execution until a write is completed, I would expect it to receive the write-finished confirmation only when all duplicates are written. One thing to remember is that in DP there is no primary/secondary concept; all duplicates are equal. Well, I think DP is intended to support this, but it may not be that straightforward to get working easily.
  12. LicQuyd

    Data corrupted..?

    Funny, all was working except two folders, so I ran chkdsk, saw some errors, then chkdsk /f, and bam, all messed up.
  13. Hi guys, I have a problem with CloudDrive and I hope that those who are more experienced than me can help me. Thanks. I use CloudDrive with Plex and I have about 10 to 30 simultaneous streams, I have a 1 Gbps / 1 Gbps connection, and the operating system and metadata both run on SSD, and I never have problems using Plex even when I reach the maximum peak of 30 streams. It's different when I try to upload: even with only 1 stream, CloudDrive gives me this error. After the error I find all the files loaded correctly on the drive, but I don't understand what error it is and whether it can cause other problems. My configuration is this: 30 TB drives in NTFS, chunk size is 20 MB, and the cache is on SSD and set to 50 GB. Download threads: 9, Upload threads: 9, Background I/O: yes, Upload threshold: 1 MB or 5 minutes, Minimum download size: 20 MB, Prefetch trigger: 20 MB, Prefetch forward: 180 MB, Prefetch time window: 30 seconds. Probably something is wrong in the configuration, but I don't understand why I never get these warnings if I don't upload. I thank in advance those who can help me.
  14. Thanks for finding this & proving me wrong - as, along with giving the OP proper advice, it's honestly really useful to learn something new. Yeah, I can't think of an application for it for my personal use, but it's much better to be aware of useless (to me) capabilities than believe a load of nonsense.
  15. I'm truly sorry, as it clearly can be done. I won't delete the previous posts, but I will strike through everything that's incorrect so as not to confuse anyone.
  16. @Umfriend, the program name was mentioned in the last sentence of my previous comment. > if you have duplication then it is a bit more complex as you need two SSDs for this to work, unless the .db files only need to be read fast and writes can be "slow" Yes, duplicating the .db files to one more drive could be handy, but not necessary, I think. I would expect DP not to delay a write to a "primary" destination just because the "replication" drive is slower.
  17. I don't think that is correct. It seems to me that you can cause DP to place all files that match *.db on a single HDD/SSD (that is part of the Pool, though). This can be done through the File Placement options: https://stablebit.com/Support/DrivePool/2.X/Manual?Section=File Placement And that would allow for the use of an SSD as part of the Pool dedicated to *.db files only. It is a bit complex to set up (see e.g. the manual linked above); I can't say I would look forward to setting this up, but it is intended to support what you want, I think. Of course, if you have duplication then it is a bit more complex, as you need two SSDs for this to work, unless the .db files only need to be read fast and writes can be "slow". I am curious, though, as to what program you are using that has these *.db files.
  18. I thought I'd made myself clear, but DP also cannot put selected file types within a pool on selected disks. The ONLY thing that you can do is to tell it to put a folder on 1 (or more) drive(s) - which it will carry out until there's not enough space on the drive(s). So - D:\DBfiles\[all of the *.db files] - could be on a specific drive (or drives). But with a structure akin to - D:\MediaFiles\Media File0000001\[bunch of files, inc a *.db file] D:\MediaFiles\Media File0000002\[bunch of files, inc a *.db file]... ...D:\MediaFiles\Media File9999999\[bunch of files, inc a *.db file] - then EVERYTHING in the D:\MediaFiles\ folder hierarchy would follow the same drive limitations. So, again, you would need to look at the documentation for, or discuss it on a forum about, Zer0net to see if it's feasible to do the former... ...as there is no solution within DP that will move ONLY the *.db files; UNLESS you can set it up to put them in a separate folder (or folders) from the media files. Otherwise, the only other option I can think of that 'might' work for what you're doing is to look at SSD caching s/w to see if any of it can meet your needs. So something where you can set up an SSD to cache the most used data from a specific HDD (or DP pool or array) 'might' work - since they would normally tend to ignore very large files - which would then tend to prioritise your *.db files... ...but if you could find something that was more explicitly controllable then it would obviously be better.
  19. I'm having a similar issue. I tried to uninstall it, but it would not completely uninstall… I reinstalled and now it says the background service failed to start and to try restarting the computer. I restarted and it's still not working. Any thoughts?
  20. Thank you both. If the answer is NO (DrivePool cannot selectively put certain file types from certain folders onto my system drive, which is not part of the pool), then I have to add another SSD and make it part of the pool, OR I have to create an encrypted container on my system SSD (which is not part of the pool) to create a virtual drive which I make part of the pool. If DP will relocate these .db files, for example: data/folder1/subfolder_def/randomfilename.db data/folder2/subfolder_abc/randomfilename.db data/folder2/subfolder_abc/nextfolder/randomfilename.db I hope it will also duplicate them to another drive (I will always want any data in the pool to be duplicated to one more physical drive). > @Umfriend perhaps you could relocate all to DP? No, because those data/ subfolders (at various depths) also contain very large media files, and it is currently not practical to store terabytes of data on today's SSDs. I want to relocate the .db files of those data/ subfolders because the app (Zer0net) has built-in search functionality which searches through these .db files, and I want it to be fast (-> SSD); the rest of the data is fine on the HDDs.
  21. Doh, yeah. I wonder whether going to sleep is an event that can trigger a task in Task Scheduler. If so, you could stop the service when going to sleep and start it when waking up, perhaps.
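      For what it's worth, standby and resume do show up as events that Task Scheduler can trigger on: resuming logs Power-Troubleshooter event ID 1 and entering sleep logs Kernel-Power event ID 42, both in the System log. A rough sketch for the wake side (the service name below is only a guess - check services.msc for the exact name on your machine):

          schtasks /Create /TN "StartDrivePoolOnWake" /RU SYSTEM ^
            /SC ONEVENT /EC System ^
            /MO "*[System[Provider[@Name='Microsoft-Windows-Power-Troubleshooter'] and EventID=1]]" ^
            /TR "net start DrivePoolService"

      A matching task triggered on Kernel-Power event ID 42 running "net stop" would cover the going-to-sleep side.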
  22. Maybe I'm wrong, but when waking Windows from standby the service is already started... isn't it?
  23. Hi, I had the same problem. After a search on here it was recommended to go into DrivePool settings and change the duplication settings, then just let the program rebalance; it removes the files from the drive, and once it's empty you run Remove Disk. It worked for me. Sorry, I can't find the post now.
  24. Last week
  25. I can't speak for whatever s/w the OP is using, but taking, for example, most of Adobe, then forcing it to place the temp files/scratch disk/media cache on a decent SSD can make a significant difference; irrespective (within reason of course) of where the main data files are stored... …&, naturally, that SSD doesn't need to be the system drive; which would be particularly relevant if budget limited the capacity of the SSDs you could afford to buy. So, for example, with <=1080p editing in Premiere, I've personally seen no benefit in using anything better than using short-stroked 4 drive R10 HDD arrays for the main video & audio files for a project - but that certainly isn't true for all of the other files. Then, as another example, with 16 threaded batch audio lossless compression/decompression (& sundry tasks), there was no speed difference whatsoever between using 2 reasonable SATA 250GB SSDs in R0 (repurposed 830 Samsungs that I'd bought when they were the new thing as a system drive) - vs using a 1TB 970 Evo... ...but the R0 set up was noticeably faster than using a single SATA SSD... ...& it's significantly quicker to move in the order of 100-300GB of audio from a DP'd HDD to the R0 array, do all of the batch processing I need to, & then move everything back. Now these are just 2 examples of my experience with my main setup of course... …but it's just about illustrating that whilst you're 100% correct that there's no reason why everything couldn't be on any drive type, that's not to say that different storage options can't be more appropriate for different processes/parts of processes. That said, I now have no idea 'if' the s/w they're using will show a material benefit or not - as I originally assumed that there must be a very good reason for what was being proposed - so they were working with something like massive databases that, for some reason, needed to be in a generic *.db format... ...but based on what's since been written I'm really not certain what the gain would be - which is part of the reason for suggesting looking into the specific s/w they're using. Well, I would imagine that on whatever forums are dedicated to the s/w, people would tell the OP if it was a worthwhile proposition.
  26. AFAIK, there is no reason why programs could or should not be located on a DrivePool so perhaps you could relocate all to DP?
  27. Okay, to be clear, DP cannot do this... ...& if whatever s/w you're using *has* to have the db files in the same folder as the data then there's no workaround that I can think of - as either they're all in the respective folders or the db files would be completely useless & your s/w would just create a new one in the respective folders during whatever process it's using. Yeah, with no info given as to what program is creating these files then it's impossible to try to find an answer - so I was simply working on the premise that most s/w allows you to alter the standard directories for specific things... ...however I'd suggest that you either look in the documentation &/or ask on a forum dedicated to that s/w to see if it's possible to relocate all of the db files into a single alt directory; as you're then getting the answer from people who have explicit knowledge.
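      If the s/w does turn out to support a relocated database directory, consolidating the existing .db files under a new root is a one-liner; a rough sketch (paths are placeholders, and it only makes sense once the app has been pointed at the new location):

          robocopy D:\MediaFiles E:\DBFiles *.db /S /MOV

      /S keeps the original subfolder structure under the new root and /MOV deletes the source copies once transferred, so it's worth testing on a small subset first.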
  28. srcrist

    Google Corruption

    Your file data is probably largely still intact. What it appears that people lost was their NTFS file system data itself, which is why so much is missing. The data is still on the disk, but the file system metadata required to access it was corrupted by Google's outage. It isn't a problem with an easy fix, unfortunately. You'll have to try data recovery software to see if it can restore the files.
