fazza

Members · 17 posts · 1 day won
Everything posted by fazza

  1. fazza

    Clouddrive upload issue

    Thanks for the info on the thread feature - I can understand that this could well be a non-trivial thing to implement.

    As for my system, I have run both the built-in Windows memory diagnostics and memtest86, both of which turned up no errors. Chkdsk and sfc from admin cmd prompts also found no errors to fix. However - this is a crusty old workhorse install that's gone from 7 to 10 with various flavours of Insider builds on it, so it'll be very cobwebby behind the scenes. Everything seems fine day to day (this issue aside), but I can well believe that the cruft may be causing weird problems. Some of the crashes may simply be due to me manually restarting the CloudDrive service, which it was not happy with.

    I have been doing some testing though, and as soon as I add more than a single OneDrive provider the issue recurs, so it does seem to be provider related. 3+ Google / Dropbox / Box accounts in a similar configuration do not display this behaviour - it is only when 2 or more OneDrives are added that the issue starts, and it is remarkably consistent. I've also set up CloudDrive on a second PC to try and eliminate device-specific issues and found identical behaviour. Both machines use the same (upload-constrained) internet connection, so that is the common factor.

    Any thoughts are welcome, but I'm not really expecting a fix or workaround at this stage. I'm happy to leave CloudDrive alone for a few months at least, until it's had a bit more time in the oven and some of these things have been worked out via the normal development process.
  2. fazza

    Clouddrive upload issue

    Something like the thread settings sounds like it would help. No drama on the troubleshooter - logs uploaded.
  3. fazza

    Clouddrive upload issue

    Sounds very likely! I've realised that this basically makes CloudDrive unusable for me, as it ties up the full upload bandwidth of my connection unless micromanaged. The troubleshooter is erroring out uninformatively while collecting the error logs ('A task was cancelled'). I can compress and send the contents of C:/ProgramData/Stablebit Clouddrive, since that seems to be what is collected anyway?
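    For anyone else gathering the same data by hand, here is a minimal Python sketch of zipping that folder; the folder path follows the one mentioned above, the output name is my own choice, and files locked by the running service may need the service stopped first.

```python
# Minimal sketch: zip the CloudDrive service data folder so it can be sent
# with a support ticket. The source path follows the post above; adjust it
# if your install uses a different folder name.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path(r"C:\ProgramData\StableBit CloudDrive")  # assumed location
DEST = Path.home() / f"clouddrive-logs-{datetime.now():%Y%m%d-%H%M%S}"

if __name__ == "__main__":
    if not SOURCE.exists():
        raise SystemExit(f"Folder not found: {SOURCE}")
    archive = shutil.make_archive(str(DEST), "zip", root_dir=SOURCE)
    print(f"Wrote {archive}")
```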
  4. Outstanding, thanks Drashna. I had mounted the drives to folder paths already but did not realise this allowed me to use the BitLocker management page to unlock. That's a much, much easier alternative. Greatly appreciate the info as always.
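    For reference, folder mount points can also be set up outside Disk Management; the sketch below is a rough Python wrapper around mountvol, with a placeholder volume GUID and a hypothetical mount folder, run from an elevated prompt.

```python
# Rough sketch (placeholder values): mount a volume at an NTFS folder path
# instead of a drive letter. Run "mountvol" with no arguments first to list
# the \\?\Volume{...}\ names and substitute the right one below.
import subprocess
from pathlib import Path

VOLUME_NAME = "\\\\?\\Volume{00000000-0000-0000-0000-000000000000}\\"  # placeholder
MOUNT_POINT = Path(r"C:\Mounts\PoolDisk1")  # hypothetical mount folder

if __name__ == "__main__":
    MOUNT_POINT.mkdir(parents=True, exist_ok=True)
    subprocess.run(["mountvol", str(MOUNT_POINT), VOLUME_NAME], check=True)
```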
  5. Hello, I've noticed an odd issue on my clouddrives, consisting of four 1 TB OneDrive clouddrives pooled together using DrivePool. For some reason uploads simply do not happen unless I restrict the upload threads to a single drive (i.e. untick the upload threads checkbox in Performance for all except a single drive). Once I do this, that drive completes its uploads in a reasonable timeframe (a couple of hours), after which I can move on to the next, etc. If I leave all 4 drives with upload threads assigned, the amount of data to upload never seems to diminish (even after leaving it for 3-4 days). Bandwidth is definitely being used during this, so CloudDrive is uploading something.

     This constant management of upload threads obviously makes managing the drives a far more manual process than it's designed to be. I wondered if there was a provider-specific issue or perhaps something about my setup that may be causing this, or any logs that may shed light on what is going on? Any insights? Thanks in advance.
  6. Well this has worked out really well. I experimented with 4 drives bitlocked and pooled to give a single local volume, and this *just works*. The only slight pain is that I removed the drive letters from the individual drives, which means that to unlock them I have to go into Disk Management and reassign a drive letter every time. I can't see a way round this without encrypting the system drive, and I don't want anything to auto-unlock, so I've not done that.

     The next part was inspired by ironhead's post above (thanks!) - I now have a large cloud drive which backs up portions of this local pool. Cloud backup with no monthly fee. The initial 'seeding' 2tb upload was done over a fast connection at work and I was worried about bandwidth once at home. However, since CloudDrive only uploads changed parts, the backup process is completely manageable over that connection.
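    The backup step itself is nothing fancy; roughly something like the sketch below, which mirrors a hypothetical folder from the local pool onto the cloud drive with robocopy, so only changed files get rewritten (and only changed chunks re-uploaded).

```python
# Rough sketch (hypothetical paths): mirror a folder from the local pool onto
# the CloudDrive volume with robocopy. Only changed files are copied, so only
# the chunks that actually changed need to be uploaded again.
import subprocess

LOCAL_POOL = r"D:\Photos"            # placeholder folder on the local pool
CLOUD_DRIVE = r"G:\Backups\Photos"   # placeholder folder on the cloud drive

if __name__ == "__main__":
    # /MIR mirrors the tree (including deletions); /R and /W keep retries short.
    result = subprocess.run(
        ["robocopy", LOCAL_POOL, CLOUD_DRIVE, "/MIR", "/R:2", "/W:5"]
    )
    # robocopy exit codes below 8 indicate success (with or without copies).
    if result.returncode >= 8:
        raise SystemExit(f"robocopy failed with code {result.returncode}")
```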
  7. Way simpler than I thought. Guess I'll leave clouddrive for the actual cloud stuff. Many thanks for the help.
  8. Ok, thank you for the info. I'm clearly going about this wrong! What I'm trying to achieve with the second setup is essentially a single large local encrypted volume. I'd like it to be larger than any of my physical disks. From what you've said I need to either: 1) skip CloudDrive and enable BitLocker on the drives to create a single encrypted volume within DrivePool, or 2) use CloudDrive to create local encrypted containers and then pool those within DrivePool. Would either of these produce the result I'm after?
  9. Hi, I've just started to use CloudDrive and am trying to get used to what it's capable of. I have a couple of questions to check my usage of the product and its capabilities. Any insight would be appreciated.

     First, I have created a 4tb cloud drive spanning multiple (9) providers, pulled together with DrivePool to create a single large volume. On this pooled cloud drive, just created, I notice there is over 1gb of data to be uploaded - I presume this is some form of metadata required by CloudDrive and is expected behaviour? If so that's fine, but I was expecting an empty drive to not need any uploading - it may be worth flagging this behaviour for bandwidth-constrained users like myself.

     Next, I would like to create a local encrypted drive using DrivePool and CloudDrive together. There will be a dedicated drivepool on which the encrypted volume will go. Once the volume is created by CloudDrive (using the local storage provider option), can I then remove the drive letter from the pooled drive and use only the CloudDrive-created drive letter? Thanks in advance.
  10. fazza

    temperature error

    Thank you, Drashna. I've done both suggested things, which solved the issue. The 8-digit string is below: G0BEDBZS
  11. fazza

    temperature error

    Hi folks, since installing the new version of Scanner (a day or two ago, maybe) I am seeing drive overheating errors on one of the boot SSDs in my system. The disk itself is clearly running nice and cool (27 deg tops), but Scanner is flagging it as a warning. I assume this is because the SMART data is reporting a max operating temp of 0 deg for some reason. I've silenced the warning for now, but I don't like the fact that Scanner is logging a warning when the disk is clearly fine. Is there a way to correct this operating range, or is there something else I can do about it? I've attached a screenshot of Scanner showing the temp in red. Many thanks
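    For comparison with what Scanner shows, the drive's own SMART report can be read directly; below is a minimal Python sketch, assuming smartmontools is installed and with a placeholder device name.

```python
# Minimal sketch (assumes smartmontools is installed): read the SMART
# temperature reported by the drive itself, to compare with what Scanner
# displays. The device name is a placeholder; adjust it for your system.
import json
import subprocess

DEVICE = "/dev/sda"  # placeholder device

def read_temperature(device):
    out = subprocess.run(
        ["smartctl", "-j", "-A", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    # smartctl's JSON output includes a top-level "temperature" block
    # when the drive reports one.
    return data.get("temperature", {}).get("current")

if __name__ == "__main__":
    print(f"{DEVICE}: {read_temperature(DEVICE)} °C")
```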
  12. fazza

    Help with SMART errors

    Many thanks, I've decided to get them both replaced based on what you've said. I'm guessing this sort of return decision will become easier to gauge with experience - at the moment, even with technical explanations of the errors, it feels like I'm flying blind. Anyway... so in less than a month Scanner has prevented significant data loss on one system and potentially the complete failure of the boot drive on another. Impressed!
  13. Hi, I installed Scanner a few weeks ago and have given my systems a little while to settle down. I am now seeing a couple of SMART errors thrown by Scanner and would appreciate any insight on whether I should be jumping up and down or not.

     Error 1 is on the boot disk (250gb Seagate 2.5") of my newly built NAS - Scanner is flagging up 2 'recalibration retries', both times when the system restarts. I would not be concerned, but it is a brand new drive, and none of the other 10+ drives monitored by Scanner have a value above 0.

     Error 2 is on a data drive (3tb Seagate) in a different machine. The reallocated sector count has jumped from 10 to 25000+ in a few hours, and the drive has been disappearing from Windows. This seems obviously serious and a good candidate for replacement.

     From my imperfect research, the drive showing error 2 is clearly going bad - but should I be worried about error 1? Lastly, if there is a resource I should be referring to that would help me interpret the SMART results without cluttering up the forum, please point me towards it! Thanks in advance.
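    For what it's worth, the kind of jump described for error 2 can also be watched from a small script; the sketch below is a hypothetical helper, assuming smartmontools is installed, that saves the Reallocated_Sector_Ct raw value between runs and warns when it has grown.

```python
# Rough sketch (hypothetical helper, assumes smartmontools): snapshot the
# Reallocated_Sector_Ct raw value (SMART attribute 5) and warn when it has
# grown since the last run - the pattern described for "error 2" above.
import json
import subprocess
from pathlib import Path

DEVICE = "/dev/sdb"                   # placeholder device
STATE = Path("reallocated_sdb.json")  # hypothetical state file

def reallocated_count(device):
    out = subprocess.run(["smartctl", "-j", "-A", device],
                         capture_output=True, text=True, check=False)
    table = json.loads(out.stdout)["ata_smart_attributes"]["table"]
    return next(a["raw"]["value"] for a in table if a["id"] == 5)

if __name__ == "__main__":
    current = reallocated_count(DEVICE)
    previous = json.loads(STATE.read_text())["count"] if STATE.exists() else current
    if current > previous:
        print(f"WARNING: reallocated sectors grew from {previous} to {current}")
    STATE.write_text(json.dumps({"count": current}))
```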
  14. Thanks. The remeasure did not change anything, but after a reboot it is now reporting usage correctly. I'm really pleased with the software - it does exactly what it says on the tin with absolutely no drama and is usable for a relative novice like myself. And the support gives me a lot of confidence moving forward. Apologies for the stupid questions, and thank you very much for the help.
  15. That's great, thanks for the clarification. One further question (not sure if this should go in its own topic): I've copied ~500gb of data to a separate NAS which is also running DrivePool (4x4tb drives, one big pool). I've got duplication turned on for everything except the recycle bin folder and the System Volume Information folder (manually turned off for those). Having copied ~500gb, the DrivePool window now shows 613gb duplicated files, 205b unduplicated, and 415gb other (screenshot attached). Again, from what I've read and has been explained elsewhere, I would expect some other data, but I can't work out why my other data is so huge, and I would like to figure out what is going on.

      Other info: the drives were blank and formatted before being added to the pool, so no pre-existing data. There is nothing on the individual drives making up the pool apart from the hidden PoolPart folder. Can you provide any insight? Can I give any other information that would help? Thanks
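    While trying to pin this down myself, a rough Python sketch like the one below (hypothetical helper, placeholder drive letters) can total what is actually sitting inside each hidden PoolPart folder, for comparison against the figures in the DrivePool window.

```python
# Rough sketch (hypothetical helper): total the bytes actually stored inside
# each hidden PoolPart.* folder, to compare against the duplicated /
# unduplicated / other figures in the DrivePool window. Drive letters below
# are placeholders for the four drives in the pool.
from pathlib import Path

POOL_DISKS = ["E:\\", "F:\\", "G:\\", "H:\\"]  # placeholder drive letters

def folder_bytes(root: Path) -> int:
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

if __name__ == "__main__":
    for disk in POOL_DISKS:
        for poolpart in Path(disk).glob("PoolPart.*"):
            size_gb = folder_bytes(poolpart) / 1024**3
            print(f"{poolpart}: {size_gb:.1f} GiB")
```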
  16. Thank you, Drashna. It looks like my reading wasn't as exhaustive as I thought! Just to make sure I've understood you correctly: by identical locations you mean the folder structure, yes? That's fine, because my folder structures are identical for the dupes as well. Given that, if I add the drives to the pool and follow the directions for seeding my existing content, then when DrivePool runs a duplication pass it will recognise that there are two copies already and simply rebalance for optimal disk usage? If so that's perfect and will save me a lot of headache.
  17. Hi all, I'm currently in the process of moving my storage management over to DrivePool. I've read up pretty exhaustively and have been trying the software for a little while now. It's going on my old media PC with 8 drives of varying sizes. The PC currently contains assorted media that I have manually duplicated over the years onto different spindles. Basically, 90% of the content is already duplicated. Managing all this manually is an absolute nightmare, hence my turning to DrivePool.

      The question I have is this: if I whack all these existing drives into a new pool, is there a way to make DrivePool spot that there are existing duplicates and rebalance as appropriate? Or do I need to back all the files up onto another machine, wipe my old drives, create a new blank pool on the media PC and then copy them back to let DrivePool handle the duplicating/balancing on its own?

      Edit: I'm not fussed about which drives the data resides on - I'm happy for DrivePool to manage that for me as long as everything is duplicated on a different spindle somewhere.

      I'd rather not do the second option, because with 6+ TB of data and a glacial LAN (don't ask) it'll take literally weeks to do the file copying, but from my understanding of how the pools work it's the only option. Any help gratefully received!
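    For the first route, the seeding directions boil down to getting the existing files inside each drive's hidden PoolPart folder; the sketch below is my own hypothetical helper with placeholder paths that does the move for one drive - see the official seeding directions for the surrounding steps (stopping the service, re-measuring afterwards).

```python
# Rough sketch (hypothetical helper, placeholder paths): move existing media
# folders on one pool member into that drive's hidden PoolPart folder, which
# is the core of the "seeding" approach referenced above. See the official
# seeding directions for the surrounding steps.
import shutil
from pathlib import Path

DRIVE = Path("E:\\")                 # one of the existing media drives (placeholder)
FOLDERS_TO_SEED = ["Movies", "TV"]   # hypothetical top-level folders

if __name__ == "__main__":
    poolpart = next(DRIVE.glob("PoolPart.*"))  # hidden folder created by DrivePool
    for name in FOLDERS_TO_SEED:
        src = DRIVE / name
        if src.exists():
            print(f"Moving {src} -> {poolpart / name}")
            shutil.move(str(src), str(poolpart / name))
```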