Posts posted by ironhead65

  1. Chris,

    Thanks!  I'll check this out soon.  For now, I just removed all the Box drives and it seems to have made everything happy again.

    It may also have been causing a LOT of pool inconsistencies (where reformatting, or just purging and re-adding, was the only way to "fix" the DrivePool issues with these cloud drives).  I cannot be certain, as I never saw a clear "I did this thing, then DrivePool went wonky" sequence.

    So, I have not posted about it.

    For now, I have been using ONLY Google Drive and removed ALL other drives.  That means my space dropped quite a bit, but with only 11 drives (plus a few SMB), my processor use has dropped from being pegged at 100% for DAYS on end to rarely getting over 10-20%, and when it does, it is only for seconds.

    I think in total, just to push things, I had 44 drives in there.  That was unhappy.  I am thinking that even with the UPLOADS paused, there was something else going on at the same time?  Pinning data, maybe?

    For now, with the 11 drives, I have not had to run my script (which I never uploaded to you, as I've found issues when processor use is HIGH...the script fails in a strange way).  Maybe I should post it privately to you just so you can see what I was trying to accomplish (it seemed to help quite a bit)?

    Thanks!

  2. Hi all,

    Been experimenting, and I think I found a bug; however, I did not want to do anything until I ran it by the community.

    I signed up for a free Box.com drive.  You get 10GB of space.  What they do not tell you is that you seem to actually have less than that.  My guess is that they count space the way hard drive manufacturers do (decimal GB rather than binary GiB), and the real usable size was about 9.7GB (thereabouts).
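
    For the curious, the unit arithmetic I am guessing at looks like this (a rough sketch - the exact accounting Box uses is my assumption, not anything they document):

    ```python
    # Suspected unit mismatch (my assumption): the quota is counted in
    # decimal gigabytes (10^9 bytes), while drive tools report binary
    # gibibytes (2^30 bytes).
    advertised_bytes = 10 * 10**9            # "10 GB" counted the decimal way
    binary_gib = advertised_bytes / 2**30    # the same bytes in binary units
    print(f"{binary_gib:.2f} GiB usable")    # ~9.31 GiB - less than 10, in the
                                             # same direction as the ~9.7GB I saw
    ```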

    I found this by using the drive in a DrivePool.  The pool got saturated, and that drive had over 9.7GB of data on it - that is, the drive was over 90% full.  As CloudDrive was trying to upload, it seemed to get stuck in a loop.  After I noticed that this ONE drive was taking ALL my upload bandwidth, I hit PAUSE through the new PAUSE button.  I validated that the settings changed (Manage Drive -> Performance -> Uploads UNTICKED).  I also tried the "All drives PAUSE" under the pulldown (upper right Gear).  Nothing I did could stop the upload from happening.  I was sure the upload really was happening, as pulling the ethernet cable from the VM stopped it (judging by the upload speed value being shown).

    After I plugged the ethernet back in, and a few hours later (I honestly do not know how long it was), I logged back into the machine and looked.  I had a message in the message ticker that stated the drive was FULL and the upload attempt had failed 260 times.  EEEK!  I also saw at least 1 other Box drive listed in the log.

    I then tried to STOP and RESTART the CloudDrive service.  This only seemed to anger the other drives (now they had GBs of data out of sync - this was being addressed in a CloudDrive thread regarding rebooting and seemingly getting this error, which was brought to a conclusion with a TR/SR being submitted, so I did not bother...well, bothering anyone about it).  All that aside, this drive was NOT going to stop trying to upload.  I then tried a reboot.  Still no change.  For a while, I just turned its speed WAY down.  That let the other drives "catch back up" after the service stop and the reboot.

    At this point, I think I have discovered the issue.  I changed a few settings in DrivePool to fill drives to 80% instead of 90% (just to be safe).  I also mapped my Box accounts to drive letters, then MOVED an appropriate amount of data off so I could use the resize feature and set each drive to 9.7GB.  It seems just doing the move has made CloudDrive happy, and EACH drive that was STUCK like this has STOPPED trying to upload.  I have only tested this with Box (hence the post here).  It might apply to all providers - however, there was a pretty long check when I SET UP the Google drives; if I put in a value that was too large, the original drive creation would fail.  I would think it is the same with Box?

    I am on the latest beta 1.0.2.963

     

    So, has anyone else seen this?  Any tips?  Is this just Box?

    Thanks!

  3. Chris,

    I just updated to the latest beta...an UPLOAD ALL PAUSE feature!  I really slacked and did not send you my script!

    This is going to make things so nice!  I'll add that into my script; with the original method, I also had to validate the settings in the Performance window.

    Using the new mechanism will be so much faster!  I check the pixel color around the PAUSE button: if grey, it's paused; if white, not paused.  Loop.  So much easier than clicking buttons and looking around for a popup!  Now, if you move things around, I only have to adjust 1 location instead of every button location!  Same with the color.  This might even make it easy enough to just check the window size and calculate where the buttons are.  That would REALLY make this fancy!
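
    In case it helps picture it, the color check is roughly this (a Python/pyautogui sketch - my actual macro tool is different, and the coordinates and RGB values are made-up placeholders):

    ```python
    import time

    import pyautogui  # sketch only; my real macro program differs

    PAUSE_BTN = (1650, 120)   # placeholder coordinates for a 1920x1080 window
    GREY = (128, 128, 128)    # placeholder "paused" button color

    def is_paused():
        # Sample the pixel at the PAUSE button: grey means paused,
        # white means uploads are still running.
        return pyautogui.pixel(*PAUSE_BTN) == GREY

    def wait_until_paused(poll_seconds=2):
        while not is_paused():       # the "Loop" part
            time.sleep(poll_seconds)
    ```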

    This is great!

  4. Chris,

     

    I never made the support ticket.  I ended up just wiping the drive.

    Basically, from what I could determine, the meta files as well as the folder structure are different between using SMB and a direct drive.  That is, you can't just DUMP data onto a drive over a USB connection to, say, SEED it prior to mailing the drive to someone else, who then attaches it to their network share or FTP for your remote use.  At least not without hand-creating a slew of files and changing the directory structure (the CloudDrive folder on the raw hard drive itself).

     

    I could not wait, as I needed the drive up and running, so I just wiped it and re-created it...which was a bit painful due to how LONG it took to Bitlocker-encrypt the 1.5TB drive...but it is up and running now.

     

    My thought is this should either be noted or changed, or an "upgrade" path/conversion tool should be provided.  I'm guessing, based on what I saw in the metadata files, that the easiest path is to just tell people NOT to do this.

     

    For now, I'll seed my drive using a local FTP server.  Then I can mail the drive to my remote site and just have them attach over FTP as well (we have a VPN between sites, so...FTP is just fine).

    I'm thinking the metadata doesn't include THAT much, so at least when moving from one FTP server to another, as long as the folder structure matches, I'll be fine.

    It would be easier to just use SMB; however, because of my shoestring internet, it's probably better to use something more efficient like FTP.

     

    Thanks again for all of your support!

  5. Hi all,

     

    It is possible what I did is not valid, so hear me out.

    I have a local hard drive.  It is attached through a DD-WRT-converted WiFi router.  I access this over CIFS (\\<router ip>\share).

     

    I tested this out using CloudDrive with drive encryption enabled, just to check the speed/performance.  Very nice!  I was limited by the WiFi link - as expected.

    Keep in mind, my reason for using CloudDrive is to get the photos of my kids out of the house in case of fire.  I have long wanted to also get the videos of my kids out of the house - but without paying for the service, I cannot allocate that much space.  Also, I have a shoestring for internet, so it would probably take until the house DID burn down to get everything uploaded.

     

    • As I intend to put this 1.5TB drive in my neighbor's house, I thought, why not also toss on Bitlocker?  Just playing around - not like this is going to make it ANY more secure.  
      • This was taking forever over the WiFi link.
    • My next step was to disconnect the drive and attach it locally to my server.  I then re-created the drive and turned on Bitlocker.  This took a while to finish, as the initial encryption pass of Bitlocker usually does.  It also means that the ENTIRE CloudDrive was written.  So, I have 1.36TB of chunks on that 1.5TB drive.  OK, great!
    • This morning, I disconnected the "local" drive by unplugging the USB (using proper USB eject).  Then I plugged the router back in and attached the drive to it.  While I waited for that to boot, I updated the CloudDrive version and reset the server.  I got a notification in RED that said it could not reconnect to the local drive.  I destroyed the local drive, as I do not intend to use it like that again.
    • I tested my DD-WRT router connectivity using Explorer.  Everything was good.
    • I then added a File Share.  CloudDrive was able to connect to the drive.  Here is where I am stuck.  USUALLY, when I reattach existing drives from, say, Google Drive or Dropbox, if my memory serves me...I get a menu that allows me to reattach / enter my decryption key.

     

    With the local share, I do not see that.  I only get the option to create a new drive.  Am I mistaken, and do I only need to go back through the creation steps?  I do not want to lose what I have (Bitlocker took a REALLY long time!!).  What are the steps to re-add this drive?  I can certainly handle the Bitlocker side.  I just want to ensure I get the drive attached without wiping it and starting over.  If I must, I will recreate it, but again, I'd hate to lose what I did with Bitlocker only because of the time investment.  I still do not think it is going to make a difference security-wise, and if someone is going to break the AES that CloudDrive puts on there, OK, have fun looking at pictures / video of my kids.

     

    Thanks!

  6. Also, I just added 2 more cores to the Virtual Machine.  What a piggy...for now, this seems to have REALLY done the most to help any time-outs I was having (above and beyond the LINEAR Upload process).  I now rarely see the machine topped out at 100% CPU.  When there were only 2 cores assigned to the Virtual Machine, I would see it pegged at 100% most of the time.  Now, I get occasional spikes to 100%, but it typically sits at 20-30% when uploading...which doesn't make a LOT of sense to me: a load that pegged 2 cores at 100% should show up as about 50% of 4 cores?  Maybe I math bad...

  7. Chris,

     

    I also believe I just figured out WHY I was hanging before.  Apparently, if you download a single file too many times, Google Drive FORCES you to back off.  Since CloudDrive keeps asking for the same thing, the file is never let out of contention.  So, I do not think it was installing the beta that fixed me; I think it was finally having CloudDrive turned off long enough for it to be allowed to continue.  Now, I am not going to just SAY it wasn't the beta, but I have some Google Drive accounts on here experiencing the SAME issue.

     

    The message I get is: "the download quota for this file has been exceeded"
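
    For what it's worth, the back-off behavior I mean looks something like this (my illustration of the general idea, not CloudDrive's actual retry logic; QuotaExceededError is a made-up stand-in for that quota response):

    ```python
    import random
    import time

    class QuotaExceededError(Exception):
        """Stand-in for the provider's 'download quota exceeded' response."""

    def download_with_backoff(fetch_chunk, max_retries=8):
        # Re-asking for the same file immediately keeps it in contention;
        # waiting longer on each attempt gives the quota a chance to reset.
        for attempt in range(max_retries):
            try:
                return fetch_chunk()
            except QuotaExceededError:
                delay = min(2 ** attempt + random.random(), 300)
                time.sleep(delay)
        raise RuntimeError("still quota-limited after backing off")
    ```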

     

    At any rate, I'm working through things.  I made some HUGE improvements to the script and am validating them now.  I wouldn't want to push a junky program out to you.  It already has enough caveats (no pre-fetching enabled, must be at 1920x1080 resolution).

  8. No problem, it really did not bother me.  I just had a face-palm moment as my script ran through clicking the wrong things over and over.  It also brought up memories.  I did a similar macro/script thing, clicking buttons on a program that could not be automated; I used it to program boards in a factory.  Just to mess with me, the developer moved the buttons around by a nearly unnoticeable amount with each revision of the software.  Eventually, when I caught on, he reverted the layout...

     

    Anyway, the new version seems to have uploaded all my files without issue.  I'll get back to correcting the script and get that over to you once it bubbles back up to the top of my queue.

     

    Thanks again!!

  9. Chris,

     

    I was messing around with this to ensure my cleanup didn't bugger anything up, and I noticed that I have a drive with 1.27GB of data to upload that never uploads.  I tried quite a few combinations of settings (of course enabling upload threads).  Is this something you have seen?  Or is this something I should start a new thread about?

     

    Thanks!

  10. Chris,

     

    I worked it out.  I believe the SAME issue that is causing my re-attach problem is causing the drives to disconnect upon reboot.

    Essentially, they all hammer my internet at the same time (or so it seems, based on some bandwidth measurements...not really accurate, I'll admit, as the data comes in bursts, not a steady stream).

     

    As such, certain drives decide they lost connection and drop into a fail-safe mode and require re-authorization.

     

    I assume this will get corrected once the global threading is added.  I would hope that ALL communication will be able to be controlled this way.  To me, this would correct every issue I've had thus far!  However, I know you already added this on my behalf!
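
    Conceptually, what I am hoping "global threading" means is a single gate shared by every drive instead of per-drive thread limits - something like this sketch (my illustration of the concept, not how CloudDrive is actually built; send_fn is a placeholder):

    ```python
    import threading

    # One shared limit across ALL drives, instead of a per-drive count.
    GLOBAL_UPLOAD_SLOTS = threading.Semaphore(2)  # e.g. 2 uploads total, pool-wide

    def upload_chunk(send_fn, chunk):
        # Every drive's worker funnels through the same semaphore, so a
        # reboot cannot trigger a reconnect stampede that saturates the link.
        with GLOBAL_UPLOAD_SLOTS:
            send_fn(chunk)
    ```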


    Also, I'll add that even if it is NOT everyone hammering to re-connect...it could very well be DrivePool.  I'm pretty sure it goes out after a reboot and re-checks all the drives for everything it needs.  That would certainly cause hammering at reboot as well.

  11. Chris,

    Thanks, I'll fire that over ASAP.  I need to pull a clean copy from my CloudDrive machine.  I was editing it a while ago and added a feature that doesn't work at this time.

    It's also VERY basic at this time, as in, if your screen size does not match 1920x1080, the clicks are not in the right place and the macro won't work.

  12. Chris,

     

    I'm not sure I created a ticket on this.  If I did, I am sorry I did not track its status.  I do not see anything in Gmail; maybe I have no "search skillz".  ;-)

     

    I've reset my PC a few times since and have not seen this again.  I am assuming it has something to do with my updating versions?  I am not sure.  If I see this again, I'll create a ticket.  In the meantime, I'm just patiently waiting for the Global upload limitation feature!  I really am!  My script is mostly working - at least enough that I'm happy enough with everything to wait while the devs update DrivePool (I think that is what you said they are working on at this time).

     

    Thanks again!  I'd consider this closed for now.

  13. Hello all,

     

    A little feedback...this MOSTLY works.  The issue seems to be that whenever I do my compare (for mirroring), there is ~768KB of data written to EVERY drive.  Because of this, the ordered placement does not really help.  If I leave all the drives enabled for upload, as soon as I do the compare, ALL the drives start hammering the internet to try to push their data out.  The BULK is still on the SINGLE drive (using ordered placement).  However, for the first few hours, that 768KB kills my drives and loops back through the undesired process described above (drives requiring re-authorization).

     

    Guess I'm back to using my script mechanism for controlling the upload stream.

     

    I will say that as long as I leave 1 drive running at a time, I can pull FULL bandwidth for that 1 drive!  This really is an awesome add-on for my network!

  14. Hello all,

     

    I had an idea.  I'm going to try the DrivePool plugin "Ordered File Placement", which as I understand it will let me run the disks more like JBOD.  If I disable all optimizations for the DrivePool and just let it JBOD...that might resolve my need for the "single install" macro at this time.  I still would like the feature; however, maybe this will work by keeping most of my updates on one or two drives at a time!

  15. I would agree with Chris.  CloudDrive's ability to upload only changed sectors really makes this a good solution if you want a VeraCrypt container in the cloud.  I've tried this in the past with Google Drive's desktop app and TrueCrypt; I ended up re-uploading the whole file over and over...or so it seemed, way back when I was trying this.  So, I gave up on that for a while (at least until CloudDrive showed up!).
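
    To illustrate why that matters for a big container file, here is a toy sketch of block-level sync (the general idea only - CloudDrive's real chunk format and sizes are different): only chunks whose hash changed get re-uploaded, so touching a few MB inside a multi-GB VeraCrypt container does not re-send the whole file.

    ```python
    import hashlib

    CHUNK = 1024 * 1024  # 1 MiB chunks (toy value)

    def changed_chunks(path, known_hashes):
        # Yield (index, bytes) for chunks whose hash no longer matches.
        # A file-level sync tool would re-upload the whole container instead.
        with open(path, "rb") as f:
            index = 0
            while block := f.read(CHUNK):
                digest = hashlib.sha256(block).hexdigest()
                if known_hashes.get(index) != digest:
                    known_hashes[index] = digest
                    yield index, block   # only this chunk needs uploading
                index += 1
    ```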

     

    Love this piece of software / middleware!  I've offloaded all my most precious files (pictures of the kids, and other unique files that I would never be able to get back if my house were to burn down).  It would be nice to mirror my entire DrivePool...but I cannot see dumping THAT much data over my shoestring internet connection.  Funny thing: I used to think I was fast....but then again, I can still recall the tones used on 300baud modems up to 14.4kbaud!  Then the digital bongs of 56K (I skipped 28.8kbaud).

  16. Hello,

     

    As of this post, I am on version:

     

     

    After battling the many slowness issues I ran into due to the numerous CloudDrives I have, coupled with my dirt-slow internet, I have been very happy with CloudDrive.  I have a macro script that lets the drives run one at a time until each is flushed of "to upload" data.  This gives me pretty good performance as well!

     

    As I discovered early on (on purpose), if you reboot before the drive's cache is 100% flushed out, you have to restart the upload of all the data in the cache.  Fair enough.

     

    Since I was able to make the drives perform properly through my crazy macro programming skillz, I recently got the drives all flushed out to the internet.  So, I decided to reboot the PC.  I wanted to test my macro, backup software, etc. loading up upon reboot, to try to make things automated.  Think upside-down house of cards: everything has to be JUST RIGHT for this to work...but it is only used as a backup dumpsite, so...I'm OK with that.

     

    I was surprised that upon reboot:

    1. EVERY drive needed to be unlocked, then re-authenticated.  They were ALL configured for automatic unlock upon creation.  Even if I made a mistake somewhere, I did not make it on EVERY drive.
    2. EVERY drive LOST my upload configuration.

    I understand that for #2 you would default to upload being enabled.  However, there was a reason I disabled prefetching and changed the upload threshold to [Off] instead of 1MB.  I also had a few drives where Upload verification was re-enabled.  That might have been my fault for missing it on those drives; however, I am mentioning it to ensure it is captured before that information leaves my brain.

     

    Any idea what is going on?  As it takes SO long to unlock and re-authenticate that many drives, as well as to reconfigure the settings on all of them...I'm hopeful there is something I can do about this?

  17. Chris,

     

    Is there anything against my posting my code?  It seems to work now, and I can toss some instructions around it.  It seems to manage things pretty well for me.  It may not be bulletproof, but what do you expect from something written in a basement office?  :-)

     

    I'm sure there are bugs or something, but my linear upload process seems to be working.  I tried it out a few times and the tool SEEMS to be able to manage working through everything I needed it to do.  There are some notes in the code for improvements that I need to add...but I'll get there when I get there.

     

    I can post the script file or put the code into a code block in a forum post.  I'm not asking for $$ or anything, and people can feel free to use the code.  I'm just trying to provide some assistance to others that, like me, needed a way to keep the number of threads down (I mean, I got over 3x the performance using this method!).

  18. Thanks for the tip...but since the script is operating on the actual GUI objects, it just fails any time I RDP in and quit out.  I don't plan on needing this feature often...so I guess I'll just connect over the VMware console.  Not a big deal.  My guess is that once the RDP window is minimized or the session is closed, the GUI moves to another session...and for some reason the script program doesn't move with it.

  19. Sounds good.  That was the direction I took.

     

    I have a macro program I wrote that now has a menu.

    It has selections for going through and mirroring the Upload Threads Checkbox setting.

    You can move backwards or forwards by 1 drive, or skip to the first or last drive.

    I added a toggle menu.  Using the database it creates, I display ALL the drives' UPLOAD statuses.  Then you can toggle the status in that menu.  Once you say OK, the macro goes through and pushes those settings to the CloudDrive software (which should then mirror the database).

    I added Enable All and Disable All.

    The last thing I am adding is a Linear Upload selection.  The idea behind that is to just start at the first drive: enable Upload (if not already enabled), then monitor the drive cache pie diagram for insignificant cache left (i.e. can't see any more light blue).  Once that happens, the macro moves to the second drive, rinse, repeat.  In a way, this gives me a base to then add a Round Robin, which would just disable each drive as it goes.
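
    In rough pseudocode, the Linear Upload pass looks like this (a sketch - the two helpers are stand-ins for the macro's GUI clicks and pie-diagram pixel checks):

    ```python
    import time

    def enable_upload(drive):      # stand-in for the macro's GUI clicks
        drive["upload"] = True

    def cache_remaining(drive):    # stand-in for the pie-diagram pixel check
        return drive["cache_bytes"]

    def linear_upload(drives, threshold_bytes=20 * 1024):
        # One drive at a time: enable its uploads, wait until the light-blue
        # "to upload" slice is insignificant, then move to the next drive.
        for drive in drives:
            enable_upload(drive)
            while cache_remaining(drive) > threshold_bytes:
                time.sleep(5)
            # A Round Robin variant would disable the drive here before
            # moving on to the next one.
    ```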

     

    For now, I am leaving it like this, as usually there is 10-20KB of data remaining that needs to be pushed out of cache, and I have to wait for the "size or time" threshold to run down.  Instead of changing those settings, I thought I would just run all the drives to the point of insignificant cache left.  Once the last drive is finished, I can go back through and toggle them all off - except the last one.  This should ensure all drives have enough time to upload.  I think...

    ;-)

     

    I realize this is a GOOFY macro, but it seems to give me what I want, and man, I'm just ripping through my data now that I am either not being throttled or I've reduced the overhead by limiting to 1 drive upload at a time.

     

    Thanks again for all your help!

  20. Chris,

     

    I am smart - sometimes clever.  I am also a programmer...maybe not a great one, but I have my moments.

    Any further hints on what you mean?  I'd be happy to accept that things are NOT supported and would refrain from bugging you if I hit issues.

     

    At this time, I can only think of using something like a macro program to simulate clicks in the GUI.  Are you maybe hinting that there are undocumented mechanisms in the cloud service executable I could use (or, I guess, abuse at this time)?

     

    If you do not want to answer, I understand as... unsupported!
