
KiaraEvirm

Members

Reputation Activity

  1. Like
    KiaraEvirm reacted to fredde in FTPS error   
    Hi,
     
    I'm trying to set up a cloud drive using Explicit FTP over SSL (FTPS) to a FileZilla server and get this error: "Authentication type already set to TLS (534)"
     
    I can connect to the server with the FileZilla client, and it passes the test at ftptest.net. Any ideas?
     
  2. Like
    KiaraEvirm reacted to RJGNOW in Damaged File System Error Will Not Clear   
    Title says it all.... Damaged File System Error Will Not Clear.
     
    1) Removed drive from pool.
    2) Reformatted HDD.
    3) Re-added drive to pool.
    4) Forced Scanner to rescan drive.
     
    Error persists in Scanner.
     
    5) Double-checked with the Windows disk check and it returned OK.
     
    Now what? (I just started a 2nd scan)
     
    I also noticed that even with a corrupted file system, DrivePool was still reading and writing data to that drive. I know that the drive will evacuate on a SMART error, but it doesn't look like that happens with any other error?
     
    Scanner Version : 2.5.2.3074 BETA
     
  3. Like
    KiaraEvirm reacted to trumpeteer in Multiple onedrive accounts?   
    Is there a way to connect CloudDrive to multiple OneDrive accounts at the same time?
  4. Like
    KiaraEvirm reacted to Kraevin in +1 for GoogleDrive for Work support   
    I just tried logging out and back in, still gives the permission error. Using the latest build. 
  5. Like
    KiaraEvirm reacted to dsteinschneider in Looking for recommendation on eSATA card and drive enclosure   
    I'm looking for recommendations for a PCIe eSATA card and 4 or 8 bay enclosure.
     
    Maybe someone might know why I'm having the issues described below:
     
    Back story:
    I have a Rosewill RC-219 connected to a MediaSonic Probox 4 drive enclosure. For 3 years I ran this card and enclosure in an Optiplex GX-620 running WHS 2011 and DrivePool. After about a year and a half of solid performance the Probox started to occasionally disappear. If I cycled its power it would come back without doing anything else. I replaced the Optiplex GX-620 with a Dell Vostro 430 running Windows 7 Pro 64. I also replaced the RC-219 with a new identical card.
     
    The Probox would disappear on the new setup a few times a week but I found if I rebooted every night the problem stopped with our normal use. Recently I started converting all our WTV mpeg2 recorded TV to h.264 MKV on a second machine that reads the movies from this box but saves them on its own drive. Under this use the Probox has been disappearing several times a day. 
     
    I get an Event ID 9. I saw someone online running WHS 2011 with the same PCIe SATA card who flashed a certain BIOS and driver, and the above behavior stopped. I can't replicate their experience because the drivers they used are 32-bit.
     
    BTW, the 4 drives are 4-year-old WD30EZRX-00MMMB0 units (Green 3TB). I ran wdidle3 to stop the heads from parking, but a couple of the drives already had 250K+ load cycles before I figured that out.
     
  6. Like
    KiaraEvirm reacted to Neaox in Expand/Rename existing cloud drives   
    Hi there,
     
    Is it possible to modify existing cloud drives, for instance expanding them in size or renaming them?
     
    Thanks.
  7. Like
    KiaraEvirm reacted to Desani in Error - Files not consistent across pooled drives   
    I have three cloud drives that are all pooled together with DrivePool. I have it set to duplicate the data across all three cloud drives for 3x duplication for redundancy.
     
    Drive 1: 100 TB Amazon Cloud Drive
    Drive 2: 100 TB Gdrive
    Drive 3: 100 TB Gdrive
     
    I went to access a media file on the pooled drive and was unable to play it. The file appeared corrupt. At first I thought it might have gotten corrupted in transit during the initial transfer. Then I checked other files in the same show and other seasons of the same show; all TV episodes for that one show exhibit the same corruption and refuse to play, even after copying the file locally from the cloud.
     
    I manually went into each drive in the pool, found the file, and downloaded it. What I found was that the file was corrupt and would not play on both GDrive volumes, but it worked properly off of the ACD volume.
     
    I believe when I added the show I only had 2 cloud drives, 1 ACD and 1 GDrive. When I added the 3rd drive, I think it replicated from the GDrive that already had the error in the file, thus duplicating the error to the second GDrive.
     
    My question is, how is the file in an inconsistent state across the pool? Shouldn't the file be an exact copy on each volume? I tested removing one of the episodes from both GDrives, and it proceeded to mirror the file; it now works as expected and plays without issue. I would like to be able to tell if there are more shows like this and correct the issue before it becomes unrecoverable. DrivePool should be able to see whether files are in an inconsistent state, and perhaps even prompt me for which version I would like to keep and mirror to the other drives.
     
    I have left the rest of the show in an inconsistent state so that I can assist with troubleshooting how to track down and fix the issue.
     
    OS: Windows Server 2012 R2 x64
    CloudDrive Version: 10.0.0.842
    DrivePool Version: 2.2.0.740
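    For the poster's follow-up wish (finding other files whose copies differ across the pool), one rough self-help approach is to hash each file inside every drive's hidden PoolPart folder and compare digests per relative path. A minimal sketch, not a StableBit tool; the function names and the idea of pointing it at PoolPart roots are the author's own assumptions:

```python
# Hedged sketch: detect duplicated files whose copies differ across
# PoolPart folders by comparing SHA-256 digests per relative path.
import hashlib
import os
from collections import defaultdict

def file_sha256(path, chunk=1 << 20):
    """Stream-hash a file so large media files don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_mismatches(poolpart_roots):
    """Return {relative_path: {root: digest}} for files whose copies differ."""
    digests = defaultdict(dict)
    for root in poolpart_roots:
        for dirpath, _, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, root)
                digests[rel][root] = file_sha256(full)
    # Keep only paths where at least two copies disagree.
    return {rel: seen for rel, seen in digests.items()
            if len(set(seen.values())) > 1}
```

Running `find_mismatches` over the three PoolPart folders would list every file that, like the corrupt episodes, is not byte-identical on all drives.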
  8. Like
    KiaraEvirm reacted to JasonC in Changes to file name slow to propagate on network shares?   
    I've noticed that I fairly often run into a situation where, if I rename a file on a network share, the name change is very slow to show up on other clients on the network. Is this normal with DrivePool? It's definitely not normal with any other Windows share I've ever had. For instance:
    rename a file on client 1
    client 2, if it has the same folder open at that time, doesn't reflect this change. In fact, I'm not sure how long, if ever, it takes to reflect the change. Refreshing the folder doesn't update the file name. It seems I have to leave the folder (up a level, for instance) and then re-enter it to get the file name change reflected.
     
    I currently have the Windows indexer on the server disabled, although I've never seen the indexer responsible for this kind of behavior before. At a guess, Windows isn't being informed of the change, so cached values are sent to other clients until an operation forces an explicit re-read.
     
    Thank you
     
  9. Like
    KiaraEvirm reacted to JohnSpoonz in Empty PoolPart folder - cause of mis-reporting Other?   
    Hi
     
    I've set up a 2-disk pool using a pair of 4TB hard drives. Both drives are dedicated to the pool i.e. have no non-pooled data, yet the statistic displayed in DrivePool reports 1.5TB of Other.
    I did a bit of investigation and there appears to be a rogue PoolPart directory on the 2nd disk which is empty. Could this be the cause?
     
    If so: I tried to delete the empty folder, but Windows wouldn't let me. Is there a quick fix? If not, do you have any ideas what it could be?
     
    Disk 1 - F:
    PoolPart.be5e2572-871d-4287-b954-8fa08274ea08
     
    Disk 2 - G:
    PoolPart.74b7a9ae-35c3-47d7-b3a4-6076082c3e1d
    PoolPart.8129887a-a4f0-4a26-bfd5-dcd223b33c91 <<<< This is the rogue / empty folder
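    For what it's worth, leftover PoolPart folders like this can be spotted mechanically. A minimal sketch, assuming direct access to each pool disk; `empty_poolparts` and the drive letters are illustrative, not part of DrivePool:

```python
# Hedged sketch: list PoolPart.* directories on a disk and flag any that
# contain no files at all (a likely leftover from an earlier pool).
import os

def empty_poolparts(root):
    """Yield PoolPart.* directories under `root` that contain no files."""
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if name.startswith("PoolPart.") and os.path.isdir(path):
            # os.walk visits every subdirectory; any file anywhere counts.
            has_files = any(files for _, _, files in os.walk(path))
            if not has_files:
                yield path

# Usage on the poster's machine (drive letters assumed from the post):
#   for drive in ("F:\\", "G:\\"):
#       for leftover in empty_poolparts(drive):
#           print("Rogue/empty:", leftover)
```

Note that a genuinely rogue folder may still need its hidden/system attributes and permissions cleared before Windows will let you delete it.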
  10. Like
    KiaraEvirm reacted to RobbieH in 8TB Archive showing up as 1.3 TB   
    I'm not sure where else to ask, you all seem to be the type of people that live out on the edge like I do.
     
    I have a Windows 7 VM running in VMWare 6.5. The system currently has two 5TB WD Red drives that I use in a DrivePool for storage. Opening "Computer" shows the two drives as 4.54 TB, all is well.
     
    So, I want to add an 8TB Archive drive. I set up the RDM in VMWare the same exact way I set up the 5TB drives. VMWare sees the drive as a capacity of 7.28 TB. In the Windows 7's VM settings, it also sees the drive there as 7.28 TB, and I have all the settings the same as how I configured the 5TB drives.
     
    BUT
     
    I go into W7 Disk Management and it shows the drive as only 1307.91 GB. I've tried everything I can think of to get this drive to show as 8 TB, but nothing works.
     
    Oh, and it's that way even when Unallocated. I did set it up as GPT (as are the 5TB drives) but still it's just 1.3TB.
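    One plausible explanation for these exact numbers: Windows reports sizes in binary TiB (so a decimal "5 TB" drive shows ~4.55 "TB"), and ~1308 GB is what remains of an 8 TB drive after wrapping at 2 TiB, the classic symptom of a 32-bit LBA path with 512-byte sectors somewhere in the RDM chain. A quick arithmetic check; the 8 TB byte count is an assumed typical value for a drive of that class, not taken from the post:

```python
# Decimal-vs-binary sizing, plus the 2 TiB wrap that matches the post.
TIB, GIB = 2**40, 2**30

five_tb = 5_000_000_000_000            # vendor "5 TB" (decimal bytes)
print(f"5 TB drive in Windows: {five_tb / TIB:.2f} TB")        # ~4.55

eight_tb = 8_001_563_222_016           # assumed byte count of an 8 TB model
two_tib_wrap = 2**32 * 512             # 32-bit LBA x 512-byte sectors = 2 TiB
visible = eight_tb % two_tib_wrap      # only the remainder is addressable
print(f"8 TB drive after 2 TiB wrap: {visible / GIB:.2f} GB")  # ~1308
```

If that is the cause, the fix is usually presenting the drive through a controller/RDM mode that supports 64-bit LBA rather than anything done inside Windows.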
  11. Like
    KiaraEvirm reacted to RobbieH in Hard Drive Errors - Next Steps   
    Hey Christopher,
     
    I've run into a problem that's showing itself in a way I've never seen before, or maybe it has been so long since I've had a problem I don't remember what to do correctly.
     
    First, I'm on 2.5.2.3103 Beta
     
    I have a hard drive that is reporting it is unable to read 104 sectors. I've deleted the corrupted files and rescanned; nothing comes up with errors now. I unchecked the readable blocks and the unreadable blocks and let it rescan, but it is still reporting 104 unreadable sectors. I thought there was a way to mark these as unusable and move on, but I don't seem to be able to get to that point. Is there a way to do this?
  12. Like
    KiaraEvirm reacted to Desani in Feature Request - Assign drive priority for duplicated data   
    I have 3 cloud drives that are all pooled together and are set at duplication x3 for the entire pool so each drive has the same data.
     
    Drive 1: 100 TB ACD
    Drive 2: 100 TB GDrive
    Drive 3: 100 TB GDrive
     
    What I would like to accomplish: when accessing data that is duplicated on all three drives, I want to assign a weight or priority so that reads go to the two Google drives, as they have much better access times and speeds, and avoid the ACD, as it is there just as another level of redundancy.
     
    Ideally this would not be needed if DrivePool were able to read from all the drives at the same time for the file being accessed.
     
    Please let me know if this is a possibility.
     
    Thanks
  13. Like
    KiaraEvirm reacted to skapy in access time   
    Hello,
    I use Google Drive for some videos.
    When I try to stream a video, the folder access time is very high.
    I mean the time from when I open the folder until I see what files are in it.
    How can I improve this?
    (And English isn't my main language.)
  14. Like
    KiaraEvirm reacted to thatvalis in Google Drive Rate Limits and Threading/Exp Backoff   
    That sounds like a great idea if it's possible!
  15. Like
    KiaraEvirm reacted to gj80 in MyDigitalSSD BPX 480GB NVMe SSD - No SMART status   
    I've switched my desktop over to Windows 10 on one of these:
     
    http://www.mydigitaldiscount.com/mydigitalssd-480gb-bpx-80mm-2280-m.2-pcie-gen3-x4-nvme-ssd-mdnvme80-bpx-0512/
     
    CrystalDiskInfo shows the SMART status for the drive, but StableBit.Scanner_2.5.2.3103_BETA says "The on-disk SMART check is not accessible"
     
    The DirectIOTest results are attached. Thanks

  16. Like
    KiaraEvirm reacted to otispresley in How To: Get SMART data passed on from ESXI 5.1 Host   
    @aron, you will be better off getting support in the VMware forums, as we are not ESXi experts here and this has nothing to do with StableBit products. What I do know is that the command you issued lists partitions, so this is one device with 8 partitions on it: the disk you have ESXi installed on. It could be that your other disks are not partitioned and formatted yet, or that your controller is not presenting the disks properly... not really sure. I see that your controller does appear on the compatibility list, but that doesn't necessarily mean that SMART data is supported for it in the product.
  17. Like
    KiaraEvirm reacted to ufo56 in File copy very slow   
    Hello
     
    I have set up a Windows 7 VM (in unRAID) with the latest CloudDrive beta (.797).
     
    Two HDDs: one with Windows, and another for cache (140 GB cache disk). I copied about 50 GB of files to the VM over LAN and then copy-pasted them to the mounted Google Drive share.
     
    The copy/paste transfer speed is ~900 KB/s. What am I doing wrong?
     
    My Internet connection is 300/300, but that shouldn't matter when I copy files to the drive; doesn't it go to the cache first and then upload to GDrive? CloudDrive constantly shows 8 Mbps. It uses only 1 thread to upload, even though I have set it to 10.
  18. Like
    KiaraEvirm reacted to Maurizio in Suggestion basic hardware   
    Dear all,
    after having read a lot about DrivePool, I'm thinking of updating my old WHS 1.
    I have a very limited use case.
     
    1) Every week I sync all my files from my main PC to the server
    2) I would like to set up a redundancy policy by folder (DrivePool should be able to do it)
    3) Every month I make a backup on an external hard drive to keep in a different physical location
     
    Considering I would like 4 drives, 1 SSD for the system and 3 HDDs for the pool (I could use the same ones I have in WHS 1), with 3x duplication for the most important files, what do you suggest for hardware?
     
    Thanks
    Maurizio
  19. Like
    KiaraEvirm reacted to surfyogi in Notice different behavior for Win7 "copy" vs "move" - data recovery   
    I am recovering a drive I pulled out of a hot-swap box (the drive lost the DrivePool partition because I didn't unmount the drive from Windows first).
     
    I now have all the recovered folders/files sitting on a spare drive (thanks to WonderShare Data Recovery) and am slowly copying them back to the pool.
     
    My question is this: is there some difference between using Windows 7 copy vs. move, as far as the DrivePool emulation is concerned? What I have noticed is this (I know how to force Windows to copy or move, as I need to, of course):
     
    a) If I use "copy", I may or may not get a "merge folders" dialog and a "these files are already present, replace or no?" dialog. Often it will just look like it's copying; I come back after some time expecting to see a merge dialog, and I see nothing. No copy going on, no dialogs, nothing. Like it just copied the first few folders and then stopped?
     
    b) If I use "move", on the other hand, it seems to always do a reliable deep transfer of files all the way down the tree, across all folders.
     
    I was not aware I should expect different behavior between these two operations, except of course that files that are moved are deleted from the source drive. I am, of course, copying from the spare drive holding the recovered files to the emulated pooled drive. I assume this is not a bug in Windows; I'm thinking it's a bug in the DrivePool file-system emulation?
     
    I'm on version 2.1.1.561. Anyone else ever notice this behavior?
  20. Like
    KiaraEvirm reacted to Atarres in Any hope for CD on Linux?   
    Just curious if there are plans, or even consideration, of going to Linux? CloudDrive on Linux would be an absolute killer app due to its flexibility and ease of use. Although having a CLI version as well might be preferable for many. Hopefully it's in the cards.
  21. Like
    KiaraEvirm reacted to chcguy88 in Feature request: Cache of all thumbnails + file index on client side.   
    So I have been having an issue using the product. I've been pretty happy with it, but when I access my media archive (sometimes 200+ videos and pictures) I like to use the thumbnail view to make sure I am copying the correct items from my drive. The problem is that with these larger folders, it takes about 2-5 minutes of scrolling down the folder to get the thumbnails to appear. I have a dedicated drive for StableBit, and I would like to cache all those images + file names so that all CloudDrive has to do is copy the file when accessed, not stream every time to generate file information/thumbnails.
     
    Feature request:  Cache of all thumbnails + file index on client side via a local harddrive. This could be implemented as a check box in the settings. This would definitely help with the bandwidth impaired and save time when working with the drive.
  22. Like
    KiaraEvirm reacted to MetalGeek in New pool creating failure   
    New install of Beta .651 on Windows 10; I only rebooted once, not twice. I didn't have a new-pool button, just an option to add drives, so I clicked the + Add link for the D: drive and it hung at 95%. When the pool was never created, I cancelled the creation and tried with the E: drive. This also sat at 95%, then the pool creation appeared to cancel on its own. Now I have no pool, and I cannot add the D: or E: drives; they error out saying they are already in the pool. I only found the article saying I need to reboot twice on Windows 10 after I had tried creating the pool.
     
    How do I remove the pool that exists but I cannot see and start over?
     
    [Edit] - I have rebooted 2x and no change in behavior.
  23. Like
    KiaraEvirm reacted to surfyogi in Folder Duplication - backup file of settings? Settings lost with C: SSD failure   
    I lost my C: SSD drive some time ago. I seem to have lost a good working backup of it as well...
     
    So I installed a new Windows 7 on a new SSD. I now have the pool back up and running, with 1 difference.
     
    I don't know how to restore my Folder Duplication settings precisely as they were.
     
    I have a few questions regarding how drivepool will act for me, so please follow, and I'll be brief:
     
     
    1) Can I save my folder duplication config file (or, where is it and what's it called?) so that I can reload it later if I lose my Windows C: drive (assuming I am using the same version of DrivePool on the new configuration)?
     
    2) What happens to my backed-up folders if I lose my C: drive and have to re-install Windows 7? Will the original duplicated folders still be duplicated across 2 or more drives (for example, if I have 10 drives pooled)?
     
    3) If for any reason I don't set up the duplicated folders immediately after a Windows re-install, will those duplicate folders/files stay as-is, or will they be de-duplicated over time by DrivePool?
     
    I have used this forum many times before, and I don't remember these questions ever being addressed; it may be a special situation. Please point me to other threads if this is a duplicate question.
  24. Like
    KiaraEvirm reacted to Christopher (Drashna) in How the StableBit CloudDrive Cache works   
    This information comes from Alex, in a ticket dealing with the cache. Since he posted a nice long bit of information about it, I've posted it here for anyone curious about the details of *how* it works.
     
    What happens when the disk used for the local cache is being used? What happens when uploading?
     
    The upload cache, the download cache, and pinned data are separate things and are treated as such in the code:
    - The download cache is self-learning and tries to keep frequently accessed blocks of data on the local system. This speeds up access to the drive for, well, frequently accessed data.
    - Pinned data is generally drive and file system objects for the drive. This is kept (pinned) on the system because it is accessed frequently (written every time you create/modify/delete a file, and read every time you read file properties).
    - The upload cache is everything written to the disk that may not have been uploaded to the provider yet. This is explained in a bit more detail below.
     
    The upload cache can (is allowed to) exceed the cache limit specified. It does this because otherwise it would prevent files from being written. We could limit it to the specified size, but that would wipe out the self-learning feature of the download cache. So... not ideal. We do plan on implementing a maximum limit in the future, but for now the max limit is the drive size. However, we do throttle based on the free space remaining (we will always attempt to leave 5 GB free on the drive). As you can see, we get more aggressive the closer we get to running out of disk space.
     
    Now, for data integrity issues, because that is always a concern for us. Despite any issues that may be brought up here, we do everything we can to ensure data integrity. That's our number 1 priority.
     
    This is what happens when you write something to the cloud drive:
    1. All writes to the cloud drive go directly to the local cache file. This happens directly in the kernel and is very fast. If the drive is encrypted, it's encrypted during this part; anything stored on disk will be encrypted. At this point, your cache has some updated data that your cloud provider doesn't.
    2. The StableBit CloudDrive service is notified that some data in the cache is in the "To Upload" state and needs to be uploaded to the cloud provider.
    3. The service reads that data from the cache, uploads it to your provider, and only once it's absolutely sure that the data has been uploaded correctly will it tell the cloud drive kernel driver that it's safe to remove that data from the "To Upload" state.
     
    Now, in reality it can get much more complicated (for example, what happens if new data gets written to the parts that are actively being uploaded?), so this can get really complicated, really fast. But that's all handled by our driver, so let's keep this simplistic view for this example.
     
    So what happens when you pull the plug in the middle of this process? StableBit CloudDrive loses the "To Upload" state. Well, it doesn't really lose it, but it's in an indeterminate state and we can't trust it any longer. In order to recover from this, StableBit CloudDrive assumes that all locally cached data was not uploaded to the provider. It is safe to make this assumption because uploading something that has already been uploaded before doesn't corrupt the drive as a whole in the cloud, while not uploading something that needed to be uploaded would be catastrophic: your data in the cloud would get out of sync with your local cloud drive cache and would be corrupted.
     
    Up to this point I've described how StableBit CloudDrive maintains the data integrity of the bits on its drive, but there's another very important factor to consider here, and that's the file system.
    The file system runs on top of the drive and, among other things, makes sure that the file metadata doesn't get corrupted if there's a power outage. The metadata is the part of the data on the disk that describes where the files are stored and how directories are organized, so it's critically important. This metadata is under the control of your file system (e.g. NTFS, ReFS, etc.).
     
    NTFS is designed to be resilient in the case of sudden power loss. It guarantees that the metadata always remains consistent (by performing repairs on it after a power loss). At least that's the theory; when this fails, that's when you need to run chkdsk. But what it doesn't guarantee is that the file data itself remains consistent after a power loss. So there's that to consider.
     
    Also, Windows will cache data in memory. Even after you finish copying a file, Windows will not write the entire contents of that file to disk immediately; it will keep it in the cache and write out that data over time. If you look in Resource Monitor under the Memory tab, you may see some orange blocks in the chart; that memory is called "Modified". This is essentially the amount of data that is waiting to be written to the disk from the cache (and memory-mapped files).
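    The write/upload/recovery flow described above can be sketched as a tiny state machine. This is an illustrative model only, not StableBit's actual code; the class names and the provider interface are invented:

```python
# Hedged sketch of the "To Upload" logic described in the post:
# writes land in the cache flagged for upload, the flag is cleared only
# after the provider confirms, and after a crash everything cached is
# assumed un-uploaded (re-uploading is harmless; skipping is not).
class CacheBlock:
    def __init__(self, block_id, data):
        self.block_id = block_id
        self.data = data
        self.to_upload = True          # set on every local write

class UploadCache:
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        # Step 1: writes go directly to the local cache, flagged "To Upload".
        self.blocks[block_id] = CacheBlock(block_id, data)

    def upload_pending(self, provider):
        # Steps 2-3: upload each pending block; clear the flag only after
        # the provider call returns successfully.
        for blk in self.blocks.values():
            if blk.to_upload:
                provider.put(blk.block_id, blk.data)   # must succeed first
                blk.to_upload = False

    def recover_after_crash(self):
        # The "To Upload" state is untrusted after power loss, so assume
        # everything cached still needs uploading.
        for blk in self.blocks.values():
            blk.to_upload = True
```

The key design choice the post describes is visible here: recovery errs on the side of re-uploading, because an extra upload is idempotent while a missed one would desynchronize the cloud copy.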
  25. Like
    KiaraEvirm reacted to ironhead65 in Feature Request? Overall View of all Drive Actions   
    Hi there,
     
    Awesome software - as always!  I've been using Drivepool and Scanner since nearly the beginning.  Currently I have something like 22 drives in my main drive pool.  They range from iSCSI, SATA-attached, USB (depending on the pool), Ximeta-attached (they have a custom network thing they do), or even virtualized under ESXi.  Anything that shows up to Windows as a physical drive can just be pooled.  I love it!
     
    Recently I purchased CloudDrive; after messing around and some Google searching of this forum, I think I'm fairly well set up.
     
    I have 13 cloud drives set up:
    1 box.com, 2 Dropbox, 10 Google Drive (not the paid cloud drives, but the ones provided for personal use).
     
    I used all defaults for everything, except
    set the cache to MINIMIAL+ Encrypted it, set it to auto login to that encrypted drives (as I only care that the CLOUD data is encrypted...I mean, you want to look at my pictures of my kids THAT bad and break into my PC to do it...okay, enjoy, you earned it).   Pointed to a specific 100GB hard drive partition that I could dedicate to this (using drivepool on all other drives and one specific thing mentioned was the cache drive could NOT be part of a drivepool) renamed it removed the drive letter and set it to a folder name (use this with drivepooling to cut down on the displayed drive letters for a clean look) I am getting a slew of "throttling due to bandwidth" issues.  I admit that my cache drive is probably too small for the amount of data I DUMPED and will continue to monitor this as I do not feel that I was getting those messages when I did not just DUMP enough data to fill the ENTIRE CloudPool in one shot.
     
    So, my request is to have a view in the program to look at all drive upload/download at the same time.  Maybe even space?  I love the existing charts.  They are easy to look at, easy to read and understand.  I also like the "Technical Details" page as that shows a TON of information, such as the file - or chunk - and how much of it is uploaded / downloaded.
     
    I'm wondering if there is a way to view all drives at once? I would use this to get a sense of the overall health of the system. As it is, if I have to scan through all 13 drives, I can't see where my bandwidth is being consumed, so I can't tell whether the cache drive is FULL or whether I am having upload/download issues. By the time I click through each drive, the bandwidth has shifted between drives fast enough that I never see a true representation of what is going on. I'm sure part of that is redrawing the graphs.
     
    I find the Technical Details page much more useful: although I do not see what is LEFT to upload, I get a much faster idea of what is going on, even though it's annoying to click through ALL the drives. I think having an overall page would be fantastic.
     
    Thank you again for continuing to show what is possible!
     
    --Dan