Ginoliggime

Members
  • Posts

    0
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Ginoliggime reacted to andy_d in Question about balancing...   
    Is it possible to ensure that, for a folder structure like this...
     
    videos\movies\movie\5-7 files for the movie
     
    ...the 5-7 files end up in the same folder when duplicated, instead of being scattered?
  2. Like
    Ginoliggime reacted to modplan in Full Cache Re-upload after crash?   
    Sorry if this has been covered; a quick search didn't turn up what I was looking for.
     
    I have a CloudDrive that I send windows server backups to nightly. The full backup size is about 75 GB, but the nightly incremental change is only 6-7GB, easily uploadable in my backup window. 
     
    I have set the cache on this drive to 100GB to ensure the majority of the "full backup" data is stored in cache, so that when windows server backup is comparing blocks to determine the incremental changes, clouddrive does not have to (slowly) download ~75GB of data for comparison every single night. 
     
    This works very well.
     
    The problem comes when there is a power outage, crash, BSOD, etc. Even though the CloudDrive is fully current and "To Upload" is 0, when I bring the server back up, after we go through drive recovery (which takes 5-8 hours for this 100 GB), CloudDrive then moves ALL 100 GB of the cache into "To Upload" and starts re-uploading all of that data.
     
    Why? I can't think of how this is necessary. In case a little data was written to the cache at the last minute before the unexpected reboot? If so, surely there is a better way of handling this than a 100 GB re-upload, some sort of new-unuploaded-blocks tag/database? What if a drive has a massive cache? A re-upload could take days or weeks!
     
    Thanks for any insight, I've gone through this process a couple of times using clouddrive and it has been painful every time. I'd be happy even if we downloaded every single block that is cached, compared it to the local cache block, and then only uploaded the ones that have changed.
  3. Like
    Ginoliggime reacted to HarvInSTL in Full folder duplication on some drives and partial on others possible?   
    I have a 6-drive pool: 4 local HDDs and 2 CloudDrives (1 Amazon & 1 MS).
     
    I'd like to duplicate 2 folders in the pool. I'd like both CloudDrives to keep a full set of the duplicated folders, and I'd like 3 of the 4 local disks to keep a partial set of the duplicated folders.
     
    The reasoning behind this is that I'd like full duplicated folder sets on the cloud storage for recovery if needed, but don't want to tie up space with 3 full sets of duplicated folders on the local disks.
     
    Possible?
  4. Like
    Ginoliggime reacted to RJGNOW in Remove Disk "Access Denied"   
    Looks like I've now run into this issue myself. I can't remove ANY drive from my pool; I run into this Access Denied wall. I've tried to run the permission fixer, and that also fails with "Access Denied". I've turned all balancers off; the problem persists.
     
    WHS 2011
    DP Version 2.2.0.737
     
     
  5. Like
    Ginoliggime reacted to smartz34 in Double Upload Size   
    Not sure if this is a new thing happening on the 802 Beta or something I never noticed before.
    I'm uploading 60 GB of files, thinking it will take 4 hours based on calculations. Four hours pass by and I notice I am only done with less than half of that. Luckily I had a network monitor running, and it shows that I uploaded over 60 GB of data, but SCD is only showing about 25 GB done uploading. I kept watching both SCD and the network monitor, and it seems to be the case: every 1 GB of upload from SCD requires 2-2.5 GB of upload in the network monitor.
     
    I opened up the technical details (for the first time, randomly) and noticed that some threads are uploading more than 100%.
    How do I fix this? Is there a setting that should be enabled/disabled?
     
    My cache was set to 10 GB expandable and the drive has 200 GB free space.
  6. Like
    Ginoliggime reacted to pmoon in File Placement Rules Not Working   
    Hi guys,
     
    I've set up my file placement rules to only allow files in a folder to go onto a particular drive. In this case it's a CloudDrive drive. I'm finding that it doesn't follow the rule and just randomly puts files on any drive. Does anyone know if this is the correct behaviour, or whether something is wrong? I essentially only ever want files in a certain folder to be put on a particular drive.
     
    I've configured it so that underneath the file placement options I only have the one drive I want it to put the file on, checked.
     
    Any help would be much appreciated.
     
    Thanks,
    Phillip
  7. Like
    Ginoliggime reacted to Simon in Drive Letters and Balancing   
    New user here who is keen to move to DrivePool and SnapRAID from FlexRAID as it seems more transparent and better supported. On a trial for DrivePool and Scanner at the moment and have hit a couple of areas that I would like to get a bit of advice on before going any further:
     
    1. The initial DrivePool setup has chosen a drive letter that I want to change. I have now filled the pool quite a bit and added a couple of shared folders; is it still as simple as making the adjustment in Disk Management, or do I need to follow a more complex procedure?
     
    2. When changing drive letters (or doing any other disk tasks) do I need to exit DrivePool or Scanner? To close either down, is there anything more to it than the top-right X? Scanner still seems to show notifications after the window is closed, but there is no right-click menu on the notification tray icon.
     
    3. When cutting and pasting folders around inside a drive pool, am I right in thinking that by default it keeps the files on the same drive and just moves them to a matching folder on that drive? I'm interested because the move seemed instant at the time.
     
    4. Given my intended setup with SnapRAID, I do not want the drives changing the location of existing data regularly. What I would ideally like to do is run a balancing process once when the majority of my data has been moved across, then have DrivePool only balance new data as it is added. What would be the best method to achieve this? I can see the Disk Space Equalizer plugin, which I am guessing I can install, run, then remove to get the balance sorted; what are the best balancing settings after that?
     
    Thanks in advance for any advice 
  8. Like
    Ginoliggime reacted to Kiptren in Network drives from within a virtual machine?   
    I'm wanting to check something before I start working on a project; Currently I'm running DrivePool with 4 drives on a Windows 10 machine. I want to convert this machine to a Linux host, install Windows 10 as a guest using VirtualBox, share my four drives to have read/write access between the host/guest machine, and then run DrivePool from within the Windows 10 guest to manage these drives. Virtualbox mounts the shared drives as network drives.
     
    Does anyone see possible issues with using DrivePool in this way? I don't have a huge performance demand, so read/write delays are OK. I'm just more worried about whether DrivePool can even manage networked drives in this way. I enjoy the product and find it an effective alternative to what I was using RAID for. I also have too much data on these four drives to off-load and re-establish as a RAID array under Linux, so I'm trying to find a workaround that lets me continue using the program within a virtual machine, along with a few other Windows-only programs I can't part with.
     
    Also, with the above example, would I need to remove the drives from my pool prior to reformatting the current Windows machine? Or would DrivePool find the data when I re-add them as a new pool, once everything is setup under the VirtualBox guest machine?
  9. Like
    Ginoliggime reacted to rhl in problem with drivepool/licensing   
    So I've set up my DrivePool on a Windows 2k12 server using a trial version.
    I used another copy to administer it, and everything worked perfectly.
     
    Then the trial license ran out. Fair enough; I bought a license.
    But now I'm stuck and think something is broken.
     
    I get an icon in the system tray to look at licensing.
    I then get a pop-up to open DrivePool, but then nothing happens.
     
    I tried reinstalling, but that didn't change anything.
     
    The FAQ points to a license wizard, but that ends in a 404 page.
     
    So what should I do?
     
  10. Like
    Ginoliggime reacted to CalvinNL in Extremely poor download performance with Google Drive   
    Hi there,
     
    I have been using StableBit CloudDrive for around a month on a dedicated server.
    The upload speed is always between 0 and 200 Mbit/s per drive with an average of 150 Mbit/s.
    Since build 1.0.0.753 BETA the download speed maxes out at 10 Mbit/s, which results in stuttering videos in Plex and extremely slow copying.
     
    The number of threads doesn't seem to matter, I tried 2, but I also tried 5, 10 and 20. 
    I also tried another Google account, but that also doesn't help.
     
    I tried drive block sizes of both 10 and 20 MB, but it really seems to be a bug in the software, because changing settings doesn't help a bit; the download speed does not improve.
     
    - Calvin
  11. Like
    Ginoliggime reacted to loganwolf in Slow Download from Google Drive   
    Hello,
     
    I'm having an issue where I'm able to upload data anywhere between 150 Mbps and 300 Mbps, however my download rates are consistently sub-20 Mbps (and at times less than 5). I'm on a symmetric 1-gigabit connection. Is there anything I can do to troubleshoot this issue? I had envisioned being able to stream from Plex; however, even at 8 Mb/sec transcoding, I'm getting constant buffering, and videos take quite a while to load.
     
    Attached is fairly typical.

  12. Like
    Ginoliggime reacted to steffenmand in Files starting to go corrupt, Cloud Drive throws no errors   
    Recently some of my drives have started to return pretty much all files as corrupt.
     
    At first I thought, OK, BETA errors may happen, but now this is starting to happen on multiple drives.
     
    StableBit CloudDrive seems to prefetch fine and throws no errors, but what arrives is just corrupt data.
     
    The data in question has all been lying at rest, so we have not uploaded new corrupt data on top.
     
    This is the second drive this has happened to.
     
    The funny part is, if I try to copy one of the corrupt files from the cloud drive to my own disk, it shows copy speeds of 500+ MB/s and finishes in a few seconds, as if it didn't even have to download anything first. In the technical overview I can see prefetch showing "Transfer Completed Successfully" with 0 kbps and 0% completion (attached a snippet of one so you can see).
     

     
    Something is seriously wrong here; it seems that it somehow avoids downloading the actual chunk and instead just fills in empty data.
     
    I have verified that the chunks do exist on the Google Drive and I am able to download them myself.
     
    I've been talking to other users who have experienced the same thing!
  13. Like
    Ginoliggime reacted to gomisweird88 in Local Cache Full   
    So if I attach my ACD or Google Drive account, by default (and I can't change this), all my local cache goes to my regular C drive, which has a capacity of 500 GB.
     
    What this means is that I can only transfer 500 GB of files at a time... because my cache fills up.
     
    However, I do have a 6 TB external HDD, but I can't use this external drive for my local cache.
     
    I also don't know how to incorporate DrivePool, or if that even applies to this situation. I don't know what the "Local Disk" option in CloudDrive does, but I'm testing out a few things.
     
     
    If anyone could give me a workaround so that files are cached not on my C drive but on my external HDD, that would be amazing; it would let me transfer nearly 6 TB of files at a time!
     
    Thanks
  14. Like
    Ginoliggime reacted to Spider99 in Release Notes for new betas?   
    Chris
     
    Are there any release notes published? I have done a bit of looking around but can't find them.
     
    I see that with DP the latest is 739.
     
    I generally just upgrade as one becomes available, as (fingers crossed) I have not had an issue with upgrading,
     
    but as my pool is 75% full now I am getting a little more cautious about what I do to the server.
     
    Hence I'm interested in what's been fixed / changed.
     
    Thanks
  15. Like
    Ginoliggime reacted to Wired4Data in Unable to Mount Drive   
    I love the concept of cloud drive, but I'm very, very concerned about the potential loss of data. I've uploaded over 3 TB to my Google Drive account, but now the application will no longer mount the drive. It tries to recover, but I get a message saying "There was an error communicating with the storage provider. Internal Error". I've opened a ticket for this issue (#4314541) and am hopeful that we can find a solution.
     
     
    Thanks
  16. Like
    Ginoliggime reacted to Brett8187 in Drivepool reliability issues...   
    Hello all.
     
    I've been demoing DrivePool for the past few days and have had nothing but issues. I have a simple setup: 2x3TB drives in a pool running on Windows 10, fully updated, with 1 folder marked for duplication.
     
    Issue 1: my drive pool completely disappeared. No more drive N (my pool). I launch the application and it lists the drives as available to add to a pool. Curiously, I try to add them to a new pool and it errors out saying they are already in a pool... but there are no pools in the application anymore. I had to uninstall, reboot, reinstall, and reboot in order for the drive pool to be visible and operational again.
     
    Issue 2 (before issue 1): I tested pulling out a drive and putting it in another system. It was picked up and worked great. I brought both drives over to that system and the pool showed up without me doing anything on that computer running StableBit. I bring them back to the original system, reboot, and none of the stats appear correct and I cannot write to the drive pool. I reboot again, and now it is writable.
     
    Is there a more stable version out there, or are these the type of odd issues I can expect? I'm currently running a trial of the software and am trying to choose between this and DriveBender, but this is not helping the cause and I really want to go with DrivePool... on a side note, I am a systems engineer and work on enterprise NAS devices all day long. Pretty sure thus far it isn't user error.
  17. Like
    Ginoliggime reacted to Christopher (Drashna) in Amazon Cloud Drive - Why is it not supported?   
    Well, as it took 2 weeks and several emails originally ..... the answer is "I don't know". Sorry.
     
    We've had much better communication with Amazon lately (eg, they're actually replying).
     
    In fact, right now, the emails are about the exact API usage and clearing up some functionality, between Alex and Amazon.  So, things are looking good.   
     
     If you check the first post, Alex has updated it with the latest responses.
  18. Like
    Ginoliggime reacted to andy_d in Need help - I think my pool is no longer functional (might be ok now)   
    I'm having serious issues with my pool since trying to move to Server 2016. I should not have bothered, as I had a working server, but I guess I hate myself. The worst that could happen has occurred: DrivePool does not work properly anymore. I'm now back on 2012 after an image restore, but nothing has been fixed. There are some serious issues going on at the moment...
     
    1)  It looks like paths are not correct anymore across the drives.  
     
          a) If I check the folder size at a parent folder, it is reported incorrectly. If I go into the folder, it's fine.
          b) Network shares are not displaying correctly. For example, a videos share should show 5 folders but it's only showing 1 folder. This folder itself should have many folders within it, but it also shows only one folder.
     
     
    2)  I cannot move files from the pool
     
    3)  It's not clear to me what DrivePool needs to do after an install - does it need to go through a process of rebuilding? It seems to be calculating at the moment, but it is taking a long time. I don't care if it takes a day as long as it works, but it's not clear to me what I should expect. If the expectation is that I can't do anything with the folders (that seems to be the case), there probably should be a warning in the app. Even better would be some sort of indication that it's actually going through some process - a % countdown, etc.
     
    I don't know what to do 
     
    UPDATE - I think it might be OK. This was likely due to just not understanding what actually happens when switching / upgrading the OS. The drives might have just been busy, but it would be great if someone could comment.
  19. Like
    Ginoliggime reacted to andy_d in What is the appropriate way of handling an OS switch with a drive pool?   
    I'm not sure yet if I lost any data, but if I didn't, I'd still like to try to move to Server 2016 Essentials. What isn't clear is the best practice for switching the OS:
     
    1)  Do I disconnect the drives when I do a clean install of Server 2016?
    2)  After the OS is installed, do I just immediately install DrivePool and expect it to pick up the drives?
    3)  Is there anything I should be (or not be) doing after the initial installation of DrivePool? For example, should the pool be left alone while it is discovering the files?
  20. Like
    Ginoliggime reacted to pguizze in Incoherent duplication   
    Hi all,
     
    I just lost one disk. I then tried to remove it from the pool, but without success; I always got errors (cannot access the disk). After having tried all the removal options without success, I simply removed it from my PC.
     
    Now, after a reboot, DP stays stuck on "Pool organization", mentioning "Incoherent duplication".
    What does that mean?
     
    What should I do? It has remained in the same status for several hours. If I reboot, I get the same result.
     
    My idea was simply to remove duplication and then put it back again, but as I've lost a disk, I'm just afraid of losing some files by doing it...
     
    Thanks for the help.
  21. Like
    Ginoliggime reacted to lee1978 in Refs and storage spaces   
    Hi Chris
     
    I spent most of today in a meeting with a small tech company demonstrating ReFS and Storage Spaces. That was the main focus; however, other software was also used in the demo, and DrivePool was one of them.
     
    Having read negative reviews, I had never even looked at Storage Spaces, but I have to say I was very impressed with how they presented it: disconnecting drives, pulling the power plug, taking a drive out and connecting it to another machine to show the data is still readable. It's very much on par with DrivePool, and the read and write speeds were good. The closest software they compared it to was DrivePool, which they had much praise for; the only negative comment was the lack of ReFS support in DrivePool, given the benefits of using ReFS. So: is ReFS support on the to-do list, and can it be fully integrated into DrivePool to take advantage of its file healing? This would eliminate the need for the file-safe program we have been asking for, so a bit of a trade-off.
     
    Lee
  22. Like
    Ginoliggime reacted to wikke in prevent existing drive attachment to different google account   
    Using Google Drive, I had to reauthorize. I accidentally authorized against a different Google account that was logged in at that moment.
     
    That basically damaged the drive, because after reauthorizing with the correct Google account I was no longer able to upload; there were now missing pieces.
     
    Would it be possible to implement a check so this can't happen?
     
    regards
  23. Like
    Ginoliggime reacted to innocentx in DrivePool Beta Version and Windows File Cache   
    Hi!
     
    I'm using DrivePool on my Windows Home Server 2011 for almost 2 years now and it works great so far. Thank you so much for that great piece of software.
     
    I wanted to try out the latest beta version because I've read in the forum that it contains the latest bug fixes. The beta works fine (2.2.0.738_x64_BETA), and it seems a reboot of my server runs faster with the beta.
     
    On the other hand, reading files with the beta version is very slow. Usually a scan of my music files with Logitech Media Server takes approx. 2 hours when the DrivePool release version is installed. With the beta version the scan of music files takes more than 12(!) hours.
     
    With the beta version, the tab for activating the Windows cache is missing from the properties of "Covecube Virtual Disk" in Device Manager. After reinstalling the release version of DrivePool, everything works again as expected, and the missing tab in the properties of "Covecube Virtual Disk" is back.
     
    Is there any (not dangerous) way to activate the Windows cache with the beta version? Of course I understand if not... because it's beta.
     
    Thanks a lot.
     
  24. Like
    Ginoliggime reacted to fishie in Startup issues (Coming from Flexraid)   
    Hi guys,
     
    I'm just posting this to get a feel for how to configure this. For the last couple of years I have used FlexRAID pooling for my drives, as I really only need it to put the drives together into a single file share. This has been quite okay so far, but I'm starting to feel the urge to test something new.
     
    DrivePool appears to allow me to add some SSDs as a write cache; however, I cannot seem to figure out how to get this working smoothly. Sometimes it would flush to the drives, other times it wouldn't. In the most recent attempt before giving up for now, it stated that it had flushed all data, but the data remained on the SSD while the folder for the file was created on all 4 drives at the same time: one drive had the "subs" folder for the movie, another had the .nfo file, but the actual files were still on the SSD.
     
    What I'm looking for here is basically a way to let everything land on the SSD; when that gets to about ~70% used, it should start flushing the oldest data onto the drives, filling the drives up one by one until the threshold I set in settings. But it appears there are many duplicate places to define this.
     
    Can someone please help me here? I just need this as a basic pooling option; no need for duplication or anything like that, as I already have backups in place for the important data.
     
    In the future, I would like to use CloudDrive with a Google Drive linked to it as a duplication destination, but it appears to not be working as intended: it will randomly drop duplicated data on the local drives as well, even though I have set them for unduplicated data only. If someone can point me in the right direction on that matter, it would be awesome as well!
     
    Thanks!
  25. Like
    Ginoliggime reacted to Christopher (Drashna) in How the StableBit CloudDrive Cache works   
    This information comes from Alex, in a ticket dealing with the cache. Since he posted a nice long bit of information about it, I've posted it here for anyone curious about the details of *how* it works.
     
    What happens when the disk used for the local cache is being used? What happens when uploading?
     
    The upload cache, the download cache, and pinned data are separate things and are treated as such in the code:
      • The download cache is self-learning, and tries to keep frequently accessed blocks of data on the local system. This speeds up access to the drive for, well... frequently accessed data.
      • Pinned data is generally drive and file system objects for the drive. This is kept (pinned) on the system because it is accessed frequently: it's written every time you create/modify/delete a file, and read every time you read file properties.
      • The upload cache is everything written to the disk that may not have been uploaded to the provider yet. This is explained in a bit more detail below.
     
    The upload cache can (is allowed to) exceed the cache limit specified. It does this because otherwise it would prevent files from being written. We could limit it to the specified size, but that would wipe out the self-learning feature of the download cache. So... not ideal. We do plan on implementing a maximum limit in the future, but for now the max limit is the drive size. However, we do throttle based on the free space remaining (we will always attempt to leave 5 GB free on the drive), and we get more aggressive the closer we get to running out of disk space.
     
    Now, for data integrity, because that is always a concern for us. Despite any issues that may be brought up here, we do everything we can to ensure data integrity. That's our number 1 priority.
     
    This is what happens when you write something to the cloud drive:
      1. All writes to the cloud drive go directly to the local cache file. This happens directly in the kernel and is very fast. If the drive is encrypted, it's encrypted during this part; anything stored on disk will be encrypted. At this point, your cache has some updated data that your cloud provider doesn't.
      2. The StableBit CloudDrive service is notified that some data in the cache is in the "To Upload" state and needs to be uploaded to the cloud provider.
      3. The service reads that data from the cache, uploads it to your provider, and only once it's absolutely sure that the data has been uploaded correctly will it tell the cloud drive kernel driver that it's safe to remove that data from the "To Upload" state.
     
    In reality it can get much more complicated (for example, what happens if new data gets written to parts that are actively being uploaded?), but that's all handled by our driver, so let's keep this simplistic view for this example.
     
    So what happens when you pull the plug in the middle of this process? StableBit CloudDrive loses the "To Upload" state. Well, it doesn't really lose it, but the state is indeterminate and we can't trust it any longer. In order to recover, StableBit CloudDrive assumes that all locally cached data was not uploaded to the provider. It is safe to make this assumption because uploading something that has already been uploaded doesn't corrupt the drive as a whole in the cloud, while not uploading something that needed to be uploaded would be catastrophic: your data in the cloud would get "out of sync" with your local cloud drive cache and would become corrupted.
     
    Up to this point I've described how StableBit CloudDrive maintains the data integrity of the bits on its drive, but there's another very important factor to consider here, and that's the file system.
     
    The file system runs on top of the drive and, among other things, makes sure that the file metadata doesn't get corrupted if there's a power outage. The metadata is the part of the data on the disk that describes where the files are stored and how directories are organized, so it's critically important. This metadata is under the control of your file system (e.g. NTFS, ReFS, etc.). NTFS is designed to be resilient in the case of sudden power loss. It guarantees that the metadata always remains consistent (by performing repairs on it after a power loss). At least that's the theory; when this fails, that's when you need to run chkdsk. But what it doesn't guarantee is that the file data itself remains consistent after a power loss. So there's that to consider.
     
    Also, Windows will cache data in memory. Even after you finish copying a file, Windows will not write the entire contents of that file to disk immediately. It will keep it in the cache and write out that data over time. If you look in the Resource Monitor under the Memory tab, you may see some orange blocks in the chart; that memory is called "Modified". This is essentially the amount of data that is waiting to be written to the disk from the cache (and memory-mapped files).
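    The crash-recovery policy described above (the same behaviour modplan asks about in post #2) can be sketched in a few lines of Python. This is a hypothetical model, not CloudDrive's actual code; the names (`CacheModel`, `to_upload`, etc.) are purely illustrative. The point is that once the "To Upload" state can no longer be trusted, re-marking every cached block as dirty is the only safe move: a redundant upload is harmless, while a skipped one corrupts the cloud copy.

    ```python
    # Hypothetical sketch of the "To Upload" recovery policy described above.
    # Block states are tracked outside the cache file itself, so after a
    # crash the safe assumption is that every locally cached block still
    # needs uploading.

    class CacheModel:
        def __init__(self):
            self.cached = {}        # block_id -> data held in the local cache
            self.to_upload = set()  # block ids not yet confirmed at the provider

        def write(self, block_id, data):
            # Writes land in the local cache first and enter the upload queue.
            self.cached[block_id] = data
            self.to_upload.add(block_id)

        def upload_confirmed(self, block_id):
            # Only after the provider acknowledges the upload is the block
            # removed from the "To Upload" state.
            self.to_upload.discard(block_id)

        def recover_after_crash(self):
            # The upload-state bookkeeping is now untrustworthy.  Re-uploading
            # an already-uploaded block is harmless; skipping a dirty one
            # would corrupt the cloud copy.  So assume everything is dirty.
            self.to_upload = set(self.cached)

    cache = CacheModel()
    cache.write("b1", b"...")
    cache.write("b2", b"...")
    cache.upload_confirmed("b1")
    assert cache.to_upload == {"b2"}

    cache.recover_after_crash()              # e.g. power loss
    assert cache.to_upload == {"b1", "b2"}   # everything re-uploads
    ```

    This also explains the pain modplan describes: the cost of safety after a crash is proportional to the whole cache, not to the data actually written since the last confirmed upload.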