
Midoxai


Everything posted by Midoxai

  1. Hello, recently I read about HDFS. I think rebalancing files based on file hashes with a quorum would be a cool feature. It would work in the following way:
    • Rebalancing is scheduled by another plugin, e.g. the StableBit Scanner plugin.
    • File movements are calculated.
    • Before a file is moved, it is read on multiple disks and a hash (e.g. MD5, SHA-256, ...) is calculated.
    • If the hash is the same on all disks, all disks are used to move the file.
    • If the hash differs between the disks, a quorum is used to identify the disks the file is read from (a sketch of this quorum step follows below).
    • The files get copied/moved.
    The balancing could also do the following:
    • On a schedule, the hash of every file on every disk is calculated.
    • If the hashes differ on some disks, the quorum is used to determine from which disk the files are replaced.
    • The files are copied.
    This would add a new level of integrity for the files.
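    Since the quorum step is the heart of the idea, here is a minimal Python sketch of how it could look. This is only my illustration, not StableBit code; the hash algorithm, disk names and paths are assumptions.
    ```python
    # Sketch of the proposed quorum step: hash every replica of a file, take the
    # majority hash as the "true" content, and return only the disks whose copy
    # matches it. Hash algorithm and paths are hypothetical.
    import hashlib
    from collections import Counter
    from pathlib import Path

    def file_hash(path: Path, algo: str = "sha256", chunk: int = 1 << 20) -> str:
        h = hashlib.new(algo)
        with path.open("rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def quorum_sources(replicas: dict[str, Path]) -> list[str]:
        """Return the disks whose replica matches the majority hash."""
        hashes = {disk: file_hash(path) for disk, path in replicas.items()}
        majority_hash, votes = Counter(hashes.values()).most_common(1)[0]
        if votes <= len(hashes) // 2:
            raise RuntimeError("no quorum - the replicas disagree too much")
        return [disk for disk, h in hashes.items() if h == majority_hash]

    # Example: read the file only from the disks whose copy is in the majority,
    # skipping a silently corrupted replica.
    # sources = quorum_sources({"D": Path(r"D:\Pool\file.bin"),
    #                           "E": Path(r"E:\Pool\file.bin"),
    #                           "F": Path(r"F:\Pool\file.bin")})
    ```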
  2. Thanks for your explanation. Then it makes sense to use only 1 MB as chunk size. I hope Microsoft will add partial download soon.
  3. I've seen Alex's public response in the issue: if I understand it correctly, there is a big overhead when making small reads. On the other hand, I think the current maximum chunk size of 1 MB increases the download-time overhead for big files, because the maximum number of download threads is limited, so every additional request adds one round-trip time to the download time (a rough model of this follows below).
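    To make the round-trip argument concrete, here is a rough model I put together; the thread count, RTT and bandwidth values are hypothetical, not CloudDrive internals.
    ```python
    # Back-of-the-envelope estimate: with a fixed number of download threads,
    # every extra "wave" of chunk requests costs at least one round trip on top
    # of the raw transfer time. All numbers are made up for illustration.
    import math

    def estimated_download_s(file_mb: float, chunk_mb: float = 1.0,
                             threads: int = 10, rtt_s: float = 0.05,
                             bandwidth_mbit_s: float = 100.0) -> float:
        chunks = math.ceil(file_mb / chunk_mb)
        waves = math.ceil(chunks / threads)            # request rounds, limited by thread count
        transfer_s = file_mb * 8 / bandwidth_mbit_s    # raw transfer time
        return waves * rtt_s + transfer_s

    # A 1 GB file with 1 MB chunks needs ~1024 requests (~103 waves of latency);
    # with 20 MB chunks it needs ~52 requests (~6 waves).
    print(estimated_download_s(1024, chunk_mb=1.0))
    print(estimated_download_s(1024, chunk_mb=20.0))
    ```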
  4. Are there any plans to add larger chunk sizes with OneDrive for Business? I can choose at most 1 MB. (v .763)
  5. For me that means the cache should be as small as possible / as small as you can upload in an acceptable period of time (a rough calculation follows below).
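    As a back-of-the-envelope sketch of that rule of thumb (my numbers, not a StableBit recommendation): size the cache to roughly what the uplink can push out within the period you find acceptable.
    ```python
    # Rough cache sizing: upload rate times the acceptable backlog window.
    def max_useful_cache_gb(upload_mbit_s: float, acceptable_hours: float) -> float:
        gb_per_hour = upload_mbit_s / 8 * 3600 / 1024   # Mbit/s -> GB per hour
        return gb_per_hour * acceptable_hours

    # e.g. a 20 Mbit/s uplink and a 12 hour window -> roughly 105 GB of cache
    print(round(max_useful_cache_gb(20, 12)))
    ```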
  6. Suppose I add a CloudDrive to a DrivePool pool and use replication for all files. For example, I have a hard disk 'A' that is full of data and a CloudDrive 'B' of equal size, and no other hard disks. Does this mean I need a second hard disk of at least the size of A so that CloudDrive can use it as cache? Or does DrivePool only replicate as much data as the cache can hold?
  7. Midoxai

    [Feature] Snapshot

    I did not find information about it... Would CloudDrive support something like snapshotting? At the moment encrypting trojans (aka CryptoLocker) are popular, and because DrivePool adds a real drive, these trojans can encrypt all files in it too... I know that DrivePool supports Volume Shadow Copies, but normally these get deleted together with the original data. I'm thinking about multiple versions of the same file and, in general, the following possibilities:
    • revert to a previous snapshot
    • mount a read-only volume out of a previous snapshot
    • automatic snapshot rotation (for example with a grandfather-father-son rotation scheme; a sketch follows below)
    • automatic snapshots when no write operation is happening (for a consistent filesystem)
    • snapshot replication (nice to have, but DrivePool can also be used)
    Does CloudDrive internally use a B-tree representation?
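    To illustrate the rotation point, here is a minimal grandfather-father-son sketch in Python; it is my own illustration, not a CloudDrive feature, and the retention counts and calendar rules (Sundays, first of the month) are assumptions.
    ```python
    # Keep the newest N daily snapshots (sons), N Sunday snapshots (fathers) and
    # N first-of-month snapshots (grandfathers); everything else may be pruned.
    from datetime import datetime

    def gfs_keep(snapshots: list[tuple[str, datetime]],
                 daily: int = 7, weekly: int = 4, monthly: int = 12) -> set[str]:
        """Names of the snapshots a grandfather-father-son scheme would retain."""
        newest_first = sorted(snapshots, key=lambda s: s[1], reverse=True)
        keep = {name for name, _ in newest_first[:daily]}                                  # sons
        weekly_keep = [name for name, ts in newest_first if ts.weekday() == 6][:weekly]    # fathers
        monthly_keep = [name for name, ts in newest_first if ts.day == 1][:monthly]        # grandfathers
        keep.update(weekly_keep, monthly_keep)
        return keep

    # Everything not returned by gfs_keep() is a candidate for deletion:
    # snaps = [("snap-2016-05-01", datetime(2016, 5, 1)), ...]
    # to_delete = [name for name, _ in snaps if name not in gfs_keep(snaps)]
    ```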
  8. Deduplication would be a cool feature. It would save bandwidth and storage.
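    A tiny illustration of why deduplication saves space and upload bandwidth (my own sketch, unrelated to CloudDrive internals): identical chunks hash to the same key, so they are stored and would be uploaded only once. The chunk size and the class itself are hypothetical.
    ```python
    import hashlib

    class DedupStore:
        def __init__(self) -> None:
            self.blocks: dict[str, bytes] = {}   # hash -> chunk, each unique chunk kept once

        def put(self, data: bytes, chunk_size: int = 1 << 20) -> list[str]:
            keys = []
            for i in range(0, len(data), chunk_size):
                chunk = data[i:i + chunk_size]
                key = hashlib.sha256(chunk).hexdigest()
                if key not in self.blocks:       # only new chunks cost storage/upload
                    self.blocks[key] = chunk
                keys.append(key)
            return keys                          # a file is just its list of chunk keys

    store = DedupStore()
    store.put(b"A" * (3 << 20))    # three identical 1 MiB chunks -> stored once
    print(len(store.blocks))       # 1
    ```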
  9. Hi, I suggest Strato HiDrive, the biggest (private and business) storage provider from Germany.
    General info: https://dev.strato.com/hidrive/
    Get started: https://dev.strato.com/hidrive/getstarted
    SDKs: https://dev.strato.com/hidrive/content.php?r=142-SDKs
    API reference: https://dev.strato.com/hidrive/content.php?r=115-reference
    Pricing: https://www.strato.com/en_us/cloud-storage/
    Free account: https://www.free-hidrive.com/
    Best regards
  10. Hi, I have some questions about using DrivePool with Server 2012. I'd like to duplicate a lot of files and some Hyper-V VHDs. The files are no problem, but what should I expect with the VHDs?
    • Can I store VHDs on my pool without worrying about the data that is written inside a VM? One VHD can reach a size of 500 GB or more.
    • What happens when I enable automatic balancing with the VHDs? Do they get balanced without shutting down the VMs? Can I exclude the VMs/VHDs from the balancing process?
    • What happens when I remove one disk from the pool?
    • What happens when I create a VHD that is larger than the free space of the volume it is stored on?
    • Can I use the integrated per-volume deduplication feature of Server 2012 to reduce the data stored on one volume? Can DrivePool still read the deduplicated files?
    Thanks for the answers. Best regards