
p3x-749

Posts posted by p3x-749

  1. -> https://en.wikipedia.org/wiki/Transcoding

     

    One example is when you want to stream your Blu-ray material to a device that is not capable of playing it natively, like a 1280x768 tablet.

    Transcoding means converting the codecs (video and/or audio) from the source format into a destination format the device supports.

     

    ...you can do that "offline", meaning you store different versions of your media for each player... this takes time and a faster CPU helps, but eventually

    you'll finish the jobs and the only thing you really need is disk space.

    Some people prefer to do this live, on the fly, which means you need a really beefy CPU for it.

    A famous application that supports this is Plex... and having a beefy Socket-1150 v3 or Socket-2011 quad+ core Xeon really helps here.

     

    ...but for streaming material "raw" (when the client device can handle it natively), this is not needed.

    You can create digital "backups" of your media, e.g. from your DVDs or Blu-rays.

    ...I don't know whether that is legal for copy-protected material in your jurisdiction though... there are apps that help with the process, like AnyDVD or MakeMKV.

    This is *not* transcoding (the audio/video codecs stay the same; only the container changes, e.g. from avi to mkv)... the speed of the optical drive is usually the limit... CPU performance is not an issue here.

     

    You can "shrink" (transcode) the material to create smaller media files to stream later with apps like handbrake...these usually work offline.

    I don't do this on my server but on my (XEON based) workstation...server just holds/stores the data, as disk space is getting cheaper and cheaper

    ...and this saves a lot of energy  ;)
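    If you want to batch such offline jobs, a minimal Python sketch like this could drive them - assuming HandBrakeCLI is installed and on the PATH; the folder names and the preset name are just placeholders, pick whatever your HandBrake version offers:

    # Batch-transcode sketch: hand every rip in SRC to HandBrakeCLI (assumed to be on the PATH)
    import subprocess
    from pathlib import Path

    SRC = Path("D:/rips")          # hypothetical folder holding the "raw" backups
    DST = Path("D:/transcoded")    # hypothetical folder for the smaller versions
    PRESET = "Fast 1080p30"        # assumed preset name - use one your HandBrake version lists

    DST.mkdir(exist_ok=True)
    for movie in sorted(SRC.glob("*.mkv")):
        out = DST / (movie.stem + ".m4v")
        # -i/-o/--preset are standard HandBrakeCLI options; this is where the CPU time goes
        subprocess.run(["HandBrakeCLI", "-i", str(movie), "-o", str(out), "--preset", PRESET],
                       check=True)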

  2. No, the CPU and memory won't really affect this, as long as you have enough resources for the system in general. The determining factor is the drives in question, and the drive controller.

     

    Both products were written to be as resource-efficient as possible. You could absolutely run them on an Intel Atom or Celeron system without problems.

     

    However, what else you want to do with the system will absolutely determine your needs. If it's just reading from shares, then a Celeron should be plenty for you. However, if you want to do more, I'd recommend a low-end Core i3 processor, as it will provide plenty of horsepower for the majority of tasks you can throw at it.

     

    But the idea of starting with a Celeron and then upgrading as needed is a good one.

     

     

    If you're going to stream multiple HD streams, then the Xeon would be the better option. A Core i3 may be up to it, but in this case it would be better to err on the side of caution.

     

    Well, aren't these two statements contradictory?

     

    IMHO, when talking about streaming - *not* transcoding - the first statement holds true even for multiple clients, as long as the single GBit NIC is the bottleneck and can be saturated.

    I can easily achieve this with my Celeron 847 board and the 7.2k Seagate Barracuda disks in my pool.

  3. I totally agree with what you said regarding NIC make and model.

    Usually the ones based on Intel chips are a safe buy...

     

    However, my current mini-box that I run DrivePool on uses an onboard Realtek NIC, and with that I can sustain 112MB/sec read *and* write to the pool over the network... this is without any tweaks in the OS, with nothing installed beyond the driver from the board manufacturer.

    So I am quite happy, but maybe I was just lucky this time (based on experience from earlier builds, I did choose a motherboard that lets me add another NIC if need be).
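    For reference, 112MB/sec is roughly the most a saturated GBit link can deliver once protocol overhead is taken off. A quick back-of-the-envelope check (just a sketch; the ~7% figure for Ethernet/IP/TCP framing overhead is an assumption):

    # Sanity check: usable throughput of a saturated 1 GBit/s link
    link_bits_per_sec = 1_000_000_000                      # 1 GBit/s line rate
    raw_mb_per_sec = link_bits_per_sec / 8 / 1_000_000     # 125 MB/s raw
    overhead = 0.07                                        # assumed Ethernet/IP/TCP framing overhead
    usable = raw_mb_per_sec * (1 - overhead)
    print(f"usable: ~{usable:.0f} MB/s")                   # ~116 MB/s, so 112MB/sec is near the wire limit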

  4. ....for writes, in unRAID (and I believe in DrivePool too) you will only get the speed of a single disk (no striping)... for a single file/user.

    So the speed of the disk the file is written to is what matters.

    Both products offer the concept of a cache drive / feeder disk... choosing a faster SSD should help here... with good spinning disks this is not needed, IMHO.

  5. ...I have been running my TV recorder on Win7 Home Premium and 3x3TB disks with DrivePool for over a year now.

    It is, to me, completely free of hassles... I activated Windows auto-update and haven't touched the OS since.

     

    Two remarks on what the OP said regarding unRAID, NTFS and speed:

     

    - unRAID does *not* support NTFS for disks inside the array... so "staying with NTFS" is not an option with unRAID (no marketing intended, I am a user of both products... and others)

    - speed... this will essentially depend only on the speed of your disks.

       I am running 7.2k Seagate Barracudas in my pool and can saturate a 1GBit NIC for read and write with my small Celeron-847 board and without duplication.

       ...it won't do that when running all green disks, not with DrivePool, unRAID, ZFS or anything else.

  6. I'd thought of the Supermicro boards, but even though this will function as a server I want to use a desktop OS and there are compatibility issues with some of the server boards (including the X10SL7-F) running Windows 7.

     

    Hmmm... I haven't tried it myself... but according to this, the 64-bit version of Win7 should work with the X10SL7-F -> http://www.supermicro.com/support/resources/OS/C222.cfm

    If you just need the Desktop occasionally, the on-board GPU will do fine and it comes with IPMI...better than using RDP :D

    ...you could always add another, dedicated GPU if need be.

  7. I agree, the PSU will add the least to your bill, but an oversized PSU will be inefficient... depending on where you live, energy may not come cheap.

    A PSU should be running at around 20% of its nominal wattage when the system idles in order to stay within its 80Plus efficiency range.

     

    If this is for a server, why not go for a server board?

    I agree the ASUS is nice, but with 10 disks you will have to add another HBA.

    What about the Supermicro X10SL7-F... it comes with an LSI 2308 on-board... this combo is a bargain, but you will need to run ECC RAM and a Xeon.

     

    As for Plex...how many clients do you need to support concurrently?

  8. ...how many drives do you plan for in total?

    The 600W PSU is a single-rail one, which is the right choice, but with 600W and 46A on 12V it is definitely overkill for anything below 15 disks.

    You need to budget 2A on 12V for each disk, plus an additional 3-6A for board, CPU and RAM during a cold start... also allow a little headroom, as the PSU will age. A quick calculation sketch:
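    (Minimal sketch of the rule of thumb above; the 2A-per-disk and 3-6A figures come from that rule, the 20% ageing margin is my own assumption.)

    # Back-of-the-envelope 12V budget for a cold start
    disks = 10                     # planned number of spinning drives
    amps_per_disk = 2.0            # ~2A spin-up current per disk on the 12V rail
    board_cpu_ram = 6.0            # worst case of the 3-6A for board, CPU and RAM
    ageing_margin = 1.2            # assumed ~20% headroom for PSU ageing

    needed_12v_amps = (disks * amps_per_disk + board_cpu_ram) * ageing_margin
    print(f"needed on 12V: ~{needed_12v_amps:.0f} A")   # ~31A for a 10-disk build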

  9. Unfortunately my skillset does not yet cover iSCSI and related technologies, although the Synology is here now.

     

    Just check your Synology's manual to see whether iSCSI is supported.

    The NAS will be the iSCSI target, and that is where you will have to set it up / configure it.

    Your Windows machine will then use the iSCSI disk(s) on your NAS via the iSCSI initiator application (which ships as standard from Win7 onwards).

    There are many howtos available on the net.

  10. I don't think you should go that route and combine these two things.

     

    It is however not uncommon to use an external datastore for hosting VM disks, like via NFS.

    If your NAS is available whenever you need to run the VMs, and its performance over the wire is sufficient (with link aggregation and RAID on the NAS, you can get within the range of a consumer SSD),
    you could for example use the iSCSI target method and move the VMs entirely to the NAS... freeing up both local disks in your RAID1 VM datastore.

  11. ...I am actually not certain that this can be achieved at all.

    IMHO DrivePool builds the pool on physical disks, and only after that can you enable duplication on the pool.

    As for incorporating the NAS, I doubt that DrivePool will allow you to build a pool with a network share in it.

    The only way of using resources on the NAS that I can think of is to use it as an iSCSI target and import that iSCSI disk into your Windows system with the iSCSI initiator.

    Then DrivePool should see it as a physical disk. However, the NAS will not share that data for others to see.

     

    What is your use case behind that?

     

    If you use this to duplicate a virtual machine config and its virtual disks, I think there is a real risk that the duplicated copy of a virtual disk will be unusable.

    This risk exists regardless of how you copy it (manually or by means of DrivePool duplication, with or without matching disk speeds).

    The normal way of handling this, AFAIK, is to create a VM snapshot. If you can place the snapshots in another directory, you should be able to duplicate those safely
    and use them to recreate the VM's resume state later in case the VM crashes.

  12. ...with SnapRAID you can add a parity drive (or even several), but it is not the same as an online RAID setup, since SnapRAID only offers offline parity.

    Also, I wouldn't call that an integration of the two technologies... they co-exist, and each one works fine on its own, knowing little to nothing about the other.

     

    To do that, you would incorporate the real, physical disks into DrivePool and, at the same time, into the SnapRAID setup.

    You *could* run SnapRAID against the pool disk, IF your parity disk is large enough (at least the size of the pool), but normally you wouldn't, as your pool is most likely too large (or will grow too large) for a single disk.

    There is a thread here somewhere on that topic where a SnapRAID config has been published that seems appropriate for coexisting with DrivePool.

  13. However, ECC has "Error Checking and Correction" built into it, so it would make the system more stable.

     

    IMHO this is not a matter of stability but of data integrity.

    There is a risk that a bit in RAM can "just flip"... when that bit is part of data currently being written to disk, the file gets written with the bit no longer in its original state, which can render the file useless.

    ECC memory can correct single-bit errors and detect double-bit errors in your RAM.

    For a single-bit error your data stays intact; for a double-bit error you at least get a warning / error log and can take action.

    With non-ECC RAM the error goes unnoticed, and at some point later you'll discover that the file is corrupt...
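    To see why nothing complains at the time: flip a single bit in a buffer before it hits the disk and the write still succeeds, the file is just no longer what you intended. A tiny illustrative sketch (it only mimics the effect, it is not how RAM errors actually get injected):

    import hashlib

    data = bytearray(b"payload that is about to be written to disk")
    print(hashlib.sha256(data).hexdigest()[:16])   # checksum of the intended data

    data[5] ^= 0x01                                # a single bit "just flips" in RAM
    print(hashlib.sha256(data).hexdigest()[:16])   # completely different checksum, yet the
                                                   # write itself would still succeed silently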

  14. Yes, I agree that the issue with the AOC is mainly with using it in a Windows system... I *am* using one with Linux and it is doing fine with 3TB disks.

     

    For alternatives, see here: http://community.covecube.com/index.php?/topic/330-opinions-and-advice-on-a-controller-card-and-power/&do=findComment&comment=2089

    Especially for that large a number of disks in a non-striped array, an expander might be a good option.

    I am using the Intel RES2CV240 with an M1015 in a Norco 4224-like case... works like a charm with one path connected and 20 disks on it.

    When employing a second M1015 you could connect both cards and use 16 disks, with redundancy on the card connection.

  15. Interesting question.

    I think I have indirect proof that each disk is spun up individually, and only when access is required.

     

    As I am running DrivePool in my TV recorder, where the recording software creates a thumbnail for every media file/recording, I can see an interesting effect when trying to access the files.

    When browsing the repository via the web UI, the metadata for each recording is fetched from a database on the system disk.

    The thumbnail is stored alongside the recording file on the pool, and it is fetched during the browsing action as well.

     

    However, on a first attempt only the metadata is shown... and only after repeating the query does the thumbnail come up too.

    This must be the pool disk spinning up.

    Now, after the first thumbnail *is* shown, thumbnails for files residing on other disks still show that behavior (while others from the same pool disk now display immediately on the first attempt).

    If the whole pool had been spun up on the first attempt, this effect would not be observed.

    Hence I conclude that each disk is spun up individually.

  16. By being a "kind of waste", I was referring to their original purpose.

    These cards, being originally a "real" RAID type are only entry level in this domian.

    And using the RAID driver in JBOD mode with DrivePool or in software raid *is* a waste and a risk, too.

     

    Well... the cheap times are long over.

    The M1015 in particular shipped as the standard card in all IBM System x servers... but enterprise customers ripped them out, replaced them in their servers and sold the parts off cheaply on eBay.

    I remember buying a pair of them, brand new in their boxes, for 70 USD... good old times.

    Since word got out that these can be cross-flashed to an HBA, they became the best-known secret and prices went through the roof.

    The same now goes for the Dell models... the Fujitsu is the newest "secret" as of today... maybe there are still some deals ahead.

  17. The M1015 is a very nice SAS controller.

    I am using several of them in my builds, also in conjunction with SAS expanders like the Intel RES2CV240.

     

    The original M1015 is a RAID card, which is kind of a waste with DrivePool or software-"RAID" based systems in general.

    There are many threads out there on how to cross-flash it to an HBA with IT-mode firmware.

     

    Other LSI2008-based controllers that work similarly to the M1015 (and can also be cross-flashed) are available, like

    - Dell PERC H310

    - Fujitsu D2607

  18. For a file server with many spinning drives, always go for a single-12V-rail PSU.

    A single disk drive will draw 2+ amps on the 12V rail when performing a cold start.

     

    The ATX standard actually calls for the PSU to be multi-rail, but for this purpose a single rail is recommended... no need to do any maths... just connect and go.

     

    You will find the number of rails in the PSU's specs.

    If this information is not listed, you can determine it from the output amps given on the spec plate:

    a multi-rail PSU will advertise the output per rail, like "12V1-22A, 12V2-18A" for a dual-rail unit... a single-rail PSU will not have the 12V output numbered, stating simply "12V-40A", for example.
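    If you like, you can even mechanize that little check; a throwaway sketch (the spec-plate string is a made-up example):

    import re

    spec_plate = "12V1-22A, 12V2-18A"                 # hypothetical spec-plate text; a single-rail unit would just say "12V-40A"
    rails = set(re.findall(r"12V\d*", spec_plate))    # count the distinct 12V rail labels
    print(f"{len(rails)} x 12V rail(s) advertised")   # 2 here -> multi-rail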

  19. ...an alternative is to add one or more offline parity disk(s) using SnapRAID.

    You'll need a disk at least the size of the largest disk in your pool for that.

    This can be a USB-based disk or something else... I myself am using an iSCSI disk whose target resides on my main NAS.

     

    ...works like a charm.

  20. I think you're right... only full-size ATX... but not for all AM3+ chipsets and all boards.

    It looks like all boards that claim Win-8 compatibility and come in full ATX size should have a TPM header.

     

    This one is, at this time, the cheapest ASUS board with ECC support and a TPM header on my side of the pond: http://www.asus.com/Motherboards/M5A97_EVO_R20/

     

    ...regarding USB3, well, I never relied heavily on that feature, as I do my data on-/off-loading via iSCSI to my main NAS (which ATM is also built on the ASUS+Opteron combo from my first post, running ZFS on Linux).

    I currently see an average of 120MB/sec from my little Celeron-847, which houses the StableBit pool, to that NAS... nothing to complain about, I'd think.  :D

     

    Edit: ...you're right... the TPM module from ASUS is easily available at a decent price... it should make a nice combo with BitLocker... I'll consider this in a future build  B)

  21. That particular board does not have a built-in TPM or a socket for one.

    I *think* you'll only find these with the more advanced AM3+ chipsets.

     

    Like this board... it is Win-8 compatible, has ECC support and a TPM socket: M5A99FX Pro -> http://www.asus.com/Motherboards/M5A99FX_PRO_R20/#specifications

    But you'll need a GPU card for it to POST, since it does not have one on-board... and it is quite expensive compared to mine with the "older" chipset.

  22. I did a lot of research in the past to get the most bang for the buck out of my setup.

    Sometimes I am on a tight budget, or the use case does not justify the spending (like employing a "real" server for a backup array that would only run a couple of hours a week).

     

    Well, these two features are always on my shopping list for a (file) server:

     

    - ECC memory support

    - AES-NI (hardware AES encryption in the CPU) support... I need/want full-disk encryption as well as performance (see the quick check sketch at the end of this post).

     

    But with Intel-based CPUs this combination is *only* available with Xeons, which are quite expensive, and ECC RAM support only comes with server motherboards there.

     

    With AMD, things are a bit different though.

     

    - Almost all AM2, AM3 and AM3+ socket CPUs support ECC memory

    - All AM3+ CPUs support the AES-NI instructions

     

    ...and almost all ASUS AM3+ socket based motherboards support ECC RAM...officially...check the specs!

     

    I am running a system based on

     

    - ASUS M5A78L-M/USB3 (cheapest with 4x DIMM-Slots, USB3, onboard GPU and microATX)

    - 4x ECC UDIMMs (this is unbuffered, unregistered memory)

    - AMD Opteron 3350HE (Opteron AM3+ socket based)

     

    with great success.

    ECC memory is confirmed to work, and I see no performance penalty from using encryption on the array (7 disks currently), as I can easily max out the GBit connection.

    Costs were about 50% of the comparable Intel-based setup/combo (35% if I had gone for an FX CPU instead of the HE Opteron).
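    If you want to verify the AES-NI part on a Linux box (like the ZFS NAS mentioned above), checking the CPU flags is enough. A minimal sketch:

    # Quick AES-NI check on Linux: the "aes" flag in /proc/cpuinfo indicates hardware AES support
    with open("/proc/cpuinfo") as f:
        flags = {flag for line in f if line.startswith("flags") for flag in line.split()}
    print("AES-NI available" if "aes" in flags else "no AES-NI - encryption will cost CPU cycles")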
