Covecube Inc.

Carlo

Members
  • Content Count: 32
  • Joined
  • Last visited
  • Days Won: 7

Reputation Activity

  1. Like
    Carlo got a reaction from SootStack in Using a cloud drive and no other for duplicates   
    I'm trying to follow the directions for this but am at a loss.
    Any chance specific directions could be given to show how to set this up?
     
    I for example have about a dozen drives as part of my local storage and part of DrivePool.
    I have created a 50TB drive using CloudDrive backed by Amazon Cloud Drive.
     
    What I want to be able to do is have it Duplicate all LOCAL media to the 50TB cloud drive.
    1) no duplicates should be local
    2) cloud drive is ONLY duplicates of the LOCAL media
     
    Essentially this would become an "automated backup" if it works.
     
    If this can be done I'd like to ask for an option to be added to the next version.
    1) The ability to set this up to NEVER delete a file in the cloud drive if it is locally deleted unless given permission.  Updates yes but no deletes.
    2) Should be configurable per cloud drive
    3) Have an option in the GUI to perform the deletes ONLY when clicked.
     
    This would help with user error (accidental delete of a directory, for example) or if something happens to a local drive and the drive pool thinks the files are gone (but not supposed to be).
    Essentially what I'm asking for is an automated "XCOPY" operation that can create or update media but never delete it "automatically".
     
    IMHO backups (online or local) serve 2 purposes.  They protect against hardware failure and user screwups.  I hate to admit it but I've lost more data over time due to user goofs than hardware issues.  By not automatically deleting data in the cloud I can get some protection against the 2nd cause of data loss.
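    The never-delete "XCOPY"-style backup described above can be sketched in a few lines; this is a minimal illustration (the function name and copy-if-newer rule are my own, not anything DrivePool or CloudDrive actually exposes):

```python
import shutil
from pathlib import Path

def backup_no_delete(src: Path, dst: Path) -> list[str]:
    """One-way sync: copy new or updated files from src to dst,
    but never delete anything from dst."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Copy only when missing or the source is newer; files deleted
        # from src are deliberately ignored, so dst keeps every file
        # it ever received (protection against accidental deletes).
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(target))
    return copied
```

    Deletions would then only ever happen as a separate, manually triggered step, matching point 3 of the request.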
     
    Thanks,
    Carlo
  2. Like
    Carlo got a reaction from ~Slyfox in Cloud Drive Using up my Local Storage   
    I hear what you're saying, Chris, but don't fully agree.
    I do understand that you need some "working space" to keep a few blocks ready for the thread(s) doing the uploading.
     
    However, from a high-level view it appears a background process is creating all these blocks, getting them ready for the upload, and just does its thing without communicating with the uploading thread(s).  It appears this process has no concept of how far ahead of the upload it is and just continues until it has created all the blocks needed for the duplicate section or it runs out of space (whichever comes first).
     
    In reality it only needs to stay X blocks ahead of the upload threads.  Whether you have 5 blocks ready or 5,000 makes no difference as you can only upload so fast.  Anything more than this is "wasted".  This shouldn't be hard to calculate since you know the max upload threads available at any time.
     
    But with all that said, I just want to clarify I'm not against a reasonable working storage area.  But it doesn't need to be 4TB, which will take a few days at best to upload.  No way, shape or form should it need to be that far ahead.
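    The "stay X blocks ahead" idea maps directly onto a bounded producer/consumer queue; here is a minimal sketch (block counts, the depth of 5, and the list-append standing in for an actual upload are all illustrative):

```python
import queue
import threading

PREFETCH_DEPTH = 5  # keep at most this many blocks staged ahead of the upload

def stage_blocks(blocks, staged: queue.Queue):
    """Producer: staging stalls here once the queue is full, so local
    'working space' never grows past PREFETCH_DEPTH blocks."""
    for b in blocks:
        staged.put(b)   # blocks when the queue is full
    staged.put(None)    # sentinel: no more blocks

def upload_blocks(staged: queue.Queue, uploaded: list):
    """Consumer: drains staged blocks at whatever rate the link allows."""
    while True:
        b = staged.get()
        if b is None:
            break
        uploaded.append(b)  # stand-in for the actual upload call

staged = queue.Queue(maxsize=PREFETCH_DEPTH)
uploaded: list = []
producer = threading.Thread(target=stage_blocks, args=(range(20), staged))
consumer = threading.Thread(target=upload_blocks, args=(staged, uploaded))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

    The key point is `maxsize`: the producer is automatically throttled to the consumer's pace, so disk usage stays bounded no matter how much data is queued for duplication.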
     
    Carlo
     
    PS I know this is BETA and fully expect issues, so no issues in that regard.  Just hoping to give feedback along the way to help you guys deliver the best product possible as it has so much promise especially combined with DrivePool.
  3. Like
    Carlo got a reaction from Christopher (Drashna) in Speeding up network access to DrivePool   
    It's more resource intensive, but it gets the job done quicker during scanning, which is more important to me with a big library.  During scanning Plex gets done faster and doesn't put any hurt on my machines, so I'd never consider this a reason to switch.  I myself would say Emby is far more resource intensive overall, when it doesn't need to be, than Plex is.  Read on.
     
    As a "power user", I too prefer the user management style of Emby over Plex.  To the "average" system op this probably isn't as important, as plex.tv gets the job done (I prefer the user control myself).
     
    Not sure about your "quality" comment.  Quality of what, i.e. server, clients, etc.?  How do you quantify that?
     
    Without core work, Emby isn't going to be able to take advantage of GPU offloading in a way that makes a lot of difference across the board.  Plex could still serve up the videos to more clients even without using this offloading.  Take for example the following: in Plex land you can pre-transcode all your videos to MP4 using H.264 (4.0 comp) with an AAC audio track.  This will natively play back via Plex on every device, assuming it has the bandwidth.  So if you aren't bandwidth bound (clients or server) you can direct play to many clients via Plex just by having pre-transcoded/remuxed your media to this "common/universal" format.
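    A pre-transcode pass to that "universal" MP4/H.264/AAC target would typically be a per-file ffmpeg invocation; sketching the command construction in Python (the specific encoder flags and bitrate here are illustrative choices, not anything Plex mandates, beyond the H.264 level 4.0 / AAC target mentioned above):

```python
def transcode_cmd(src: str, dst: str) -> list[str]:
    """Build an ffmpeg command line that transcodes to
    H.264 (level 4.0) video + AAC audio in an MP4 container."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-profile:v", "high", "-level", "4.0",
        "-c:a", "aac", "-b:a", "192k",
        "-movflags", "+faststart",  # move moov atom up front for streaming
        dst,
    ]

cmd = transcode_cmd("input.mkv", "output.mp4")
```

    Run once over the whole library, every client that can direct-play H.264/AAC then skips the on-the-fly transcode entirely.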
     
    Now take this same media and play it back from Emby and it's hit or miss.  For example, if a user plays the same video back from the Chrome web browser that they just direct played via Plex, Emby will transcode it because it wants to transcode to WebM format, which isn't needed and also needlessly downgrades the quality.  So there is really no such thing as a "universal" format for Emby as there can be for Plex.  This limits the number of clients Emby can stream to compared to Plex with a well thought out library format.
     
    This ALSO hinders the use of GPU offloading.  What good does it do to have the ability to offload up to 2 QS streams if not all clients can use H.264?  The hardware offloading would be easier right now to incorporate into Plex than Emby.
     
    Plex has a much better thought out "engine" for transcoding than Emby does.  You can really see the differences when you know what to look for and what the common problems are.  For example, how each handles transcoding when you have subtitles turned on (depends on the device doing playback).  Voice/audio sync problems are much more of a problem with Emby than Plex, etc...  CPU use during transcoding, number of simultaneous transcodes, etc...
     
    Emby right now can't compete with Plex in this department.  This to me is the "heart" of a media streaming solution.  I've got family/friends who have iPads, Androids, PS4, ChromeCasts, Samsung TVs, Xbox, Rokus, web browsers.   IMHO, when you get right down to it, you NEED to have the proper clients and a "transcode" engine that is well thought out and fully functional.  Everything else is "eye candy".
     
    With that said, there are other things I truly love about Emby and wish Plex would do.  I have a "love/hate" relationship with both programs!
     
    Carlo
  4. Like
    Carlo got a reaction from gringott in The Largest Stablebit Drivepool In The World!!   
    I know you said for the C64.  My very first hard drive was the 5.25" full-height ST-506 5MB HDD.
    5MB of storage was a HUGE amount of space in 1980 and cost $1,500, which would be close to $5K in today's money.
     
    The ST-506 was an MFM drive, which was pre-IDE/SCSI, manufactured by Shugart Associates (which later became Seagate).  I also had the next-gen drive that came out late in 1981, another MFM drive but 10MB: the ST-412 (type 1).  After these drives they moved to RLL encoding, which packed 50% more into the same space.
     
    By 1982/83 I had around 160 MB of storage which was close to unheard of back in those days for the BBS I ran. 
     
    Guess I'm dating myself,
    Carlo
  5. Like
    Carlo got a reaction from gringott in The Largest Stablebit Drivepool In The World!!   
    I myself use a variety of drives from different manufacturers for my personal stuff.  I work in IT and work with large data (I mean LARGE data) spread out over numerous SANs with more than 10K drives.
     
    In my professional life I've seen a trend of higher failure rates among Seagate drives than any other manufacturer (percentage-wise), but not in the last 2 years (about even).   These are mostly all Enterprise drives and not the common home user or NAS drives that most of us would use routinely.  So I'm not sure how much this matters.
     
    I think a lot of failures over time by some companies/users come from choosing the wrong drive for the application at hand.  Drives are built differently.  For example, putting 40 drives in one case/cabinet is different than running one or two drives in a normal case.  Lots more vibration and movement.  This can also be even more of a factor for people who try to stuff lots of "home" drives in large cases and then use suspend/power-down features.  When the drives spin down/spin up they cause extra vibration that other running drives in the case won't like, especially when you have 5 to 10 drives spinning up at one time, etc...  Modern drives are much better with this type of thing, but it can take its toll over time.
     
    I'm not sure what value this post has except to say that if any one particular vendor had a much higher failure rate than other vendors they probably wouldn't still be in business.  The HDD marketplace is extremely competitive and any manufacturer putting out "junk" would surely be noticed in this day and age.
     
    Just pick the right disk for the job at hand based on the features you need and don't worry too much about who made it.  Of course, follow common-sense rules such as using the same drives if building a traditional RAID type thing, etc.
  6. Like
    Carlo reacted to Christopher (Drashna) in whs2011 to server2012 essentials   
    Yup. It's done at a volume level, and can be done with any sort of files. 
     
    And I have a bunch of VMs (Windows XP, WHSv1, and up, of each different architecture, as well as a "work" VM and a few Linux VMs).  I get roughly 65% savings.  In fact, I couldn't host all of my VMs on my 500GB SSD if I wasn't using deduplication. 
  7. Like
    Carlo got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    All of us who used to run BBS systems were basically pack rats who liked to have full collections of software and media from the times.  We could have just accessed this info from other BBSs but wanted it on our own systems.
     
    Not too different from today, really.  We could watch video on Netflix, Amazon Prime, etc... but we STILL prefer to be in control and manage our own collections.
     
    Guess we haven't changed much.  We're still media pack rats!!!
     
    We've gone from measuring storage in KB to MB, then GB, and now TB of data.
    Won't be long before we'll be talking PB of data.  Many of us already have 10%-15% of a PB.
  8. Like
    Carlo reacted to RFOneWatt in The Largest Stablebit Drivepool In The World!!   
    I'm sure I've seen your system and may even know you - I was a Boardwatch subscriber and ran one of the largest BBSs in the country.  I'm surely going to recognize the name of your BBS. 
     
    My story sounds similar to yours...
     
    In a nutshell: I started out running a multi-node system on an Amiga.  Once we hit 16 lines we couldn't get any more multi-port hardware, so I switched to Major BBS.  Maxed out at 88 POTS lines before switching over to multiple T1's and into the ISP category.   (I've left out some bits as you know.. haha)
     
    Major BBS?  Wildcat?  
     
    ~RF
  9. Like
    Carlo got a reaction from Christopher (Drashna) in whs2011 to server2012 essentials   
    NO, you use deduplication at the host level so it has nothing to do with the VMs themselves.
     
    Carlo
  10. Like
    Carlo got a reaction from Christopher (Drashna) in Thinking of purchasing..   
    psykix, you could just use a free product like SnapRaid to create parity across your present DrivePool drives.
    The parity volume can be local, on another computer, or on a NAS.  Doesn't really matter.
     
    Something to think about for a bit more peace of mind.
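    For reference, the SnapRaid setup suggested above boils down to a small config file; a minimal sketch on Windows might look like this (drive letters and paths are illustrative, check the SnapRaid manual for your layout):

```
# Parity lives on a dedicated volume (local, or a mounted NAS share)
parity E:\snapraid.parity

# Content files record the array state; keep copies on more than one disk
content C:\snapraid\snapraid.content
content D:\snapraid.content

# Data disks already in the DrivePool
disk d1 D:\
disk d2 F:\
```

    Then `snapraid sync` computes the parity and `snapraid fix` can rebuild a failed disk, all without touching how DrivePool pools the drives.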
  11. Like
    Carlo got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    I myself use a variety of drives from different manufacturers for my personal stuff.  I work in IT and work with large data (I mean LARGE data) spread out over numerous SANs with more than 10K drives.
     
    In my professional life I've seen a trend of higher failure rates among Seagate drives than any other manufacturer (percentage-wise), but not in the last 2 years (about even).   These are mostly all Enterprise drives and not the common home user or NAS drives that most of us would use routinely.  So I'm not sure how much this matters.
     
    I think a lot of failures over time by some companies/users come from choosing the wrong drive for the application at hand.  Drives are built differently.  For example, putting 40 drives in one case/cabinet is different than running one or two drives in a normal case.  Lots more vibration and movement.  This can also be even more of a factor for people who try to stuff lots of "home" drives in large cases and then use suspend/power-down features.  When the drives spin down/spin up they cause extra vibration that other running drives in the case won't like, especially when you have 5 to 10 drives spinning up at one time, etc...  Modern drives are much better with this type of thing, but it can take its toll over time.
     
    I'm not sure what value this post has except to say that if any one particular vendor had a much higher failure rate than other vendors they probably wouldn't still be in business.  The HDD marketplace is extremely competitive and any manufacturer putting out "junk" would surely be noticed in this day and age.
     
    Just pick the right disk for the job at hand based on the features you need and don't worry too much about who made it.  Of course, follow common-sense rules such as using the same drives if building a traditional RAID type thing, etc.
  12. Like
    Carlo got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    That is not a shabby amount of storage by any means.  Guarantee you have more storage than 99.99% of your local friends!
     
    You can store a HUGE amount of media on that storage pool for sure. Basically your own personal version of Netflix.
  13. Like
    Carlo got a reaction from Christopher (Drashna) in The Largest Stablebit Drivepool In The World!!   
    Depends for me on which way I want to go.
    For 8 bay http://www.newegg.com/Product/Product.aspx?Item=N82E16817707367
    For 15 bay http://www.istarusa.com/raidage/products.php?model=DAGE415U20-PM#.VYI2zflVhBd
     
    The StarTech eSATA 8-Bay Hot-Swap SATA III Hard Drive Enclosure with UASP works quite well and is pretty cheap at around $300 plus shipping.  Connect it up via eSATA, USB3 or UASP and it works quite well with DrivePool.  I picked one up a couple of weeks ago, put 8 4TB drives in it, and haven't had a single issue with it.  I first tested eSATA, then UASP and then normal USB3.  I left it connected via USB3 and haven't had a single issue.
     
    It's not quite as fast as the 15 bay could be since you have 8 drives on a channel vs 5, but for normal (non benchmark use) and Plex use it works just fine.  I stream to anywhere from 6 to 10 people in the evening and it handles that load without breaking a sweat.   What I like is that you can take this box and plug it into just about any other computer/notebook or router and be able to access your data.  One of these boxes and a higher end home router like an ASUS gives you a nice NAS box.
     
    The reason I'm leaning toward it is that it's very quiet and keeps the drives cool.  It's not a rackmount unit but I don't really care about that for a home unit, since it's versatile and works well via UASP/USB3 so I don't need to worry about SATA/eSATA ports.  I still have a few SATA ports available (8 I think) but I'm reserving them for internal drives.
     
    I could go SAS also but honestly just don't see the need for a media server when a simple box like this can easily add 8 drives at a shot.  Throw 6TB WD Reds in it for a reasonably large amount of data at a decent price.
     
    Carlo

     
    I've got a 24-core (48 with hyperthreading) SuperMicro server with a few 1 TB drives in it running over 75 virtual machines.  They are all 2012 R2 installs.  The machine flies.
     
    Not only does the de-duplication help with storage, but it also helps the machines run faster as it helps with caching/memory and other important items (not trying to be technical).
     
    Carlo