
Carlo

Members
  • Posts: 32
  • Joined
  • Last visited
  • Days Won: 7

Everything posted by Carlo

  1. Carlo

    Windows 10 support etc

    You're not screwed at all. But MS is being smart and they are not "touching" domain computers (and they shouldn't!!!). All you have to do is log in to the local administrator account to initiate the upgrade. http://answers.microsoft.com/en-us/windows/forum/windows_10-win_upgrade/domain-joined-windows-781-pro-upgrades-to-windows/af5d843c-f39c-4196-9ec9-33ea917cea21?auth=1 Carlo
  2. It's more resource intensive, but it gets the job done quicker during scanning, which is more important to me with a big library. During scanning Plex gets done faster and doesn't put any hurt on my machines, so I'd never consider this a reason to switch. I'd say Emby is far more resource intensive overall, when it doesn't need to be, than Plex is. Read on.
As a "power user", I too prefer the user management style of Emby over Plex. To the "average" system op this probably isn't as important, as plex.tv gets the job done (I prefer the user control myself). Not sure about your "quality" comment. Quality of what? i.e. server, clients, etc.? How do you quantify that?
Without core work, Emby isn't going to be able to take advantage of GPU offloading in a way that makes a lot of difference across the board. Plex could still serve up the videos to more clients even without using this offloading. Take for example the following: in Plex land you can pre-transcode all your videos to MP4 using H.264 (4.0 compatibility) with an AAC audio track. This will natively play back via Plex on every device, assuming it has the bandwidth. So if you aren't bandwidth bound (clients or server) you can direct play to many clients via Plex just by having pre-transcoded/remuxed your media to this "common/universal" format.
Now take this same media and play it back from Emby and it's hit or miss. For example, if a user plays the same video in the Chrome web browser that they just direct played via Plex, it will transcode via Emby, as Emby will want to transcode to WebM format, which isn't needed. It also needlessly downgrades the quality. So there is really no such thing as a "universal" format for Emby as there can be for Plex. This limits the number of clients Emby can stream to compared to Plex with a well-thought-out library format. This ALSO hinders the use of GPU offloading. What good does it do to have the ability to offload up to 2 QS streams if not all clients can use H.264?
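The pre-transcode idea above can be sketched as a small command builder. This is a minimal sketch, assuming ffmpeg is the tool used for the job; the exact encoder flags here are my own illustrative choices, not Plex's internal settings.

```python
# Sketch of the "universal format" pre-transcode described above:
# H.264 (level 4.0) video plus an AAC track in an MP4 container,
# which most clients can direct play. Flags are illustrative assumptions.

def universal_transcode_cmd(src, dst):
    """Build an ffmpeg command that transcodes a file into the
    widely direct-playable MP4/H.264/AAC combination."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264", "-profile:v", "high", "-level", "4.0",
        "-c:a", "aac", "-b:a", "192k",
        "-movflags", "+faststart",  # move the moov atom up front for streaming
        dst,
    ]

cmd = universal_transcode_cmd("movie.mkv", "movie.mp4")
print(" ".join(cmd))
```

Run once per file when building the library and the server rarely has to transcode on the fly afterwards.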
The hardware offloading would be easier right now to incorporate into Plex than Emby. Plex has a much better thought-out "engine" for transcoding than Emby does. You can really see the differences when you know what to look for and what the common problems are. For example, how each handles transcoding when you have subtitles turned on (depends on the device doing playback). Voice/audio sync problems are much more of a problem with Emby than Plex, etc... CPU use during transcoding, number of simultaneous transcodes, etc... Emby right now can't compete with Plex in this department. This to me is the "heart" of a media streaming solution. I've got family/friends who have iPads, Androids, PS4, ChromeCasts, Samsung TVs, Xbox, Rokus, web browsers. IMHO, when you get right down to it, you NEED to have the proper clients and a "transcode" engine that is well thought out and fully functional. Everything else is "eye candy". With that said, there are other things I truly love about Emby and wish Plex would do. I have a "love/hate" relationship with both programs! Carlo
  3. Or just use FileBot to rename your movies and TV Shows before adding them to Plex.
  4. Hello lee1978, I didn't notice you said DTS "HD". Thought you just said "DTS". "HD" is harder to get working as Plex doesn't recognize "HD" vs "DTS". All the metadata just says "DTS". However, passthrough should still do the trick. You will definitely want to search through the forums located here: https://forums.plex.tv/ At present they are down due to a hacker. I believe you will need a PlexPass to search for this, as the good info on DTS-HD will be in the PlexPass forums. As for the Xbox One: Plex has submitted a new version (the one you will want) to MS for approval, but there is an issue they are fixing before resubmitting. This version will help with DTS passthrough as well as provide native MKV support and do away with the 20mbit limit. Currently MS has a bug that affects DTS with 3rd party apps, which comes into play also. The MP4 container (as well as Plex) can support DTS. If you look at the original spec of MP4 you'll see it wasn't supported, but revisions years ago have allowed it. Tools such as HandBrake and ffmpeg will easily put DTS in MP4s. I have them in my files. Carlo
  5. DTS passthrough can be done but it's a bit tricky to set up properly. Wait until the Plex forums are back up and then search for DTS. Plex won't touch image files. It prefers files ready to stream. Remember, Plex is a streaming server, so it wants files in a streamable format, not an image format. Ideally MP4 files are the best all-around files that work with just about every client without having to transcode (bandwidth aside). I run one of the largest and most advanced Plex systems around, so hit me up if you have any questions. Carlo
  6. No, I've never stopped the pool when moving files in or out of the pool folder. I just run a re-measure when I'm done shuffling files around.
  7. What I've done is: 1) Create a folder "MasterPool" or similar name. 2) Move the contents of all folders from the hidden pool... folder to the "MasterPool" folder. 3) Remove/delete the pool... folder. 4) Add the drive to the new computer and add it to the pool. 5) Copy the contents from "MasterPool" to the newly created hidden pool... folder. 6) Re-measure the pool. So basically: just move the contents out of the pool, delete the pool folder, then add the drive to the new computer and recreate the new pool directory. Then just move the folders back and re-measure the pool. Simple.
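The first three steps above can be sketched in a few lines. This is a minimal sketch on a throwaway directory; the hidden folder name "PoolPart.1234" is a placeholder assumption (check the actual hidden folder name on your own drive before doing anything like this for real).

```python
# Sketch of steps 1-3: move everything out of the hidden pool folder
# into "MasterPool", then remove the emptied pool folder.
# "PoolPart.1234" is a hypothetical name for the hidden folder.
import os
import shutil
import tempfile

def migrate_pool_contents(drive_root, poolpart_name, master_name="MasterPool"):
    """Move the hidden pool folder's contents into MasterPool and delete it."""
    poolpart = os.path.join(drive_root, poolpart_name)
    master = os.path.join(drive_root, master_name)
    os.makedirs(master, exist_ok=True)
    for entry in os.listdir(poolpart):
        shutil.move(os.path.join(poolpart, entry), os.path.join(master, entry))
    os.rmdir(poolpart)  # only succeeds if the folder is now truly empty
    return master

# demo against a temp directory standing in for the drive root
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "PoolPart.1234", "Movies"))
master = migrate_pool_contents(root, "PoolPart.1234")
print(os.listdir(master))
```

Steps 4-6 are the reverse: recreate the pool on the new machine, move the folders back into the new hidden folder, and re-measure.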
  8. I'm not sure why this happens, but it's happened to me 2 times in the last 2 or 3 weeks on different 4TB drives. I have 8 drives in an enclosure connected via USB3 cables. In my case I'm running DrivePool, Scanner and CloudDrive. What has happened to me is that the UI freezes up (VNC, RDP or console). I can fix this most of the time by doing a logoff /server:name from a different computer and then logging back in. Once in a while, I'll have to do a shutdown command from another computer. In each case of having a RAW disk issue I've noticed the CloudDrive service won't shut down, and after 20 to 30 minutes I'm forced to hard boot the computer. For me, there is no activity that I'm aware of going on with any of the drives in the 8-bay USB enclosure. I have no balancing running and my CloudDrive cache is set up on a different drive connected via SATA. I hadn't had this problem prior to installing Scanner, DrivePool and CloudDrive, and I'm about to start uninstalling the suite, starting with CloudDrive, to see if the problems (freezes and hang-ups rebooting) go away. I'm presently doing a restore of a drive using SnapRaid. I've previously run JBOD and Windows Storage Spaces and never experienced either of these issues. Carlo PS It's not unheard of for USB disks to go RAW, but this will usually only happen when the drive is mid-write to the partition table, the write doesn't finish, and you have a power failure or something. I'm seeing more and more of this happening with DrivePool and it seems excessive to me.
  9. All of us who used to run BBS systems were basically pack rats who liked to have full collections of software and media from the times. We could have just accessed this info from other BBSs but wanted it on our own systems. Not too different, really, than today. We could watch video on Netflix, Amazon Prime, etc., but we STILL prefer to be in control and manage our own collections. Guess we haven't changed much. We're still media pack rats!!! We've gone from measuring storage in KB to MB, then GB and now in TB of data. Won't be long before we'll be talking PB of data. Many of us already have 10%-15% of a PB.
  10. CompuNet was my system. It was my own custom stuff until version 2 of MajorBBS came out in '86. I ran and programmed for Galacticomm's MajorBBS (then Worldgroup). I knew Tim Stryker pretty well up until he committed suicide. I contributed a lot of foundation code and what later became Phar Lap protected mode, which extended memory for the DOS systems, allowing us to run more ISV modules. I was also one of the "hubs" many people used for linked teleconferences among the many, many interactive games (had just about all of them). Worldgroup more or less went down the tubes after Christine (Tim's wife) sold everything to Yannick Tessier (previously an ISV) & Peter Berg. HTTP and the web came on pretty strong, WG just never adapted properly, and the rest is history! At that point I was mostly operating as an ISP and started using more and more of Vircom's tools and then various other tools for proper email, FTP, web hosting, etc... Sound like a similar walk down history's path?
  11. There was a magazine called Boardwatch that featured my system a few times. In that same time frame I had around 30 phone/modem lines. A couple years later I had 255 lines. Eventually I replaced many modems with ISDN, then replaced many of those with X.25 over the years. Eventually I added IP connectivity, allowing "dial-in" via IP. This was back before even Trumpet Winsock (for those who remember those days). IP was used for telnet, FTP and gopher back in the early days. It was mostly all academic use back then. Of course storage grew as the years went by. Man, just thinking back brings back a lot of old memories long forgotten! Carlo
  12. I know you said for the C64. My very first hard drive was the 5.25" full-height ST-506 5MB HDD. 5MB of storage in 1980 was a HUGE amount of space and cost $1,500. That would be close to $5K in today's money. The ST-506 was an MFM drive, which was pre IDE/SCSI, and was manufactured by Shugart Associates (which later became Seagate). I also had the next-gen drive that came out late in 1981, another MFM drive but 10MB: the ST-412 (type 1). After these drives they moved to RLL encoding, which packed 50% more data in the same space. By 1982/83 I had around 160 MB of storage, which was close to unheard of back in those days for the BBS I ran. Guess I'm dating myself, Carlo
  13. NO, you use deduplication at the host level so it has nothing to do with the VMs themselves. Carlo
  14. psykix, you could just use a free product like SnapRaid to create parity across your present DrivePool drives. The volume you write the parity to can be local, on another computer or a NAS. Doesn't really matter. Something to think about for a bit more peace of mind.
  15. Yes, but that requires you to use a duplication pass and not real-time duplications. I was referring to being able to keep real-time duplication active but to have the option of only using the one SSD drive.
  16. I myself use a variety of drives from different manufacturers for my personal stuff. I work in IT and work with large data (I mean LARGE data) spread out over numerous SANs with more than 10K drives. In my professional life I've seen a trend of higher failure rates among Seagate drives than any other manufacturer (percentage wise), but not in the last 2 years (about even). These are mostly all Enterprise drives and not the common home user or NAS drives that most of us would use routinely, so I'm not sure how much this matters. I think a lot of failures over time by some companies/users come from choosing the wrong drive for the application at hand. Drives are built differently. For example, putting 40 drives in one case/cabinet is different than running one or two drives in a normal case. Lots more vibration and movement. This can be even more of a factor for people who try and stuff lots of "home" drives in large cases and then use suspend/power-down features. When the drives spin down/spin up they cause extra vibration that the other running drives in the case won't like, especially when you have 5 to 10 drives spinning up at one time, etc... Modern drives are much better with this type of thing, but it can take its toll over time. I'm not sure what value this post has except to say that if any one particular vendor had a much higher failure rate than other vendors they probably wouldn't still be in business. The HDD marketplace is extremely competitive and any manufacturer putting out "junk" would surely be noticed in this day and age. Just pick the right disk for the job at hand based on the features you need and don't worry too much about who made it. Of course, follow common sense rules such as using the same drives if building a traditional RAID type thing, etc.
  17. I'm confused by this requirement. Why the need for 2 SSDs? Isn't the SSD just a "placeholder" that gets used to hold the data just long enough until the system can write it out to the HDDs? Surely one SSD could be used to write the data out to two different HDDs. I'm sure we all get the fact that if two SSDs are used then the file starts out "duplicated" from the start, BUT I think most of us could live with the fact that the "temp holding" SSD isn't duplicated until the data is written out to the first HDD. Or put another way, I'm sure many of us could live with the fact that the data isn't duplicated until the SSD has written to at least the first HDD. Could this maybe become an OPTION in a future update? Something like: 1) Allow use of 1 SSD drive for write caching (data isn't duplicated until the first write-out occurs). 2) Use of two SSD drives for write caching (data is duplicated from the start). Something like that?
  18. You can adjust many things in Hyper-V also if you use version 2 machine images.
  19. I'm in the USA so I know nothing of AUS pricing. Differences in pricing could surely weigh in BIG TIME, of course. Have you compared pricing on 6TB WD Red drives? Yes, I'm sure. They are not general purpose drives but are meant for "cold storage" or archiving, where the data is basically written once and not modified often. You can surely add/modify data any time you want, but you can take a big write penalty in doing so. Read #2 here: http://www.extremetech.com/extreme/207868-hgst-launches-new-10tb-helium-drives-for-enterprise-cold-storage to get a better idea of how SMR drives work. Here are a few of the highlights from that article: The disadvantage is that rewriting tracks will damage the data on subsequent tracks. The only way to prevent this is to read the entire data block, modify the necessary section, and then re-write the rest of the track. This can lead to huge write amplification — if a 4KB update needs to be performed to a 64MB area of track, then the entire 64MB has to be read into RAM and laid back down with the modified 4K section. It's important to avoid random reads and writes as much as possible when using SMR, however, which is why HGST is marketing these drives as "cool" to "cold" storage — meaning data that's written very few times and accessed only on occasion. If the drive is being regularly written and re-written, the performance penalties will quickly become severe. The HGST drives also require extensive modification and drive software in order to operate properly. Hitachi has an open-source project, libzbc, that can be integrated into Linux to implement support for its new Ha10 drives. Absent such support, the drives can't function — these aren't your typical plug-and-play hard drives. So in a nutshell, these drives use SMR to pack more data onto the platters than conventional drives do. These drives work similarly to SSDs in that they try to use all the free space first before going back and re-writing existing data.
Once you have to start re-writing data it can't just change a sector but has to re-write the whole section, which can cause delays. Also of note, and most people miss this info: these SMR drives aren't replacements for conventional HDDs. The operating system has to know how to use them. This will be less of an issue as time goes on, but for now you couldn't just take these drives and drop them into any old NAS, or put them in a USB enclosure and plug them into your router's USB port, for example, and expect it to work. Due to the way they work they aren't suitable for random data or for RAID use or typical NAS use, which should be obvious by now. BTW, if the workload is 180TB a year and the drives are 8TB then that's 22.5 rewrites a year, which is nothing! Now with all that said, these drives could be very good to use if done properly. For example, don't use balancing in DrivePool with these drives. I wouldn't even let DrivePool write to the drives (ideally). In the most IDEAL circumstance you would treat these drives as a write-once, read-many-times READ-ONLY device, not unlike DVD or Blu-ray discs. So for example you use conventional disks to build up your data until you have 8TB of media ready to "archive", then transfer this info to the Archive drive. You can then add this drive to your pool. If you were to do this then the Archive drives are a good solution for storage at a great price. Again, read up on them a bit to understand the hows and whys of SMR and you'll get a better feel for the intended use of them. My example above was over-the-top a bit, but I was trying to make a point of the intended/ideal use of these drives. That is not to say they have to be used this way! Carlo
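The two numbers quoted above are easy to sanity-check. A quick back-of-the-envelope, using the article's 64MB zone / 4KB update example and the 180TB-per-year workload rating:

```python
# Worst-case SMR write amplification from the article's example:
# a 4KB update forces the whole 64MB zone to be read and re-written.
zone_bytes = 64 * 1024 * 1024   # 64 MB zone
update_bytes = 4 * 1024         # 4 KB logical update
amplification = zone_bytes // update_bytes
print(amplification)            # physical bytes written per logical byte

# Rated workload vs. capacity: full-drive rewrites per year.
workload_tb_per_year = 180
drive_tb = 8
rewrites = workload_tb_per_year / drive_tb
print(rewrites)                 # full rewrites of the drive per year
```

So a worst-case 16384x amplification on small in-place updates, but only about 22.5 full-drive rewrites a year at the rated workload, which supports the "write once, read many" usage described above.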
  20. That is not a shabby amount of storage by any means. Guarantee you have more storage than 99.99% of your local friends! You can store a HUGE amount of media on that storage pool for sure. Basically your own personal version of Netflix.
  21. I hear what you're saying, Chris, but don't fully agree. I do understand that you need some "working space" to store a few blocks to have ready for the thread(s) doing the uploading. However, from a high level view it appears a background process is creating all these blocks, getting them ready for the upload, and just does its thing without communicating with the uploading thread(s). It appears this process has no concept of how far ahead of the upload it is and just continues until it has created all the blocks needed for the duplicate section or it runs out of space (whichever comes first). In reality it only needs to stay X blocks ahead of the upload threads. Whether you have 5 blocks ready or 5,000 makes no difference, as you can only upload so fast. Anything more than this is "wasted". This shouldn't be hard to calculate since you know at any time the max upload threads available. But with all that said, I just want to clarify I'm not against a reasonable working storage area. But it doesn't need to be 4TB, which will take a few days at best to upload. No way, shape or form should it need to be that far ahead. Carlo PS I know this is BETA and fully expect issues, so no issues in that regard. Just hoping to give feedback along the way to help you guys deliver the best product possible, as it has so much promise, especially combined with DrivePool.
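The "stay X blocks ahead" idea amounts to a bounded producer/consumer. This is a toy sketch of the suggestion, not CloudDrive's actual internals: a bounded queue makes the block-builder stall once it is max_ahead blocks in front of the uploader, instead of pre-building everything up front.

```python
# Toy model of the proposal: the builder (producer) can only get
# max_ahead blocks in front of the uploader (consumer), so the
# working area never grows past max_ahead blocks on disk.
import queue
import threading

def run(total_blocks, max_ahead, uploaded):
    q = queue.Queue(maxsize=max_ahead)   # builder blocks when the queue is full

    def builder():
        for i in range(total_blocks):
            q.put(f"block-{i}")          # waits here if we're too far ahead
        q.put(None)                      # sentinel: no more blocks

    def uploader():
        while (blk := q.get()) is not None:
            uploaded.append(blk)         # stand-in for the real upload call

    t1 = threading.Thread(target=builder)
    t2 = threading.Thread(target=uploader)
    t1.start(); t2.start(); t1.join(); t2.join()

run(100, 5, out := [])
print(len(out))  # all 100 blocks uploaded, never more than 5 staged at once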
  22. For home use I'm not sold on SMR. For those not familiar with it, take a read here: http://www.extremetech.com/extreme/207868-hgst-launches-new-10tb-helium-drives-for-enterprise-cold-storage Forget the brand, just read it to understand what SMR is and when it causes write problems. I have a slight personal bias and don't care for Seagate too much. For about the same money I'd rather buy an HGST or WD drive. Don't know how much faith you put in this, but have a read here: http://www.extremetech.com/computing/198154-2014-hard-drive-failure-rates-point-to-clear-winners-and-losers-but-is-the-data-good For me it comes down to cost per TB and how much more I'm willing to spend to get the technology I'd prefer for flexibility. As an example, these prices are from Newegg (rounded up to the nearest dollar): $280 Seagate Archive HDD v2 ST8000AS0002 8TB 5900 RPM; $250 WD Red WD60EFRX 6TB IntelliPower 64MB Cache SATA 6.0Gb/s 3.5" NAS Hard Drive. That's $35 per TB for the Seagate vs $41.67 per TB for the WD Red NAS drive. So the WD Red NAS drives will cost $6.67 more per TB, or 19% more per TB. The Seagate Archive drives hold 25% more data per drive, but that's the only advantage. The Archive drives aren't going to be compatible in the same way as conventional drives. They really aren't going to be suitable for anything other than JBOD type use. Forget using them in any type of RAID or similar situation. You also don't want to use the Archive drives for storing info that changes often, as they aren't designed for that. So while I certainly don't want to persuade anyone not to try at least one of them, just make sure you understand the technology first and know how to properly use them in your setup. For me personally, at this point in time, I give them a pass and will continue to order standard NAS type drives like the WD Reds. For a few bucks more per TB of space I have drives I can repurpose for just about any other use, including standard RAID, without worry, issue or compatibility problems. Carlo
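The cost-per-TB comparison above is just arithmetic on the two Newegg prices quoted:

```python
# Cost per TB for the two drives quoted above.
seagate_price, seagate_tb = 280, 8   # Seagate Archive ST8000AS0002
wd_price, wd_tb = 250, 6             # WD Red WD60EFRX

seagate_per_tb = seagate_price / seagate_tb           # $/TB for the Archive
wd_per_tb = wd_price / wd_tb                          # $/TB for the Red
premium = (wd_per_tb - seagate_per_tb) / seagate_per_tb
print(round(seagate_per_tb, 2), round(wd_per_tb, 2), f"{premium:.0%}")
```

Which gives $35.00/TB vs $41.67/TB, a roughly 19% premium for the conventional NAS drive.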
  23. Depends for me on which way I want to go. For 8 bay: http://www.newegg.com/Product/Product.aspx?Item=N82E16817707367 For 15 bay: http://www.istarusa.com/raidage/products.php?model=DAGE415U20-PM#.VYI2zflVhBd The StarTech eSATA 8-Bay Hot-Swap SATA III Hard Drive Enclosure with UASP works quite well and is pretty cheap at around $300 plus shipping. Connect it up via eSATA, USB3 or UASP and it works quite well with DrivePool. I picked one up a couple of weeks ago, put eight 4TB drives in it and haven't had a single issue with it. I first tested eSATA, then UASP and then normal USB3. I left it connected via USB3 and haven't had a single issue. It's not quite as fast as the 15 bay could be, since you have 8 drives on a channel vs 5, but for normal (non-benchmark) use and Plex use it works just fine. I stream to anywhere from 6 to 10 people in the evening and it handles that load without breaking a sweat. What I like is that you can take this box and plug it into just about any other computer/notebook or router and be able to access your data. One of these boxes and a higher end home router like an ASUS gives you a nice NAS box. The reason I'm leaning toward it is that it's very quiet and keeps the drives cool. It's not a rackmount unit, but I don't really care about that for a home unit since it's versatile and works well via UASP/USB3, so I don't need to worry about SATA/eSATA ports. I still have a few SATA ports available (8 I think) but I'm reserving them for internal drives. I could go SAS also, but I really just don't see the need for a media server when a simple box like this can easily add 8 drives at a shot. Throw 6TB WD Reds in it for a reasonably large amount of data at a decent price. Carlo I've got a 24 core (48 with hyperthreading) SuperMicro server with a few 1 TB drives in it running over 75 virtual machines. They are all 2012 R2 installs. Machine flies.
Not only does the de-duplication help with storage but it also helps the machines run faster, as it helps with caching/memory and other important items (not trying to be technical). Carlo
  24. I've got a few different setups (NASes, Storage Pools, a conventional JBOD server and a couple of DrivePools). My largest DrivePool is currently configured at 88.2TB usable, and I've got a couple of 6TB drives not in that figure used for parity (SnapRaid). This pool is strictly for media, mostly used by Plex. I've got 4 Windows 2012 R2 servers and two are currently dedicated to multimedia. 2 are more generic servers and hold a lot of VHDX images using Windows de-duplication. Then I've got a few smaller NAS boxes used to hold typical family stuff. But getting back to DrivePool: I'll be increasing the storage space of the 88.2TB pool in about a month (guessing) when I add the next 8 to 15 bay enclosure for more movies and especially TV shows. Currently stored on that pool: 160 - 3D Movies; 6,200 - Movies; 1,150 - Educational Videos; 18,700 - Music Videos; 850 - NFL Games; 10,100 - TV Episodes (132 Shows, 613 Seasons); Music: 4,400 Artists, 12,900 Albums, 105,000 Tracks. Carlo
  25. No, can't do this with CloudDrive. For that type of functionality you will probably want to check out NetDrive.