VESA Releases DisplayPort 1.4 Standard: DisplayPort Adds Compression & HDR
by Ryan Smith on March 2, 2016 10:00 AM EST
One of the more interesting aspects of the DisplayPort standards is how the VESA has the separate but strongly intertwined DisplayPort and Embedded DisplayPort standards. As a result of the standard development process, we see a bit of ping-ponging between the two standards on features. New features get adopted by one sub-standard or the other first, and then after a bit of time show up in the next iteration of the other standard. What would become DisplayPort Adaptive Sync, for example, first started out in Embedded DisplayPort, while the newest bandwidth mode, HBR3, started out on DisplayPort.
Following last year's update to the Embedded DisplayPort standard with eDP 1.4a, being announced this week is the next iteration of the DisplayPort standard, bringing it to 1.4. And like the examples above, this is another case where some features are making their way back from eDP to the mainline DP standard, while at the same time new features are coming to the DisplayPort family for the first time. To that end, DP 1.4 is a mix of both old and new, while also serving as an interesting case study in how the two DisplayPort standards differ and why that separation is necessary.
First off then, despite the updated version number and unlike previous DisplayPort “point updates,” the latest update does not change the physical layer for DisplayPort. HBR3, introduced with DisplayPort 1.3, remains the newest and fastest bandwidth standard for DisplayPort.
Instead what has changed for DisplayPort 1.4 is the DisplayPort feature set, and in a major way. Surprisingly absent from DisplayPort 1.3 was support for the VESA’s Display Stream Compression standard, which uses lossy (“visually lossless”) encoding to cut down on bandwidth needs, allowing for display setups with fewer lanes or at higher resolutions – such as 8K – whose uncompressed streams can’t be carried within the bandwidth limitations of DisplayPort. Rather, the first VESA standard to include DSC was last year’s Embedded DisplayPort 1.4a, and now a year later, DisplayPort is finally adding DSC support with the 1.4 standard.
As we’ve since found out, there are a couple of good reasons why we haven’t seen DSC in the mainline DisplayPort standard until now, and with 1.4 the VESA has finally addressed those issues so that DSC can be included in the standard. Of particular interest here is support for Forward Error Correction (FEC), which the VESA considers necessary for DSC on external monitors.
From a signal integrity standpoint, displays are the highest bandwidth external interface on a typical PC, and we’ve known for some time that the VESA has been pushing the envelope on external signaling. This is part of the reason vendors are coalescing around USB Type-C, as it’s easier for everyone to back a single well-developed solution. In the case of HBR3, this means pushing 32.4Gbps over a 4 lane connection, which is easy over a short run inside a laptop measured in centimeters, but a greater challenge over DisplayPort cables extending up to 2 meters. Practically speaking, while a solid DP1.3/HBR3 setup shouldn’t see any errors to begin with, the real world error rate – though quite low – is still higher than would be ideal.
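To put those numbers in perspective, here is a quick back-of-the-envelope sketch. The link rates and the 8b/10b encoding overhead are standard HBR3 figures; the bit error rate used at the end is an arbitrary illustrative value, not a number taken from the DisplayPort specification.

```python
# HBR3 link budget, plus an illustration of why even a tiny bit error
# rate is not negligible at these speeds. The BER below is an arbitrary
# illustrative value, not a figure from the DisplayPort spec.

LANES = 4
HBR3_GBPS_PER_LANE = 8.1                   # raw line rate per lane
raw_gbps = LANES * HBR3_GBPS_PER_LANE       # 32.4 Gbps on the cable
payload_gbps = raw_gbps * 0.8               # 8b/10b encoding -> ~25.92 Gbps usable

assumed_ber = 1e-12                         # hypothetical bit error rate
bits_per_hour = raw_gbps * 1e9 * 3600
errors_per_hour = bits_per_hour * assumed_ber

print(f"Raw link rate:  {raw_gbps:.1f} Gbps")
print(f"Usable payload: {payload_gbps:.2f} Gbps after 8b/10b encoding")
print(f"At a BER of {assumed_ber:g}, that's roughly {errors_per_hour:.0f} bit errors per hour")
```

Even at that very low assumed error rate, a link running flat out accumulates on the order of a hundred bit errors per hour – harmless for uncompressed video, but not for compressed data, as discussed below.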
For uncompressed images this isn’t an issue; any corruption is limited to a handful of pixels and quickly corrected in the next refresh. However, once DSC is brought into the fold, errors become a much larger problem. An error in a compressed data chunk will cause decoding to fail or make the decoded result very wrong over a large number of pixels, making the error far more noticeable. Consequently DSC requires a high level of reliability, which eDP with its short runs could provide, while DP’s longer runs could not.
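The fragility of compressed data is easy to demonstrate with a small sketch. This uses zlib purely as a stand-in for a generic compressed stream – DSC itself works very differently – but the principle is the same: one flipped bit in compressed data has consequences far beyond one pixel.

```python
import zlib

# A frame's worth of fairly repetitive "pixel" data compresses well...
frame = bytes(range(256)) * 4096            # ~1 MiB of structured data
compressed = zlib.compress(frame)

# ...but flip a single bit in the compressed stream and the decoder
# either errors out or produces garbage from that point onward.
corrupted = bytearray(compressed)
corrupted[len(corrupted) // 2] ^= 0x01       # one flipped bit

try:
    zlib.decompress(bytes(corrupted))
    print("decoded despite the corruption (unlikely, but possible)")
except zlib.error as exc:
    print(f"decode failed outright: {exc}")

# By contrast, flipping one byte of the *uncompressed* frame corrupts
# exactly one byte -- the analogue of a single bad pixel on screen.
```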
The end result is that the combination of DP 1.4 and the recently released DSC 1.2 specification includes Forward Error Correction for DSC. Although FEC increases bandwidth requirements slightly, the additional, redundant data it carries allows errors to be detected and corrected, making DSC suitably reliable over DisplayPort connections. This is the key change to DSC and DisplayPort that finally allows DSC to be deployed to external monitors.
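The principle behind FEC can be shown with a toy example – a Hamming(7,4) code here, purely for illustration; the FEC scheme actually defined for DSC over DisplayPort is a different, far more capable code. Every 4 data bits are carried as 7 bits on the wire, and any single flipped bit within that group can be located and corrected at the receiving end.

```python
# Toy forward error correction: Hamming(7,4). For every 4 data bits we
# send 7 bits; any single bit error among those 7 can be corrected by
# the receiver. Illustration only -- not the FEC code DisplayPort uses.

def encode(d):                      # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # parity bits cover overlapping subsets
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):                      # c = 7 received bits, possibly corrupted
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4          # recompute each parity check
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    syndrome = s1 + 2 * s2 + 4 * s3 # 1-based position of the bad bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1        # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
sent = encode(data)
received = sent[:]
received[3] ^= 1                    # the channel flips one bit in transit
assert decode(received) == data     # receiver still recovers the original 4 bits
print("corrected:", decode(received) == data)
```

The cost is visible in the ratio: 7 bits on the wire for every 4 bits of payload in this toy code. Practical FEC codes add far less overhead than that, which is why the bandwidth penalty for DP 1.4's FEC is only slight.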
Meanwhile, as DP 1.4 is also the first DisplayPort standard to incorporate DSC 1.2, it becomes the first standard to gain DSC 1.2’s other benefits as well. Along with the aforementioned error resiliency, DSC 1.2 introduces some new functionality specifically for HDR displays. The compression standard now supports 4:2:0 and 4:2:2 chroma subsampled formats, and adds 14-bit and 16-bit per channel color to the existing 8/10/12-bpc bit depths. In this case the VESA has their eye on HDR displays beyond 4K: while DP 1.3/1.4 offers enough uncompressed bandwidth for HDR at 4K, that is where it tops out.
Display Bandwidth Requirements (RGB/4:4:4 Chroma)
Resolution | Minimum DisplayPort Version
1920x1080@60Hz, 8bpc SDR | 1.1
3840x2160@60Hz, 8bpc SDR | 1.2
3840x2160@60Hz, 10bpc HDR | 1.3
5120x2880@60Hz, 8bpc SDR | 1.3
5120x2880@60Hz, 10bpc HDR | 1.4 w/DSC
7680x4320@60Hz, 8bpc SDR | 1.4 w/DSC
7680x4320@60Hz, 10bpc HDR | 1.4 w/DSC
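For the rows flagged "1.4 w/DSC", a quick sketch of the arithmetic shows why compression becomes unavoidable. The figures below count active pixels only (blanking and protocol overhead push the real requirements somewhat higher), and the 3:1 ratio is simply a commonly cited nominal DSC target rather than a fixed property of the standard.

```python
# Why the last three rows of the table need DSC: the uncompressed stream
# no longer fits in HBR3's ~25.92 Gbps payload. Active pixels only; the
# 3:1 ratio is a commonly cited nominal DSC target, not a fixed value.

HBR3_PAYLOAD_GBPS = 32.4 * 0.8        # 4 lanes of HBR3 after 8b/10b encoding
NOMINAL_DSC_RATIO = 3.0               # illustrative compression ratio

modes = [
    ("5120x2880@60Hz, 10bpc", 5120, 2880, 60, 10),
    ("7680x4320@60Hz,  8bpc", 7680, 4320, 60, 8),
    ("7680x4320@60Hz, 10bpc", 7680, 4320, 60, 10),
]

for name, w, h, hz, bpc in modes:
    uncompressed = w * h * hz * bpc * 3 / 1e9       # RGB/4:4:4, in Gbps
    with_dsc = uncompressed / NOMINAL_DSC_RATIO
    print(f"{name}: {uncompressed:5.1f} Gbps uncompressed, "
          f"~{with_dsc:4.1f} Gbps at 3:1 DSC "
          f"(HBR3 payload ~{HBR3_PAYLOAD_GBPS:.2f} Gbps)")
```

Run the numbers and 5K at 10bpc lands just above the HBR3 payload while 8K lands far above it, yet all three modes drop comfortably back under the limit once a nominal 3:1 compression ratio is applied.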
While on the subject of HDR, DP 1.4 also includes some HDR functionality of its own. The other major addition for the 1.4 standard is support for HDR static metadata, specifically the CTA 861.3 standard already used in other products and standards such as HDMI 2.0a. While the full details of what it takes to implement HDR are beyond the scope of this article, HDR static metadata is specifically focused on recorded media, such as Ultra HD Blu-Ray, which use static metadata to pass along the necessary HDR information to displays. This also improves DP/HDMI interoperability, as it allows DP-to-HDMI adapters to pass along that metadata.
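The static metadata itself is fairly compact. Below is a sketch of the kind of fields involved, modeled loosely on the mastering display and content light level values that CTA 861.3 static metadata conveys; the field names, types, and example values are illustrative, not the spec's exact encoding.

```python
from dataclasses import dataclass

# Illustrative sketch of HDR static metadata of the kind CTA 861.3 conveys:
# a description of the mastering display plus content light level figures.
# Field names and units here are illustrative, not the spec's wire format.

@dataclass
class HDRStaticMetadata:
    # Chromaticity of the mastering display's primaries and white point (CIE x, y)
    red_primary: tuple
    green_primary: tuple
    blue_primary: tuple
    white_point: tuple
    # Luminance of the mastering display, in cd/m^2 (nits)
    max_mastering_luminance: float
    min_mastering_luminance: float
    # Light levels of the actual content, in cd/m^2
    max_content_light_level: float      # MaxCLL
    max_frame_avg_light_level: float    # MaxFALL

# Example: values typical of an HDR10 title mastered on a 1000-nit display
metadata = HDRStaticMetadata(
    red_primary=(0.708, 0.292),    # BT.2020 primaries
    green_primary=(0.170, 0.797),
    blue_primary=(0.131, 0.046),
    white_point=(0.3127, 0.3290),  # D65
    max_mastering_luminance=1000.0,
    min_mastering_luminance=0.005,
    max_content_light_level=1000.0,
    max_frame_avg_light_level=400.0,
)
print(metadata)
```

Because this data describes the content rather than anything negotiated per-frame, it can be passed through unchanged by a DP-to-HDMI adapter, which is exactly the interoperability benefit noted above.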
The last new feature being introduced with DP 1.4 is an update to the audio formats supported by the DisplayPort standard. As with the video portion of the standard, this is focused on functionality, since the physical layer (and available bandwidth) hasn’t changed. The VESA specifically notes that this latest update adds support for items such as 32 audio channel configurations, and while they don’t say it by name, this sounds like the underpinnings for supporting decoded Dolby Atmos audio.
Wrapping things up, as with previous DisplayPort specification announcements, we’re expecting some significant lag time between today’s announcement of the DisplayPort 1.4 standard and when this functionality shows up in shipping products, as manufacturers still need to develop controllers implementing the standard. As it stands we still haven’t seen any DisplayPort 1.3 equipment hit the market (this despite the standard being introduced in 2014), so it’s likely that DisplayPort 1.4 hardware is some time off. Meanwhile, as DSC is always a hot topic in our comment section, it’s worth noting that so far we haven’t heard anything about plans for monitors to actually implement it. Most likely we won’t see anything until monitors with resolutions over 5K hit the market, as the primary focus of DSC for external monitors is ultra-high resolution displays coupled with HDR. It’s here that the uncompressed bandwidth requirements move well in excess of what DisplayPort can provide.
Comments
iwod - Wednesday, March 2, 2016 - link
Could someone enlighten me: why would you EVER want lossy compressed images in your DisplayPort? And it seems SuperMHL is better in every way.
jann5s - Wednesday, March 2, 2016 - link
The compression is “visually lossless”, at least that is what they say. Typically the compression efficiency (quality reduction / data reduction) is much higher for lossy compression algorithms. It gets nasty when “visually lossless” means you have to be practically blind not to see the artifacts.
nathanddrews - Wednesday, March 2, 2016 - link
You can read about the testing methods and the definition of "lossless" here: http://www.vesa.org/wp-content/uploads/2015/01/Cal...
Sivar - Sunday, March 6, 2016 - link
To support higher resolutions or bit depths than raw uncompressed streams can support, lossy compression *must* be used. Lossless compression cannot guarantee any reduction in bandwidth. Even if lossless compression would work 99.999% of the time, the stream cannot suddenly change to a lower resolution or bit depth in the pathological cases such as random noise.
I am extremely skeptical of the "visually lossless" claim. I suspect the standards committee picked a generic, objective benchmark like SSIM and concluded the results are "good enough for most people".
This is the same kind of test which concludes that h.265 is "visually the same" as h.264 at half the bit rate, when other than at extremely low bit-rates that is very obviously not the case.
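For what it's worth, the pathological-case argument is easy to demonstrate with any general-purpose lossless codec. A quick sketch with zlib – standing in for lossless compression generally, not DSC – shows why a lossless scheme cannot guarantee a minimum compression ratio:

```python
import os
import zlib

# Lossless compression can't guarantee a reduction: structured frames
# shrink nicely, but incompressible input (random noise) stays the same
# size or grows slightly. zlib stands in for any lossless codec here.

structured = bytes(range(256)) * 4096      # highly regular "frame", ~1 MiB
noise = os.urandom(len(structured))        # worst case: random data

for label, data in [("structured", structured), ("random noise", noise)]:
    out = zlib.compress(data, level=9)
    print(f"{label:13s}: {len(data)} -> {len(out)} bytes "
          f"({len(out) / len(data):.2%} of original)")
```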
BurntMyBacon - Monday, March 7, 2016 - link
@Sivar: "To support higher resolutions or bit depths than raw uncompressed streams can support, lossy compression *must* be used."Raw uncompressed streams do not have an inherent limitation on resolution or bit depth that dictate that lossy compression *must* be used. Raw uncompressed streams transferred across a fixed bandwidth medium have a maximum combination of resolution, bit depth, and frame rate. This could drive a requirement to use lossy compression. However, depending on the use case, lowering resolution, bit depth, frame rate, or a combination thereof may provide a better experience than using lossy compression. Of course, the last option is to just wait. By the time this compression really picks up, there is likely to be a higher bandwidth medium available that is suitable for the high resolution, HDR displays that this compression is designed to allow. Then again, with this compression already available, it could stay perpetually one step ahead (in a sense).
@Sivar: "Lossless compression cannot guarantee any reduction in bandwidth.
Even if lossless compression would work 99.999% of the time, the stream cannot suddenly change to a lower resolution or bit depth in the pathological cases such as random noise."
While lossy compression algorithms can guarantee a reduction in bandwidth, they suffer from the same pathological cases. Processing times for individual frames can vary more wildly than with lossless. This isn't much of an issue with movies, but gaming or any use with a feedback loop would rather not have another point of inconsistency in the rendering loop. When processing exceeds the allotted time, this can manifest itself as one of many different artifacts: a sudden stutter followed by a rather jarring (if temporary) change in resolution, blanking, or any anomaly you might see while trying to play a high def video on Netflix with an inconsistent internet connection (though presumably to a much lesser extent). Lossless anomalies may manifest as dropped frames (stutter), blanking, or possibly block artifacts (depending on algorithm). I'd have to see the lossy compression to make a determination, but I'm fairly confident that I'd rather frame skip on the 0.001% of frames (likely white noise) that lossless can't keep up with than have a general degradation in picture quality in the other 99.999% of frames (your percentages).
@Sivar: "I am extremely skeptical of the "visually lossless" claim. I suspect the standards committee picked a generic, objective benchmark like SSIM and concluded the results are "good enough for most people"."
As am I. Depending on how the compression is implemented, it will very likely negate some of the benefits it is supposed to enable. Many compression algorithms use chroma compression on the blue and to a lesser extent red channels to remove detail where it is not as noticeable. More noticeable is using luminance compression. This hurts the bit depth and HDR that the compression is supposed to enable (especially if they mess with the green channel). Using a nearest neighbor averaging compression algorithm is akin to using a lower resolution and interpolating. Using motion compensated frame rate reduction is another obviously counterproductive method.
ddriver - Wednesday, March 2, 2016 - link
I know, such a shock, because it is not like 99.99% of the images, videos and even textures in games aren't using lossy compression ;)
I assume this compression's lossiness will not be distinguishable to the human eye, so it isn't really an issue.
HollyDOL - Wednesday, March 2, 2016 - link
Hard to tell until we can actually see and judge the results. If it is as "visually lossless" as mp3 music was claimed to be "audibly lossless" then it is thanks but no thanks. (Not sure whether audibly lossless is a correct term)
ddriver - Wednesday, March 2, 2016 - link
There have been plenty of tests where "pretentious audiophiles" were completely unable to tell wav from mp3 encoded at higher bitrate.
HollyDOL - Thursday, March 3, 2016 - link
Depends what you play it on. The better the amp/speakers/cords/audio card, the greater the difference. But if the video compression is like what you claim about MP3, I'll avoid it by a huge margin. With my setup (entry audiophile on a tight budget) you can hear the difference within the first few seconds (320kbps vs. audio CD, not even speaking about SACD or DVD-Audio). Even my father with hearing issues could tell the difference, so I am pretty sure everyone can. You just need to play it on a decent audio system.
LtGoonRush - Thursday, March 3, 2016 - link
Human beings listening to music cannot detect a difference between a properly encoded* 256kbps MP3 and the source CD, regardless of the equipment used. This has been proven with double-blind testing. "Transparent" (audibly lossless) compression of audio has been a solved problem for quite some time.
*LAME encoder, using a preset profile. Other formats can do it at lower bitrate, but encoders are not created equal. AAC is better than MP3, but most AAC encoders are not as good as the LAME MP3 encoder.