FB / Linkedin – Performance….



How RUM is set up at LinkedIn:

How LinkedIn used PoPs and RUM to make dynamic content download 25% faster


Breaking News: TIA Recognizes Direct-Connect Termination Method


This week, the industry received some big news: The TIA TR-42.7 subcommittee agreed to include modular plug terminated links (also known as “direct connect”) in a TIA-568.2-D normative annex. The annex provides guidance to IT professionals to ensure a proper direct-connect cabling arrangement. Several Belden staff are closely involved with the Telecommunications Industry Association (TIA), holding many leadership positions within the organization. We’re always looking out for the ICT industry, searching for ways to improve existing technology and streamline installation – which is why we presented the problem to TIA and led the effort to have the direct-connect termination method fully supported.

What does this mean? Now, RJ45 modular plugs can be terminated directly onto horizontal cabling and measured in the field. This allows a variety of devices, such as [wireless access points](http://www.belden.com/blog/datacenters/adding-more-wireless-access-points-what-it-means-for-networks.cfm), surveillance cameras and HDBaseT monitors, to be plugged in without the need for an outlet and a patch cord.

## Benefits of Direct Connect

ANSI/TIA-568-C.2 currently requires horizontal cable to be terminated on a telecommunications outlet to provide flexible user access. But TIA also realizes that, in certain cases, there is a need to terminate [horizontal cabling](https://www.linkedin.com/pulse/key-components-form-structured-cabling-system-orenda-ma) to a plug that is directly plugged into a device.

Direct-connect assembly uses a single cable to connect a device at one end; the other cable end is terminated with a jack in a patch panel in the telecommunications room.

This allows for efficient power delivery with the lowest channel insertion loss and gives installers the flexibility to eliminate the need for a jack and cord to connect devices.

## What the Modular Plug Terminated Link Test Involves

To be recognized by TIA, [direct connect](http://www.belden.com/blog/datacenters/A-Way-to-Simplify-Your-Infrastructure-Direct-Connect-Assembly.cfm) needs to meet the requirements of a 90 m permanent link during testing.

During testing, the modular plug terminated link (MPTL) will have a jack on one end and a plug on the other end, with an optional consolidation point (plug-to-plug isn’t supported). Proper testing requires a permanent link adapter and a patch cord test head.

The figure below represents the topology of a modular plug terminated link test configuration.

![Modular Plug Terminated Link](http://info.belden.com/webadmin/blog/images/modular-plug-terminated-link.PNG "Modular Plug Terminated Link")

The modular plug terminated link needs to comply with the permanent link transmission requirements of ANSI/TIA-568-C.2 clause 6.3 to be recognized.

## Direct-Connect Solutions from Belden

This news about modular plug terminated links mixes well with the recent introduction of the Belden REVConnect connectivity system, which eliminates the jack, box and patch-cord assembly normally needed to plug into devices.

A single termination process works for every application – REVConnect is a complete connectivity solution for Category 5e, 6 and 6A shielded and unshielded cable. You can switch from a jack to a plug or vice versa without having to re-terminate. Learn more



<div style="text-align: center;">[![IoT WP CTA](http://info.belden.com/webadmin/blog/images/IoT WP CTA_86060.png "IoT WP CTA")](http://info.belden.com/ecos/iot-convergence-wp)</div>

Source: Spine and Leaf (1st) test

Optical amplification for 100G and beyond over distances > 10 km

In an optical communication network, signals travel through fibers over large distances with relatively little attenuation. However, over distances of hundreds of kilometers, amplifying the signal in transit becomes essential. In this case, an optical fiber amplifier is required to achieve signal amplification in long-distance optical communication. This article gives a brief introduction to the most widely deployed fiber [amplifier – the erbium-doped fiber amplifier (EDFA)](https://www.4fiber.com/wdm-optical-network/edfa.html?lipi=urn%3Ali%3Apage%3Ad_flagship3_pulse_read%3BzkwpjuCWQbyowJSDogVyyg%3D%3D).

What is EDFA?

An EDFA is an optical or IR repeater that amplifies a modulated laser beam directly, without opto-electronic and electro-optical conversion. Generally speaking, it is an optical repeater device that is used to boost the intensity of optical signals being carried through a fiber optic communications system.

Working Principle of EDFA

EDFA serves as a kind of optical amplifier which is doped with the rare earth element erbium so that the glass fiber can absorb light at one frequency and emit light at another frequency. An external semiconductor laser couples light into the fiber at infrared wavelengths of either 980 or 1480 nanometers. This action excites the erbium atoms. Additional optical signals at wavelengths between 1530 and 1620 nanometers enter the fiber and stimulate the excited erbium atoms to emit photons at the same wavelength as the incoming signal. This action amplifies a weak optical signal to a higher power, effecting a boost in the signal strength. The following picture shows 13dBm output C-band 40 channels booster EDFA for DWDM Networks.
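To put the quoted 13 dBm output figure in linear terms, a quick dBm/milliwatt conversion helps. This is an illustrative helper, not from the article; the 0.02 mW input power in the example is an assumption:

```python
import math

# Interpreting amplifier power specs: dBm <-> mW and gain in dB.
def dbm_to_mw(dbm: float) -> float:
    """P(mW) = 10^(dBm/10); e.g. the 13 dBm booster output above."""
    return 10 ** (dbm / 10)

def gain_db(p_out_mw: float, p_in_mw: float) -> float:
    """Amplifier gain in dB from linear input/output powers."""
    return 10 * math.log10(p_out_mw / p_in_mw)

print(round(dbm_to_mw(13), 1))   # 13 dBm is about 20.0 mW
print(round(gain_db(20, 0.02)))  # 0.02 mW in, 20 mW out: 30 dB gain
```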


The Advantages of EDFA

The EDFA offers high gain, wide bandwidth, high output power, high pumping efficiency and low insertion loss, and it is not sensitive to the polarization state of the signal.

– It provides in-line amplification of the signal without requiring electronics; the signal does not need to be converted to an electrical signal before amplification. The amplification is entirely optical.
– It provides high power-transfer efficiency from pump to signal.
– The amplification is independent of data rate.
– The gain is relatively flat, so amplifiers can be cascaded for long-distance use.

On the debit side, the devices are large, there is gain saturation, and there is also amplified spontaneous emission (ASE).
The Applications of EDFA

The EDFA was the first successful optical amplifier and a significant factor in the rapid deployment of fiber optic networks during the 1990s. By adopting it in conventional optical digital communication system applications, we can save a certain amount of optical repeaters. Meanwhile, the distance relay could also be increased significantly, which is vital for the long-haul fiber optic cable trunking systems. The EDFA is usually employed in these circumstances:

EDFA can be employed in the high-capacity and high-speed optical communication system. It offers a constructive and ideal solution for handling low sensitivity of receivers and short transmission distances because of a lack of OEO repeater.

In addition, EDFA can be adopted in long-haul optical communication system, such as land trunk optical transmission system and the submarine optical fiber cable transmission system. It helps to lower construction cost dramatically by reducing the quantity of regenerative repeaters.

Moreover, EDFA can also be employed in wavelength-division multiplexing (WDM) system, especially dense wavelength-division multiplexing (DWDM) system. It enables the problems of insertion loss to be solved successfully and reduces the influences of chromatic dispersion.


As by far the most advanced and popular optical amplifier, the EDFA has been widely adopted in optical fiber communication networks. Featuring flat gain over a large dynamic gain range, low noise, high saturation output power and stable operation with excellent transient suppression, it will surely keep a vital and indispensable position in optical communication for the foreseeable future.

Sample [EDFA products](https://www.4fiber.com)

Other documentation


Optimizing and Scaling on a Leaf-Spine Architecture

Posted by (and sources): Mike Peterson on May 05, 2016


The Internet of Things (IoT) and the proliferation of virtualization have caused traffic between devices in the data center to grow. This server-to-server traffic within the data center is referred to as “east-west traffic.”

When you run lots of east-west traffic through a topology designed for north-south traffic (traffic that enters and exits the data center), devices connected to the same switch port may contend for bandwidth – and end-users experience poor response time.

If hosts on one access switch need to quickly communicate with hosts on another access switch, uplinks between the access layer and aggregation can be a point of congestion. A common three-tier network design may worsen the issue, constraining the location of devices like virtual servers.

Moving to a Leaf-Spine Architecture

That’s where leaf-spine architecture comes in, scaling horizontally through the addition of spine switches. This two-layer topology puts every device exactly the same number of hops away from every other device.

With each leaf switch connecting to each spine switch, the number of spine switches is limited by the number of uplink ports on the leaf. The most common leaf switches come with only four 40G QSFP+ uplink ports, limiting your network to only four spine switches. This starts to limit network scalability.

One way to achieve more scale is to break the 40G SR4 channel into four 10G duplex channels, turning the four 40G uplink ports into 16 available uplinks. This increases the number of spine switches that can be a part of the mesh network to 16, providing four times the scalability.

Scaling Networks: 10G vs. 40G

Let’s use an example to compare scaling in leaf-spine architecture between 10G and 40G networks.

With 40G uplinks, the number of spine switches is fixed at four, based on the leaf having four uplinks. Typically, each spine has a total of four line cards. These line cards come with 36 40G ports per line card. The total number of available ports to connect to leaf switches is 144; each leaf has 48 ports to connect to network devices, allowing for a maximum of 6,912 computers to connect to the 40G mesh network.

When you scale out on a 10G network, scaling is increased by a factor of four. Each 40G uplink is broken into four 10G channels, allowing for 16 spine switches. With four line cards, and 36 40G ports per line card split into 10G legs, there are a maximum of 576 leaf switches (144 ports x 4). With each leaf having 48 ports, you can connect 27,648 computers – four times the scaling throughout the mesh network.
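The scaling arithmetic above can be sketched as a small calculation. The port counts (4 uplinks per leaf, 4 line cards per spine, 36 40G ports per card, 48 host ports per leaf) come from the example; the function itself is just an illustration:

```python
# Leaf-spine scale with and without 40G -> 4x10G breakout.
def leaf_spine_scale(uplinks=4, line_cards=4, ports_per_card=36,
                     host_ports=48, breakout=1):
    """Return (spine count, max leaf switches, max hosts)."""
    spines = uplinks * breakout                       # one uplink per spine
    max_leaves = line_cards * ports_per_card * breakout  # spine-facing ports
    return spines, max_leaves, max_leaves * host_ports

print(leaf_spine_scale())            # 40G uplinks: (4, 144, 6912)
print(leaf_spine_scale(breakout=4))  # 4x10G breakout: (16, 576, 27648)
```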

10G Channels: Potential Obstacles

Moving to four 10G channels in leaf-spine architecture introduces a new concern: latency (the amount of time it takes for a packet of information to travel from point A to point B) increases because the pipes are split into smaller lanes. The narrower the lane, the longer each packet takes to serialize onto the wire. Aggregate throughput remains the same, but latency increases.
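A large part of that latency increase is serialization delay: the same frame takes four times longer to clock onto a 10G lane than onto a 40G link. A rough sketch, with an illustrative 1,500-byte frame (not a figure from the article):

```python
# Serialization delay: time to clock one frame onto the wire.
def serialization_us(frame_bytes: int, rate_gbps: float) -> float:
    """Microseconds to transmit frame_bytes at rate_gbps."""
    return frame_bytes * 8 / (rate_gbps * 1e3)

print(serialization_us(1500, 40))  # 0.3 us on a 40G link
print(serialization_us(1500, 10))  # 1.2 us on a 10G lane (4x longer)
```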

One of the biggest challenges to implementing a mesh network is cabling. Mesh networks require LC patch cords to create a cross-connect, ensuring that all leaf switches and spines are properly connected. A cross-connect is created in the main distribution area (MDA), creating several cabling issues: insertion loss, maintaining polarity, increase in cable counts, etc. Rack challenges include density, required U space and power availability.

To create the 10G channel, a complex cross-connect must be created. Each eight-fiber MPO port on the switch is broken up into an LC duplex connection; 144 MPOs become 576 LC duplex connections per switch, for a total of 18,432 LC duplex ports (both sides of the cross-connect). To connect the 10G channels to each leaf and spine, a total of 9,216 LC duplex patch cords are needed. As a result, additional channels for MACs (moves, adds and changes), cable routing and space constraints are possible.
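The cross-connect counts quoted above can be checked in a few lines (the 16 spines, 144 MPO ports per spine and 4-way LC breakout are the figures from the example):

```python
# Verify the MPO-to-LC cross-connect counts for the 10G mesh.
spines = 16
mpo_per_spine = 144
lc_per_mpo = 4                                  # 8-fiber MPO -> 4 LC duplex

lc_per_spine = mpo_per_spine * lc_per_mpo       # LC duplex links per spine
patch_cords = lc_per_spine * spines             # one cord per LC duplex link
lc_ports_total = patch_cords * 2                # both sides of the cross-connect

print(lc_per_spine, lc_ports_total, patch_cords)  # 576 18432 9216
```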

This essentially breaks an MPO into four lanes and makes an LC connection. Each lane is combined with lanes from other spines and converted back into an eight-fiber MPO (Base-8) with four channels from four different spine switches. Cable management, space utilization, documentation and labeling become extremely difficult to troubleshoot and maintain.

Shuffle Cassettes Save Space and Reduce Complexity

There’s a new leaf-spine architecture solution available that drastically reduces the amount of space needed, as well as the number of cables in the MDA: Belden shuffle cassettes.

These cassettes eliminate the need to create a cross-connect to separate 40G channels into 10G and recombine them to connect to each leaf, handling lane reassignments internally. Each shuffle cassette has four MPOs in and out; each leaf requires four shuffle cassettes.


| Traditional MPO-LC-MPO | Belden Shuffle Cassette | *Savings* |
| --- | --- | --- |
| 704 modules | 416 modules | *288 modules* |
| 176U space | 104U space | *72U (roughly 1.6 racks)* |
| 9,216 patch cords | 2,304 patch cords | *6,912 patch cords* |

By utilizing the same connector, reducing connections and standardizing on components across the channel, Belden’s shuffle cassettes allow for scaling in leaf-spine architecture, reduce the opportunity for human error, speed up deployment time and reduce time spent on MACs. By using a shuffle cassette that fits into any Belden housing, you reclaim valuable floor space.

Automation for network

Ansible on network world


Ansible (Python)

Replay of Ansible presentation made by Francois 

Ansible tutorial
Network automation with ansible

First configuration of network devices

Zero Touch Provisioning

ZTP overview

Network definition language

Looks a lot like a lightweight CMDB – the same approach to config and design generation: the language defines objects that are compiled to fill a DB, from which they generate complete templates.
Robotron: Top-down Network Management at Facebook Scale

Talk 2: Wedge100 + Backpack: From the Leaf to the Spine Zhiping Yao + Xu Wang, Facebook

Other tools (not Ansible)





[napalm automation]

[Napalm @spotify on Github](https://github.com/spotify/napalm)

More for DevOps: Chef (Ruby)


[Puppet](https://puppet.com/) (not used @Criteo)

Python for Network


Criteo Tools for network diff between 2 configuration files (Cisco/Arista) :  
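As a rough illustration of what a config-diff tool does, here is a stdlib `difflib` sketch. This is illustrative only, not the actual Criteo tool; the config lines are made up:

```python
import difflib

# Diff two device configs (e.g. running vs. candidate) line by line.
old = ["interface Ethernet1", " description uplink", " mtu 1500"]
new = ["interface Ethernet1", " description uplink", " mtu 9214"]

for line in difflib.unified_diff(old, new, "running", "candidate", lineterm=""):
    print(line)  # lines starting with -/+ show what changed
```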

Why use Docker / Ansible in front of Puppet / Chef :  

# Monitoring / Graphs

* Time series DB

* Front-end
[Elastic kibana](https://www.elastic.co/fr/products/kibana)

# Virtualenv (VM/libvirt/container/…)

If you want to test some apps/stuff, you can use one of these "tools":

– Docker : [https://www.docker.com/](https://www.docker.com/)
– Virtualenv : (more for dev) [http://virtualenv.readthedocs.org/en/latest/](http://virtualenv.readthedocs.org/en/latest/)
– Vagrant : (more for dev) [https://www.vagrantup.com/](https://www.vagrantup.com/)

# Other


## Videos and presentations

Storm usage at Criteo: [http://www.infoq.com/fr/presentations/storm-criteo](http://www.infoq.com/fr/presentations/storm-criteo)

Youtube Network Automation and Programmability Abstraction Layer [https://www.youtube.com/watch?v=93q-dHC0u0I](https://www.youtube.com/watch?v=93q-dHC0u0I)

<iframe allowfullscreen="allowfullscreen" height="314" src="//www.youtube.com/embed/93q-dHC0u0I" width="560"></iframe>

At 34:47 you will find Steve Feldman.

Is he the only Feldman I know?

<iframe allowfullscreen="allowfullscreen" height="314" src="//www.youtube.com/embed/h8VWASQB8wk" width="560"></iframe>

Blog : [https://pynet.twb-tech.com/blog/automation/cisco-ios.html](https://pynet.twb-tech.com/blog/automation/cisco-ios.html)

# Tools for DEV


# Tools


# To Sort:


Network BGP on TOR

Layer 3 design with spine and leaf


* prefix list automation:

# Network design

### Google new network design
Read this paper: [Conferences sigcomm](http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p183.pdf)

### Facebook DC design

Information about 1st design @FB:

Facebook DC design Next Gen:
Introducing data-center fabric the next generation facebook DC network

A video presentation about L3 spine and leaf @FB (useful demo @2’22 »)


Pictures of FB DC:
[Photo tour new facebook data-center in iowa (2014)](http://www.datacenterknowledge.com/archives/2014/11/20/photo-tour-new-facebook-data-center-in-iowa/)

FB servers
### LinkedIn DC design plus L3

DC design spine and leaf and water-cooling at LinkedIn  

And from linkedin blog:

### Other company L3 design

An old article from Metadata blog:

### Tools for network BGP and design



IETF draft for L3 in the DC:  

Presentation from Nanog about  


From Arista:  


How to load balance applications in a L3 DC (Replay of a meetup)  



Bending Loss: A Risk Associated with Reusing Installed Fiber Cable

[![Fiber Bending Loss](http://info.belden.com/webadmin/blog/images/Fiber-Bending-Loss.jpg "Fiber Bending Loss")](http://info.belden.com/webadmin/blog/images/Fiber-Bending-Loss.jpg)

Thanks to its ultra-high data transmission capacity, ultra-low loss and installation flexibility, glass optical fiber is the most power-efficient data transmission media available today. Optical fiber cables have been deployed worldwide to connect people and “[things](http://info.belden.com/ecos/iot-convergence-wp)” together.

According to [CRU’s Optical Fibre and Cable Monitor](http://www.crugroup.com/market-analysis/products/Optical-fibre-and-cable-monitor), global optical cable demand reached 318 million kilometers in the first three quarters of 2016.

As we mentioned in a [previous blog](https://www.belden.com/blog/datacenters/singlemode-vs-multimode-transceivers-how-do-you-choose.cfm), two types of optical fiber are available for different network environments and link distances:

– Multimode fiber (MMF) for short-reach links up to a few hundred meters, mainly used in data center environments
– Singlemode fiber (SMF) for long-reach links, such as in LANs, access networks, metro/transport networks and hyperscale data centers

![Multimode Fiber](http://info.belden.com/webadmin/blog/images/mmf.png "Multimode Fiber")

Fiber cables are typically installed and owned by internet service providers or internet content providers (including cloud service providers), or enterprise IT departments. People commonly believe that fiber cable has unbounded bandwidth capacity and can last forever; however, with the recent [data traffic boom](https://www.belden.com/blog/datacenters/booming-traffic-from-the-content-delivery-network.cfm) – cloud services, over-the-top content delivery and [Internet of Things (IoT)](http://www.forbes.com/sites/michelleevans1/2017/01/24/5-ways-the-internet-of-things-will-influence-commerce/#432cd6ab3c30) – some old fiber infrastructure has hit its capacity limit and needs to be upgraded.

This blog is the first of three in a series where we will walk you through the risks of reusing installed fiber cable, and help you understand how fiber cable infrastructure performance and quality could impact your business operations.

## Macrobending and Bend-Insensitive Fiber

Optical fiber cables are recognized as the superior data transmission media over long distances. The optical fiber is a waveguide that confines light within the fiber core, which is bounded by the cladding material that prevents light from escaping.

![Macrobending](https://info.belden.com/webadmin/blog/images/macrobending.png "Macrobending")
*Source: IBM*

Compared to copper cable, optical fiber cable has a much smaller cross-section diameter to support flexible cable routing and installation, especially for high-density I/O. Nevertheless, strict fiber cable installation rules have to be followed because light can leak out of the fiber core through the cladding when bent or wrapped. Bending loss occurs when a fiber cable bend is tighter than its maximum bend tolerance; bending loss is due to physical bends that are large in relation to the diameter of the cable. As the bend tightens, more light is lost. This phenomenon is referred to as “fiber macrobending.”

![Macrobending 2](https://info.belden.com/webadmin/blog/images/macrobending-2.jpg "Macrobending 2")

TIA 568.3-D specifies the minimum bend radius for fiber cable installation to avoid excessive “light leakage” or bending loss:

*Cables with four or fewer fibers intended for Cabling Subsystem 1 shall support a minimum bend radius of 25 mm (1 in) when not subject to tensile load. Cables with four or fewer fibers intended to be pulled through pathways during installation shall support a minimum bend radius of 50 mm (2 in) under a pull load of 220 N (50 lbf). All other inside plant cables shall support a minimum bend radius of 10 times the cable outside diameter or less when not subject to tensile load, and 20 times the cable outside diameter or less when subject to tensile loading up to the cable’s rated limit.*
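The quoted rules can be captured as a small lookup helper. The 25 mm/50 mm and 10x/20x outside-diameter thresholds come from the annex text above; the function shape and the example cables are assumptions for illustration:

```python
# Minimum install bend radius per the TIA 568.3-D rules quoted above.
def min_bend_radius_mm(fiber_count: int, cable_od_mm: float,
                       under_tensile_load: bool) -> float:
    """Return the minimum bend radius in mm for an inside-plant cable."""
    if fiber_count <= 4:
        # Fixed radii for cables with four or fewer fibers.
        return 50.0 if under_tensile_load else 25.0
    # Other inside-plant cables: multiple of the cable outside diameter.
    factor = 20 if under_tensile_load else 10
    return factor * cable_od_mm

print(min_bend_radius_mm(2, 3.0, False))  # 25.0 mm (<=4 fibers, no load)
print(min_bend_radius_mm(12, 6.0, True))  # 120.0 mm (20x OD under load)
```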

## Macrobending Hurdles in New Use Cases

In many new practical use cases, fiber cables are required to be installed with even smaller bend radii, which could lead to bending loss:

1. In access networks, optical fiber is installed closer to subscribers; therefore, smaller bend radius is required to support high-density, flexible fiber installation and routing.
2. In data center networks, more and denser fiber cables are installed to support ever-growing bandwidth requirements in limited space; therefore, hassle-free fiber cable installation with higher bend tolerance is increasingly important to reduce bending loss and speed up data center deployment and upgrades.

![Bent and Pinched Fiber](https://info.belden.com/webadmin/blog/images/Bent-amd-Pinched-Fiber.jpg "Bent and Pinched Fiber")

![Fiber Slack Loop](https://info.belden.com/webadmin/blog/images/Fiber-Slack-Loop.png "Fiber Slack Loop")

*Source: Anixter*

Legacy fiber cable, although optimized for low-attenuation data transmission, is subject to excessive transmission loss; it was not optimized to support sharp bends and can suffer from bending loss. Accidental fiber loss can happen on a daily basis if care is not taken:

– *Sharp bend*: severe 90-degree bend can induce high link loss of up to 0.4 dB to 0.5 dB
– *Pinched cable*: pinching standard fiber jumper can lead to an attenuation of 3 dB to 4 dB
– *Fiber slack loop*: a tight pulling tension on the fiber jumper can cause an attenuation of >5 dB
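To see what those dB figures mean in lost optical power, note that the fraction of power lost is 1 − 10^(−dB/10). This helper is added for illustration:

```python
# Convert a dB attenuation figure into the percentage of light lost.
def power_lost_pct(loss_db: float) -> float:
    """Fraction of optical power lost, as a percentage (1 decimal)."""
    return round((1 - 10 ** (-loss_db / 10)) * 100, 1)

print(power_lost_pct(0.5))  # sharp bend: ~10.9% of the light lost
print(power_lost_pct(3))    # pinched cable: ~49.9% lost
print(power_lost_pct(5))    # tight slack loop: ~68.4% lost
```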

## Fibers with Enhanced Macrobend Loss Performance: Bend-Insensitive Fibers

Recently, bend-insensitive SMF and MMF (BI-SMF and BI-MMF) products have been introduced to the market to meet the needs of tighter fiber-bend tolerance to avoid bending loss. Optical fiber manufacturers used a refractive index “trench” in the fiber structure – a ring of lower refractive index material – to reflect lost light back into the core of the fiber.

Industry standards have also been developed to specify the bend-radius tolerance of BI-SMF and BI-MMF.

– BI-MMF: ISO/IEC 60793-2-10 provides specifications for A1a.1b, A1a.2b, A1a.3b and A1a.3W that support two turns of 15mm bending radius with <0.1 dB loss at 850nm, and two turns of 7.5mm bending radius with <0.2 dB at 850 nm. *(BI-MMF cables are only optimized for 850nm but not for 1300nm. While the bend loss at 850nm is as described above, the results at 1300nm are not much different than with standard 50µm MMF.)*
– BI-SMF: ISO/IEC 60793-2-50 provides specifications for B6 singlemode fibers that can support minimum bending radius of 10mm, 7.5mm and 5mm. The same recommendation has also been made in the ITU-T G.657 standard document. *(G.657.A1 and G.657.A2 are fully compliant with traditional SMF standard G.652.D with lower fiber transmission loss; G.657.B2 and G.657.B3 are compatible with G.652.D with smaller minimum bending radii, but the transmission loss is slightly higher.)*

Using bend-insensitive fiber cable will minimize the risks of fiber bending loss, and reduce accidental system downtime by considerably improving link robustness and overall performance.

Given the installation and maintenance advantages, considering BI-MMF and BI-SMF for system upgrades or new fiber cable deployment is highly recommended.

Belden offers BI-MMF and BI-SMF [fiber products](https://www.belden.com/products/enterprise/fiber/) that are faster, easier and better to use. Our fiber connectivity solutions reduce complexity, increase flexibility and streamline installation.

<div style="text-align: center;">[![Advances in Multi-Fiber Connectivity WP CTA](https://info.belden.com/webadmin/blog/images/Advances in Multi-Fiber Connectivity WP CTA_86060.png "Advances in Multi-Fiber Connectivity WP CTA")](http://info.belden.com/ecos/multi-fiber-connectivity)</div>


HDBaseT: Is it Convergence?



There has been a lot of talk about convergence in the cabling world; some of it has been driven by new technology and overlapping markets. Today’s integrator has the ability to install a system that covers phones, computers, security, audio/video and even low-voltage power.

There are two types of convergence that we often discuss: technology and infrastructure convergence.

[Technology convergence](http://themarketmogul.com/the-technological-convergence-is-here/) uses a single network system, such as Ethernet, to support multiple devices. All of these devices share the same cable and active gear. For example, you can now plug your desk phone and computer into the same telecom room switch. Ethernet networks can support just about every aspect of communication: voice, data, security, building control and even audio/video applications. This is not the type of convergence we are talking about.

Infrastructure convergence uses the same *cable* to support multiple systems. All sorts of devices connect to their own system using a universal cabling system. The biggest type of communication cabling being used today is category cable. While the entire system shares the same cable, the devices don’t talk the same language; therefore, they can’t communicate with each other. This system offers customers a universal, low-cost cabling system. But is it really the best solution for each application?

This blog examines one version of this type of convergence: the use of category cabling for HDBaseT signals.


## Standards

How did category cable become the dominant communications cable? The main reason is the success of Ethernet, which is the de facto standard for today’s networks – but this was not always the case. If you go back a few years, network cabling included [IBM Token Ring](https://www.lifewire.com/what-is-token-ring-817952) (150 ohm), ARCNET (twin-axial) and even Ethernet, which could be run over Thicknet 10BASE5 and Thinnet 10BASE2 coaxial cable.

IEEE 802.3 (the Ethernet standard) over twisted pair won out, and category cabling was born. As IEEE was writing the Ethernet standards, TIA was creating the 568 standards to specify cable characteristics for category cabling. The two standards worked hand in hand; as Ethernet technology increased from 10 Mbps to 10 Gbps, category cabling standards kept pace, going from Category 3 to Category 6A.

Internationally, the ISO 11801 standard followed TIA’s lead. Ideally, every manufacturer produces a category cable that meets ANSI/TIA specifications (in the United States) or ISO specifications (internationally), giving the user a reasonable expectation that the cable will support his or her network.

Today’s network can support just about every aspect of communication: voice, data, security, building control and audio/video applications. With all devices following the same standard, we achieved interoperability. So, why not just use it for everything? It turns out that it has latency and bandwidth shortfalls, which make it less than ideal for video. The issues are being corrected, but that is the subject of another blog.

The professional AV industry is in the process of trying to develop a standard for everyone to follow. For AV systems, you often need more than one type of signal: an application might require an audio signal, a video signal and a variety of control signals. An increasingly popular new standard, HDBaseT®, does just that.


## HDBaseT Technology

This technology uses 5Play®: HDMI 1.4, 4K video with audio, USB 2.0, 100BASE-T Fast Ethernet and various control signals, along with low-voltage power (up to 100 W).

A group of manufacturers formed the [HDBaseT Alliance](http://www.hdbaset.org) in 2010, with one of the goals being the development of a universal standard and interoperability between manufacturers. The HDBaseT 2.0 specification has been submitted to IEEE to become a universal standard, but is currently only in draft form. Although it might seem like it is, the HDBaseT 2.0 specification is *not* part of the IEEE Ethernet 802.3 standard. Also adding to the confusion is the universal appeal of category cabling, which the HDBaseT Alliance selected.

From the start, there have been minor issues with the cabling, and people have been improvising solutions. Making matters worse is the adoption and popularity of ultra-high-definition video, commonly referred to as 4K. The increased bandwidth of a 4K image, with almost 9 Gbps of information, causes even more strain on the infrastructure. Furthermore, this strain will only increase as the market moves to 8K, with even more color and faster frame rates. It’s possible for the bandwidth to push well beyond 50 Gbps. (Get more information on this topic [here](http://www.belden.com/blog/broadcastav/4k-images-and-pictures-what-do-they-really-mean.cfm).)
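For a back-of-envelope check on why 4K strains the infrastructure, the uncompressed bitrate of the active pixels alone can be computed (8-bit RGB assumed; blanking intervals push the real HDMI line rate higher still):

```python
# Uncompressed video bitrate from resolution, color depth and frame rate.
def video_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Active-pixel bitrate in Gbps (no blanking, no compression)."""
    return width * height * bits_per_pixel * fps / 1e9

print(round(video_gbps(3840, 2160, 24, 30), 1))  # ~6.0 Gbps for 4K at 30 fps
print(round(video_gbps(3840, 2160, 24, 60), 1))  # ~11.9 Gbps for 4K at 60 fps
```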

Not only is the video signal a bandwidth hog; it is also very latency-sensitive. Due to this time sensitivity, a video signal is different from a pure data signal. If a video signal is lost or damaged, those pieces of the image are lost and are never retransmitted. Instead, they appear as errors on the screen; if you have too many errors, the picture is lost altogether – but more on this in a different blog.

To adjust for these demands, most manufacturers have tried tweaking a variety of category cable types. Most have gone to a shielded cable or screened cable. Additionally, some have increased the category rating, even up to Category 7A, in hopes of improved results. This has stopped becoming a converged infrastructure, and turned into a search for a cable that can support this signal. Belden set out to uncover the true cabling requirements – and then to design a cable to meet it.

This blog is just one in a series that will cover in more detail the testing we completed and what we found, including 4K HDBaseT cabling misconceptions and myths. [Subscribe to our blog so you don’t miss out!](http://info.belden.com/subscribe)

*HDBaseT® and 5Play are registered trademarks of the HDBaseT Alliance.*

<div style="text-align: center;">[![HDBaseT CTA](http://info.belden.com/webadmin/blog/images/2183-Email-Signature_86060.png "HDBaseT CTA")](http://info.belden.com/ecos/hdbaset-cable)</div>