As more all-IP facilities come online around the world, broadcast engineers are taking a page from their IT counterparts' playbook and deploying system infrastructures that can handle and deterministically route audio and video as IP data. One of the most commonly deployed networking topologies is the spine-and-leaf architecture.
This type of infrastructure provides two identical, redundant signal paths, which are critical in the broadcast world. Every lower-tier switch (the leaf layer) is connected to each of the top-tier switches (the spine layer) in a full-mesh topology. The leaf layer consists of access switches that connect to end devices such as servers, while the spine layer forms the backbone of the network and interconnects all of the leaf switches. Paths are chosen at random so that traffic load is distributed evenly across the spine switches, and if one spine switch fails, performance across the network degrades only slightly.
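To make the wiring pattern concrete, here is a minimal sketch (in Python, with hypothetical switch names) of how a full-mesh spine-and-leaf fabric is cabled and why losing one spine removes only a fraction of the capacity:

```python
# Minimal sketch of full-mesh spine-and-leaf wiring (hypothetical switch names).
# Every leaf switch gets one uplink to every spine switch, so any leaf-to-leaf
# path is exactly two hops, and the loss of one spine only removes a fraction
# of the available uplink capacity.

spines = ["spine-1", "spine-2"]
leaves = ["leaf-1", "leaf-2", "leaf-3", "leaf-4"]

links = [(leaf, spine) for leaf in leaves for spine in spines]

for leaf, spine in links:
    print(f"{leaf} <-> {spine}  (100 Gb/s uplink)")

# With one spine out of service, each leaf still reaches every other leaf
# through the remaining spine(s); aggregate uplink capacity drops by 1/len(spines).
```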
However, a pure spine & leaf topology can’t benon-blocking for the type of multicast traffic managed at all-IP broadcast facilities. So engineers typically start with this system architecture but make two important modifications to render it suitable for broadcast applications. First, they employ a single modular spine switch per network area, which is itself internally built like aspine-leaf network.
One major production and distribution facility in North America that recently opened has four network areas, and thus four spine switches and four segregated failure zones (production red, production blue, presentation red, presentation blue), where red and blue are the redundant paths. Both the production spines and the presentation spines are handled by Arista Networks data switches.
The second modification concerns path selection. Traditional IT techniques such as equal-cost multi-path (ECMP) routing choose paths by hashing IP-packet headers, which yields the same result for every packet of a given media flow. For that to balance well, a very high number of flows per 100 Gb/s link is required, so the network relies heavily on the agility of specialized software to guarantee an equal distribution of flows across all uplinks.
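As a rough illustration of why hash-based ECMP needs many flows to balance, the following sketch (with hypothetical addresses and uplink names) picks an uplink by hashing a flow's 5-tuple; every packet of one media flow lands on the same uplink, so even distribution only emerges across many distinct flows:

```python
# Sketch of ECMP-style path selection: the uplink is chosen by hashing the
# flow's packet-header fields (a 5-tuple here). Every packet of a given media
# flow carries the same header values, so every packet hashes to the same
# uplink; balanced utilization only emerges when many distinct flows share
# the link. Addresses and uplink names are hypothetical.
import hashlib

UPLINKS = ["spine-1", "spine-2", "spine-3", "spine-4"]

def pick_uplink(src_ip, dst_ip, src_port, dst_port, proto="UDP"):
    five_tuple = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(five_tuple).digest()[:4], "big")
    return UPLINKS[digest % len(UPLINKS)]

# One multicast video flow: every packet takes the same uplink.
print(pick_uplink("10.1.1.20", "239.10.0.1", 5004, 5004))

# Many distinct flows spread (statistically) across all uplinks.
flows = [("10.1.1.20", f"239.10.0.{i}", 5004, 5004) for i in range(1, 9)]
print([pick_uplink(*f) for f in flows])
```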
The new building has its own on-premises data center that hosts the spine switches. Wherever new network connectivity is needed, a new leaf switch is simply installed. The link between spine and leaf carries as many 100 Gb/s bi-directional connections as required, anywhere in the building, while the link between the leaf and each end device is whatever that device requires. At 100 Gb/s per uplink, there is plenty of capacity to pass and seamlessly share audio, video and data.
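Some back-of-the-envelope arithmetic shows why 100 Gb/s is considered plenty; the per-flow rates below are approximate figures for uncompressed ST 2110-20 video, and the headroom factor is an assumption for illustration:

```python
# Back-of-the-envelope capacity check for one 100 Gb/s leaf uplink.
# Approximate rates for uncompressed ST 2110-20 video (10-bit 4:2:2):
#   ~3 Gb/s per 1080p59.94 flow, ~12 Gb/s per 2160p59.94 flow.
# Audio (ST 2110-30) and ancillary data are comparatively negligible.
LINK_GBPS = 100
HD_GBPS = 3.0      # approximate
UHD_GBPS = 12.0    # approximate
HEADROOM = 0.8     # assumed: keep ~20% headroom for bursts and protection traffic

usable = LINK_GBPS * HEADROOM
print(f"~{int(usable // HD_GBPS)} HD flows or ~{int(usable // UHD_GBPS)} UHD flows per uplink")
```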
“The network doesn’t care about video resolution or audio file size,” said one engineer involved with the project. “It just needs to be able to recognize an end device and control it instantly. Unfortunately, every company offers a different way to talk to their devices. That’s an ongoing problem.”
Which brings us to the other big challenge in media network design: fast and accurate device discovery on the network. Finding a specific device on a multi-node network can be difficult if it is not configured correctly. That’s why the folks at the Advanced Media Workflow Association (AMWA) have developed a series of Networked Media Open Specifications (NMOS), the best known being IS-04, a set of APIs that provide the means to discover nodes on a network and their associated resources for processing video, audio and other data. These APIs run on the network node devices themselves and expose each node’s sender and receiver resources so that connections can be made between them. Without it, good luck finding that multiviewer for control room #3, which is currently being used in control room #2.
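In practice, finding that multiviewer means asking an IS-04 registry. The sketch below (the registry address and API version are assumptions for illustration) queries the IS-04 Query API for registered receivers and filters them by label:

```python
# Sketch of looking up devices through an NMOS IS-04 Query API.
# The registry address and API version are assumptions for illustration.
import requests

REGISTRY = "http://registry.example.local"   # hypothetical registry host
QUERY = f"{REGISTRY}/x-nmos/query/v1.3"

# List every node the registry knows about, then find receivers whose label
# mentions "multiviewer", for example the one that belongs in control room #3.
nodes = requests.get(f"{QUERY}/nodes", timeout=5).json()
receivers = requests.get(f"{QUERY}/receivers", timeout=5).json()

multiviewers = [r for r in receivers if "multiviewer" in r.get("label", "").lower()]
for r in multiviewers:
    print(r["id"], r["label"], r["transport"])
```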
On the endpoint side, each manufacturer needs to embed the appropriate APIs in its devices, build an interface accessible to the user and integrate NMOS with its product. The integration includes sending and receiving NMOS messaging alongside the ST 2110 essence flows. An NMOS registry, working with a broadcast controller, then manages the transport and communication between devices across the entire media network. This is essentially another layer on top of the network that functions much like an SDN.
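On the device side, that integration boils down to a node announcing itself to the registry and keeping a heartbeat alive. A trimmed, illustrative sketch (the registry address, node ID and payload fields are hypothetical, and the payload is not a complete IS-04 node resource) might look like this:

```python
# Sketch of the device-side half: a node registering itself with an IS-04
# Registration API and then sending a heartbeat. Registry address, node ID
# and payload fields are illustrative, not a vendor's actual code, and the
# payload is trimmed rather than a complete node resource.
import time
import requests

REGISTRATION = "http://registry.example.local/x-nmos/registration/v1.3"
NODE_ID = "3b8c1d2e-0000-4000-8000-000000000001"   # hypothetical UUID

node_resource = {
    "type": "node",
    "data": {
        "id": NODE_ID,
        "version": f"{int(time.time())}:0",
        "label": "camera-42",
        "description": "",
        "tags": {},
        "href": "http://10.1.5.42/",
        "caps": {},
        "api": {"versions": ["v1.3"], "endpoints": []},
        "services": [],
        "clocks": [],
        "interfaces": [],
    },
}

requests.post(f"{REGISTRATION}/resource", json=node_resource, timeout=5)

# The registry expects a heartbeat roughly every 5 seconds, or it garbage-collects the node.
requests.post(f"{REGISTRATION}/health/nodes/{NODE_ID}", timeout=5)
```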
Unfortunately, due to a lack of industry-wide understanding of how to deploy these NMOS specifications, and the fact that they have not been officially standardized (which has led to a variety of methods and incompatible implementations), adoption of NMOS IS-04 has been slow or a non-factor in many ST 2110 media networks now being deployed.
Because NMOS is not standardized (and is not part of the ST 2110 standard), many have been concerned that different equipment vendors can, and do, interpret the specifications in their own way, much as happened with MXF. Vendors are also not obligated to incorporate it at all, though realistically it makes good business sense to stay compatible with third-party devices.
The AMWA has said that individual vendors must embed (or implement) the various NMOS APIs in their products to build an interoperable ecosystem. Vendors either write software that presents an interface listening on the network for incoming API commands, or they write applications that emit API commands, as in the case of a broadcast controller asking a camera (which understands IS-05 API commands) to do what the controller requests.
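As a hedged example of that second pattern, a broadcast controller making a route with IS-05 PATCHes the receiver's staged endpoint with the chosen sender and an immediate activation; the device address and resource IDs below are hypothetical:

```python
# Sketch of a broadcast controller driving a route with IS-05: PATCH the
# receiver's "staged" endpoint with the chosen sender and an immediate
# activation. Device address, receiver ID and sender ID are hypothetical.
import requests

DEVICE = "http://10.1.5.50"   # hypothetical IS-05 node
RECEIVER_ID = "9f2a7c64-0000-4000-8000-00000000abcd"
SENDER_ID = "1c3d5e7f-0000-4000-8000-00000000ef01"

staged = {
    "sender_id": SENDER_ID,
    "master_enable": True,
    "activation": {"mode": "activate_immediate"},
}

url = f"{DEVICE}/x-nmos/connection/v1.1/single/receivers/{RECEIVER_ID}/staged"
resp = requests.patch(url, json=staged, timeout=5)
print(resp.status_code, resp.json().get("activation"))
```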
Until a consensus is reached, broadcast engineers and network systems designers would be wise to plan carefully and, perhaps, hire their own coder to write custom APIs. When handling hundreds or thousands of streams within a building, some type of spine-and-leaf topology might also be the right route for your redundant network to take. Trial and error has always been the best teacher. And, in this era of audio and video as IP data traffic, the IT industry might be the first resource to consult.