Although latency and resource coordination continue to challenge those considering cloud-based live production, the distributed architecture model is steadily gaining traction as a cost-effective alternative to hardware-based on-premise projects. To date this IT-centric architecture has not been deployed for high-profile productions like the Super Bowl or World Cup, but remote IP-video contribution, production and distribution has allowed second-tier sporting events to be televised globally that might otherwise not be, given the cost and access constraints of traditional production methods.
Bandwidth suppliers are also adapting. The Switch, a production and distribution company based in New York, offers a suite of on-demand cloud services called MIMiC and has seen an increase in demand since the beginning of the pandemic. Leveraging UK-based Tellyo's Stream Studio and Tellyo Pro as core elements, the cloud-based TV Production-as-a-Service offering also provides clipping and real-time editing capabilities. The global IP delivery service allows sports broadcasters, streaming services and other rightsholders to take video feeds from anywhere in the world and simultaneously deliver them to multiple destinations – up to hundreds – via the internet and The Switch's private network.
The Switch reports a number of customers who use the MIMiC Production platform to stream studio cameras to the cloud and produce the show there, a workflow being adopted more and more for lower-cost shows.
The two main drivers for this use case are the ability to operate on an on-demand OpEx model that doesn't require capital investment in a traditional control room and, for those with existing control rooms, the desire to free up those facilities for more complex productions by moving the shows where it makes better sense to cloud-based production.
Using MIMiC, camera feeds are encoded using SRT or RIST encoders and delivered via the internet into the public cloud-based production system. Audio is embedded in the camera feeds to ensure that there is no audio/video misalignment. The camera and audio feeds are then available to cloud-based production switchers and audio mixers where the show is produced, along with file-based video material, graphics and other contributors or guests. The final output of the show is streamed live to its various destinations, be they traditional broadcast destinations or social media platforms.
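The contribution leg of this workflow can be sketched with a short script that assembles an ffmpeg command pushing one camera feed, with its audio embedded, to a cloud ingest point over SRT. The hostname, port, bitrates and latency value below are illustrative assumptions, not The Switch's or Tellyo's actual settings.

```python
# Minimal sketch of launching an SRT contribution encode for one camera.
# All endpoint names and encoding parameters are hypothetical examples.

def build_srt_contribution_cmd(camera_input: str, ingest_host: str,
                               port: int, latency_ms: int = 500) -> list:
    """Build an ffmpeg command that pushes a camera feed (with embedded
    audio) to a cloud ingest point over SRT in caller mode."""
    # ffmpeg's SRT protocol takes its latency option in microseconds;
    # keeping audio embedded in the same stream avoids A/V misalignment.
    url = "srt://{}:{}?mode=caller&latency={}".format(
        ingest_host, port, latency_ms * 1000)
    return [
        "ffmpeg",
        "-i", camera_input,             # camera source (SDI gateway, file, etc.)
        "-c:v", "libx264", "-preset", "veryfast", "-b:v", "8M",
        "-c:a", "aac", "-b:a", "192k",  # audio stays embedded with the video
        "-f", "mpegts",                 # SRT payload is typically MPEG-TS
        url,
    ]

cmd = build_srt_contribution_cmd("cam1.mp4", "ingest.example.com", 9000)
print(" ".join(cmd))
```

The same pattern applies per camera; a RIST encoder would be configured analogously, with its own buffer/latency setting in place of the SRT one.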
In addition, the crew producing the shows is generally distributed across different locations, with everyone receiving web-based multi-viewer feeds and control surfaces for switching, audio mixing and replay. All of the crew are connected by a cloud-based intercom system, including the talent at the studio and elsewhere.
As the foundation of the MIMiC service, Tellyo Stream Studio supports up to 24 live streams that can be composited within eight scenes (acting as mix/effects, or M/E, units), enabling composition of multi-layered video, audio, live graphics and more, all fully customizable for resizing, cropping, repositioning and transforming. Stream Studio also offers direct editing at the scene (M/E) level and on a specific source.
As anyone experienced in remote production (REMI) understands, latency is one of the issues that you have to work around when operating in the cloud. The fundamental cause of the latency is the use of unmanaged IP connections to deliver video to public cloud-based infrastructure. This is true for internet-based delivery of the video and also, to a lesser extent, within the cloud infrastructure through the use of shared networking.
These unmanaged IP connections require protocols that allow the recovery of missing packets, such as SRT or RIST. Although these protocols can operate at sub-500 ms delays, you typically end up with 1-2 seconds of delay between the camera and the switcher. Since the switcher interfaces and multi-viewers are also delayed through web-based delivery to the technical directors and producers, you can end up with 2-3 seconds of latency between camera and crew (and on return feeds for the talent). For some productions, this isn't difficult to handle. However, when very tight coordination is needed between technical director, graphics, talent and producer, you need to make sure that everyone involved adjusts to, and is aware of, the delays.
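The camera-to-crew delay described above is simply the sum of the contribution and return legs. A back-of-the-envelope budget, using the rough midpoints of the article's ranges (these are assumptions, not measurements), looks like this:

```python
# Illustrative latency budget for the cloud production chain.
# Figures are rough midpoints of the ranges cited in the text.

budget_ms = {
    "srt_rist_contribution": 1500,   # camera -> cloud switcher (1-2 s typical)
    "web_multiviewer_return": 1000,  # switcher UI / multi-viewer to remote crew
}

total_ms = sum(budget_ms.values())
print("camera-to-crew latency: ~{:.1f} s".format(total_ms / 1000))
# prints "camera-to-crew latency: ~2.5 s", inside the 2-3 s range above
```

Each additional hop (a return feed to talent, for example) adds its own term to this sum, which is why every participant needs to know where in the chain they sit.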
The real-time intercom communications allow instant voice interactions between all parties, which helps coordinate the production effectively. How much latency is too much depends on the show. For some shows, 30 seconds of delay isn't an issue. For others, where two remote parties require an interactive discussion, you need sub-100 ms. The Switch solves this by deploying a real-time cloud-based audio overlay that allows for the interactivity, while the video with embedded audio is handled with the normal delays.
Perhaps the biggest challenge to using a cloud-based production workflow is getting everyone involved in the production used to handling and accommodating the latency.
Professionals in the industry are accustomed to operating systems where latency is very low, and it takes some adjustment to work within a system where more latency is involved. Those who have embraced this have made the transition with little difficulty. It is very similar to productions that employ remote feeds via satellite or bonded cellular, which also require people to adopt an operating practice different from the norm.
Of course, the advantages of a distributed architecture are numerous, with the most significant being cost – which manifests itself in several ways. Live production is well suited to on-demand infrastructure – since live production systems are rarely used continuously around the clock. The investment required to build a dedicated control room can be difficult for some organizations. For those productions suited to the cloud, tapping into its flexibility, scalability and efficiency offers an excellent alternative to traditional production methods.
A cloud-based approach can reduce costs, enabling both more secondary productions and more content targeted at specific audiences. Cloud-based production allows content creators to produce events for niche audiences and to serve smaller geographic areas – content that likely would not be commercially viable with more traditional production methods.
Other advantages of cloud-based production include: the ability to easily scale the number of simultaneous productions without having to find the investment and space required for traditional control rooms to meet peaks in demand; flexibility in using a distributed crew, with operators and production staff working remotely and dispersed; and the ability to easily include remote guests in productions via web cameras over the internet. Its time, and the value it provides, has clearly arrived.