CLOUD for NDI



Sienna Cloud Blog #1

Mark Gilbert, CTO, Gallery SIENNA: 9th July 2017, London

 

As the deployment of Sienna Cloud for NDI has progressed, we are learning many lessons about the way outside broadcasts will shape up as the future unfolds. In this note I wanted to share some of that experience, and prime users for a new way of thinking about remote contribution workflows.

 

Real World Deployment of Cloud for NDI

Yesterday I was privileged to work alongside the pioneering folks from USSSA (United States Specialty Sports Association), Bernie and Paul, who have taken Cloud for NDI and really run with it - in the way we had originally hoped for. The short version is that things went very well, and with 3 or 4 live Cloud NDI broadcasts under their belt, they are settling into this as a regular workflow.

The task yesterday was to produce high-end 5 camera coverage of fast pitch softball between the Beijing Eagles and the Texas Charge at FIU Stadium, to be broadcast live on MLB.com. Normally this would require a fairly complex truck to be sent on-site with a full production crew - but with Cloud for NDI, USSSA were able to revisit this whole workflow and redefine it for a new era of sports coverage.

In the 2 pictures above you can see the equipment stack at the stadium and the control room back at the USSSA production facility, with the 2 locations connected *only* with Cloud for NDI over the public internet.

 

Equipment Roster

On site at FIU Stadium, 5 HD cameras covered the field and a reporter, with the 5 HD-SDI signals coming into a NewTek NC1 I/O module to be converted to NDI IP video format. Alongside, 2 MacBook Pro laptops ran the NDI.Cloud Node Gateway engine to carry those 5 streams over USSSA's NDI.Cloud software defined video network, connected via the stadium's fast internet connection. During the game, the total bandwidth used for the 5 HD camera feeds was 50 Mbit/sec.
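To put that figure in perspective, here is a quick back-of-envelope sketch. The 10 Mbit/sec per-camera figure is simply the average implied by the numbers above, and the 1.5x headroom factor is my own assumption rather than a vendor recommendation:

```python
# Rough bandwidth planning for a remote NDI.Cloud-style contribution setup.
# The per-camera average is derived from the totals quoted above; real
# per-stream rates will vary with codec settings and scene complexity.

TOTAL_BANDWIDTH_MBIT = 50   # measured total for the game (from the text)
NUM_CAMERAS = 5

per_camera = TOTAL_BANDWIDTH_MBIT / NUM_CAMERAS
print(f"Average per camera: {per_camera:.0f} Mbit/sec")  # ~10 Mbit/sec

# Planning a larger shoot: scale linearly and add safety headroom
# (the headroom factor is an assumption, not a published specification).
cameras_planned = 8
headroom = 1.5
required = cameras_planned * per_camera * headroom
print(f"Suggested uplink for {cameras_planned} cameras: {required:.0f} Mbit/sec")
```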

 

Back in the main facility, 200 miles away, a full production control gallery with a NewTek IP Series switcher, NewTek 3Plays and another 2 Macs with the NDI.Cloud Node Gateway completed the setup. Cloud for NDI's low latency and multicam sync allowed the same sort of OB workflows you might find where the truck is on-site, or feeds are carried over satellite. USSSA added voiceover, replays, graphics and commercials during the multicam switch and the output was fed live to MLB.Com.

 

The end result looked great, with typical high-end sports production values.

Now, let's see what we have learned from this project.  This is where all the hard work turns into hard experience and allows us to start laying down some guidelines for this new way of covering remote sports.

 

One thing which has become apparent as USSSA have been on site at different stadiums is that the task of securing high speed, high quality internet connectivity is entirely achievable. Often these stadiums are associated with colleges, which naturally have great connectivity, and commercial stadiums have already woken up to the need for high speed connections.

 

The 'Connection Wrangler'

The network design of Cloud for NDI makes life as easy as possible when configuring networking by using only a single UDP port for all traffic for a given Node Gateway. This means that port forwarding requests are clear and simple, with no need to factor in port ranges, dynamic assignment or multicast provision. However, this is still a very important part of the workflow, and it is here that I bring in our first lesson: appoint a fresh member of the workflow team we will call the 'Connection Wrangler' (CW).

It's the Connection Wrangler's job to plan for, negotiate, implement, test, and then monitor the connectivity. In current teams this role doesn't exist, and it's often not practical to dump this very important load onto someone who already has plenty else to focus on. So, you need to include a Connection Wrangler in your team.

During the ahead-of-time preparation phase, the CW will work with the stadium to ensure the connectivity will be in place, and give clear instructions about any port forwarding or other requirements. Testing that the port is open well in advance is a good idea. Back at the facility, the CW may also need to configure the firewall to allow incoming traffic from the stadium.
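As an illustration of that advance test, here is a minimal sketch of a UDP round-trip check, assuming you can run a small responder on the facility side. The port number is just a placeholder, not the port NDI.Cloud actually uses:

```python
# Minimal UDP port check: run with "listen" at the facility (behind the
# forwarded port) and "send <facility-ip>" from the stadium side.
# PORT is a placeholder; substitute whatever single UDP port your
# Node Gateway is configured to use.
import socket
import sys

PORT = 5990  # placeholder port

def listen():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", PORT))
    print(f"Listening on UDP {PORT}...")
    while True:
        data, addr = sock.recvfrom(1024)
        print(f"Received {data!r} from {addr}, echoing back")
        sock.sendto(data, addr)  # echo so the sender can confirm the round trip

def send(host):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(b"cw-port-check", (host, PORT))
    try:
        data, _ = sock.recvfrom(1024)
        print(f"Round trip OK: {data!r}")
    except socket.timeout:
        print("No reply: check port forwarding and firewall rules")

if __name__ == "__main__":
    if sys.argv[1:2] == ["listen"]:
        listen()
    else:
        send(sys.argv[2])
```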

As the setup progresses on game day, the CW will set up any aspects of the NDI.Cloud configuration and start to monitor test feeds of the cameras, ensuring that a stable image arrives back at the facility. Monitoring the CPU loads of the computers, the UDP packet loss and the delivered frame rate will ensure that the CW stays on top of any issues and can address them before the game starts.
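For the OS-level half of that monitoring, a simple sketch along these lines can run beside the Node Gateway. This uses the third-party psutil package, and the 80% CPU threshold is an arbitrary choice for illustration; the NDI tools report their own per-stream statistics, so treat this purely as a supplementary machine health check:

```python
# Lightweight machine health monitor for a Node Gateway host.
# Prints CPU usage and OS-level dropped-packet deltas every few seconds.
# Requires: pip install psutil
import psutil

INTERVAL = 5  # seconds between samples

prev = psutil.net_io_counters()
while True:
    cpu = psutil.cpu_percent(interval=INTERVAL)  # averaged over the interval
    cur = psutil.net_io_counters()
    dropped = (cur.dropin - prev.dropin) + (cur.dropout - prev.dropout)
    prev = cur
    flag = "  <-- investigate" if cpu > 80 or dropped > 0 else ""
    print(f"CPU {cpu:5.1f}%  dropped packets (last {INTERVAL}s): {dropped}{flag}")
```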

 

During the setup and the game yesterday, I acted as the Connection Wrangler, monitoring 2 computers on site and 2 at USSSA's facility - all done via TeamViewer from London. I could see all the nodes, their CPU usage and packet loss, and also the frame rate shown in the NDI Monitor apps. Using an NDI.Cloud shared group, I was even able to temporarily bond my own NDI.Cloud Node into USSSA's NDI.Cloud network and test their streams all the way back to me in London.

 

Computer Performance Planning

This practical experience has added to our database of performance tests, and it's fair to say that we have learned a lot. One thing which has surprised me is quite how capable the latest Apple MacBook Pro 2016 15" is. For USSSA's first NDI.Cloud game they sent a single MBP to the venue and used it to successfully send 4 cameras from a single laptop. However, this really was pushing things a little too far: they needed additional cooling to keep the Mac happy, and it was running a little too close to the edge for my liking. In the game yesterday USSSA provisioned a pair of MacBooks, one doing 3 outgoing streams, the other doing 2 outgoing and one return feed. In practice this worked very well and we had no worries about CPU overload or overheating at the stadium end.

 

Another interesting lesson taught us that the CPU load on the receiving end is actually a little higher than on the sending end, so it's wise to pay close attention to CPU usage and provision hardware properly. In the game yesterday we had a 3rd MacBook Pro and an old Mac Pro tower pressed into service for the receiving end. Interestingly, the 2008 Mac Pro performed poorly compared to the MacBook Pro, and became our main worry. The older GPU in that machine also meant that the TeamViewer connection took its toll on the CPU. A 4th MacBook Pro would have been more comfortable. As a metric, these are quad-core i7 machines at 2.7GHz with 16GB of RAM running macOS 10.12 Sierra. The Mac Pro was a 2008 dual quad-core Xeon which surprisingly appears not to have quite the same grunt as these modern i7 machines.

 

We hope to run more tests on server-class modern computers over the coming months and also evaluate any variations in performance when using macOS, Ubuntu or Windows for the Nodes.

Scaling Up

As a side note, USSSA took advantage of the basic scaling technique in Cloud for NDI by essentially running 4 nodes across the 2 locations, with the 2 node pairs carrying 3 and 2 cameras respectively. This allowed them to scale up beyond what a single node pair could accommodate due to CPU limits. By creating 2 cloud groups and using the whitelisting feature, they were able to force specific cameras down specific paths, ensuring they could control where the processing load fell. USSSA plan to move to more powerful computers to further scale up; in theory you could extrapolate this model to accommodate 8, 16 cameras or more. In the future we plan to simplify scaling up and make it more dynamic, using an automated load balancing system across a cluster of Node CPUs acting as a single virtual node.
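To make the partitioning idea concrete, here is an illustrative sketch of how one might describe such a camera-to-node-pair mapping. The data structure, group names and host names are all invented for illustration and bear no relation to NDI.Cloud's actual configuration format:

```python
# Illustrative camera-to-node-pair partitioning, mirroring the setup above:
# two cloud groups, each whitelisting the cameras its node pair should carry.
# All names here are hypothetical, not real NDI.Cloud configuration.

CLOUD_GROUPS = {
    "group_a": {"node_pair": ("stadium-mbp-1", "facility-mac-1"),
                "whitelist": ["cam1", "cam2", "cam3"]},
    "group_b": {"node_pair": ("stadium-mbp-2", "facility-mac-2"),
                "whitelist": ["cam4", "cam5"]},
}

def route(camera: str) -> str:
    """Return which group (and hence which CPU path) carries a camera."""
    for name, group in CLOUD_GROUPS.items():
        if camera in group["whitelist"]:
            return name
    raise LookupError(f"{camera} is not whitelisted in any group")

for cam in ["cam1", "cam4"]:
    print(cam, "->", route(cam))
```

The point of the whitelist is simply that each node pair only ever encodes and carries its own subset of cameras, so the CPU cost is split deterministically rather than landing wherever streams happen to be requested.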

 

Onward and Upwards

USSSA will continue this journey with further games at FIU this week, and I wanted to both congratulate them and also thank them for their faith in Cloud for NDI.  Their ambition and determination have allowed us to thoroughly prove the system in a real world, real-time live workflow.

 


 

There is a NewTek Blog Case Study on USSSA and Cloud for NDI here