For all the hype and momentum surrounding WebRTC, there is something it doesn't do very well out of the box: large-scale live broadcast, where it needs a lot of help. Issues such as distribution, latency, and mirroring all kick in once companies move from a simple peer-to-peer video session to multi-party sessions and large-scale broadcasts. Throw in some transcoding and things start to look, well, daunting.
A lot of the current discussion around WebRTC broadcast was kicked off toward the end of last year by TokBox introducing its Spotlight Interactive Broadcast Solution. Spotlight is engineered to enable multi-party panels, video-based audience participation, and scaling to hundreds of viewers. An individual can join a live broadcast, ask to participate, and be admitted in real time alongside one or more presenters, with the session streamed to all audience members.
Fox Sports built its weekly live college football chat show using TokBox Spotlight. Advantages TokBox touts include no need for plug-ins or third-party apps through the use of WebRTC; "minimal" development work with a customizable user interface, so it is easy to embed the broadcast solution into existing websites and apps; and an intuitive producer workflow, including built-in recording capabilities to distribute video content across social media outlets.
The "simple" picture for distribution today goes from a WebRTC session into a broadcast service that performs transcoding to different formats and distribution to the masses. For live real-time broadcasts, transcoding adds latency in switching from the originating WebRTC format into other formats, including any transcoding between VP8/VP9 and H.264.
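To see why transcoding dominates the latency budget in that "simple" picture, here is a back-of-the-envelope sketch of a glass-to-glass delay for the WebRTC-ingest, transcode, CDN-distribute path. All per-stage figures below are illustrative assumptions, not measurements from any particular service:

```python
# Rough latency budget (assumed numbers) for the pipeline described above:
# WebRTC session -> broadcast service transcode -> packaged distribution.
PIPELINE_MS = {
    "webrtc_ingest": 200,            # capture + encode + network to the service
    "transcode_vp8_to_h264": 2000,   # format conversion adds buffering delay
    "segment_and_package": 4000,     # e.g. HLS-style segmenting for delivery
    "cdn_delivery": 800,             # edge distribution to viewers
}

total_ms = sum(PIPELINE_MS.values())
print(f"~{total_ms / 1000:.1f} s end-to-end")
```

With these assumed figures the transcode and packaging stages dominate; a WebRTC-native delivery path would skip the middle two stages entirely, which is the argument for carving transcoding out.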
Scaling also means taking a single video stream and mirroring/multiplying it to hundreds and thousands of users, with the bandwidth and infrastructure to match. That is a lot of heavy lifting with the current WebRTC media stack.
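The heavy lifting in that mirroring step is easy to quantify with simple arithmetic. A minimal sketch, assuming an SFU-style relay that sends every viewer a full copy of a single stream (the 1.5 Mbps bitrate and the function name are illustrative assumptions, not from any real API):

```python
# Back-of-the-envelope: server egress needed to mirror one stream to N viewers.
def egress_bandwidth_mbps(viewers: int, stream_kbps: int) -> float:
    """Total egress in Mbps if every viewer receives a full copy of the stream."""
    return viewers * stream_kbps / 1000.0

# One assumed 1.5 Mbps VP8 stream mirrored to 5,000 viewers:
total = egress_bandwidth_mbps(5000, 1500)
print(f"{total:.0f} Mbps of egress")  # 7500 Mbps, i.e. ~7.5 Gbps
```

Linear growth in egress with audience size is exactly why large-scale broadcast pushes providers toward CDN-style distribution tiers rather than a single media server.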
Getting rid of latency means carving out transcoding, which is where there seems to be a consensus that things like Flash, Apple's HLS, and MPEG should go away as intermediate steps. Since most browsers support WebRTC natively, it doesn't make sense to keep supporting other streaming formats in the future. The bonus is that the complexities and expenses of transcoding into different formats go away as well.
Scaling to hundreds and thousands means infrastructure. Current content delivery networks (CDNs) purpose-built for distributing video don't support streaming WebRTC as an option. This will change in the future as CDNs adopt WebRTC and existing WebRTC service providers build the infrastructure and software -- hello SDN and NFV! -- to provide scale.
I expect the bigger players in the "as a Service" space to come up with value-added options for large-scale WebRTC broadcast that include real-time streaming to hundreds and thousands of viewers, with all of the bells and whistles we've come to expect from existing CDNs, including low latency, fewest-hop network delivery, geographically distributed servers, and service level agreements (SLAs) for uptime and latency. Not everyone is going to need a scalable broadcast option, but those that do will be willing to pay for high reliability and an optimum experience.
Edited by Stefania Viscusi