Many of the devices we use on a daily basis - mobile phones, tablets, smart TVs, some desk phones, personal computers, and in the not-so-distant future, wearable technologies - are connected to the Internet. With WebRTC, all of these devices can exchange voice, video, and real-time data seamlessly between one another on a common platform.
Main Components of WebRTC
The three major components of WebRTC are:
- MediaStream (obtained via the getUserMedia API) gives the browser access to local media devices such as the camera and microphone, and lets it capture their output as media streams.
- RTCPeerConnection allows browsers to connect directly with other browsers (peers) for voice and video calls.
- RTCDataChannel allows browsers to exchange arbitrary data peer-to-peer.
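As a rough sketch of how these three pieces fit together in a browser, the snippet below captures local media, attaches it to a peer connection, opens a data channel, and creates an offer. The signaling step that delivers the offer to the remote peer is omitted, since WebRTC leaves it to the application, and the STUN server URL is only an illustrative public one.

```javascript
// Minimal sketch of the three WebRTC APIs (runs in a browser).
async function startCall() {
  // MediaStream: capture local audio and video via getUserMedia.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });

  // RTCPeerConnection: create a connection and add the captured tracks.
  // The STUN server shown here is just an example.
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // RTCDataChannel: open a channel for arbitrary peer-to-peer data.
  const channel = pc.createDataChannel('chat');
  channel.onopen = () => channel.send('hello');

  // Create an SDP offer; in a real application this is sent to the
  // remote peer over a signaling channel of the developer's choosing.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  return pc;
}
```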
A Brief History of WebRTC
About a year after the launch of Google Chrome, the idea for WebRTC came about when the Chrome team started looking for functionality gaps between the web and the native desktop. They soon realized that there was no satisfactory solution for real-time communications.
At the time, RTC on the web meant using either Flash or plug-ins. Applications built on Flash delivered low-quality experiences and required server licenses to run. Plug-ins were a hassle not only for end users to install, but also for the developers creating them: maintaining the plug-in lifecycle became a resource strain, as organizations had to deploy, monitor, and update different versions for different browsers across several operating systems.
In June 2011, Google released WebRTC as an open source project, after acquiring in 2010 both On2, the creators of the VP8 video codec, and Global IP Solutions, a company that was already licensing the low-level components needed for WebRTC. Since then, many other companies have contributed to the WebRTC project, including Mozilla, Ericsson, and AT&T.
Today, there are already well over 1 billion potential WebRTC endpoints, and that number is rapidly growing.