Web Real-Time Communication or WebRTC allows for extremely low-latency peer-to-peer communication. This low-latency communication opens up more possibilities for real-time games and collaboration.
- We have one more thing that we're going to work on. It's going to be a divergence from this one, so 11.js will start from a different point. But I'm going to introduce a topic and do a couple of minutes of lecture before we talk about our final set of technology. So, the goal here today was to give you confidence that you can start to tinker around with Node.js and understand how the mechanisms work, without having to just rely on these big frameworks whose inner workings you don't understand.
Now you understand the underlying fundamental principles: how you can write if statements, even build command-line tools, and things like that. We've wired up HTTP servers, and even WebSocket servers with two-way, near-realtime communication. So we've come quite a long way in the six to seven hours that we've been together. But there's one last step that I want us to take, because it's a really exciting step that represents a move towards the future of what we're going to be doing with the web, and I hinted at that earlier: it's this inflection point where we're moving towards a peer-to-peer based web.
And that hasn't happened yet from an architectural perspective; the web is still fundamentally client-server. But with the concerns we've recently had about security and privacy, I think we're going to see a fairly significant change to the make-up of the way the web is architected over the next five, ten, fifteen years. And while that might seem forever away, and something we don't need to worry about, the people who are going to make that sort of thing happen are already in our industry right now, and starting to think about these things.
And I would like to encourage you, and challenge you, to be one of those people who's starting to think about where the future of our web is headed. This is not just what we've always done; there's a lot of neat stuff coming. WebRTC stands for Web Real-Time Communication, and it's another way of moving us even closer, because we're removing the server middleman and allowing two computers to talk directly to each other. So imagine the scenario: you and a friend decide you want to play a first-person shooter game, so you get introduced through some game server, and now your two computers are directly connected.
There's no server middleman; the only latency is however long it takes traffic to travel from your ISP's routers to theirs, and vice versa. And now you can have latency of anywhere from 15 to 50 milliseconds. So whereas we were talking about 50 to 100 milliseconds with realtime, or near-realtime, WebSockets, we're now down to the 15, 20, 30, 40 millisecond range, on average, between peer-to-peer connections.
Now, that's not true if you're peer-to-peer connected between somebody in Minnesota and somebody in Australia, okay? You're going to be more like 40, 50, 60, 70 milliseconds, best case, just because the speed of light from here to there takes on the order of 40 milliseconds, so there's going to be some variance. But in best-case scenarios, where you're playing with somebody in a different city a couple hundred miles away, you might be able to have sub-20 millisecond response times in terms of messaging back and forth. And that opens up a whole new tier of things that wouldn't have been possible with 100 millisecond latency, but now that we might get 20, they start to become a lot more realistic.
More realtime games, more realtime collaboration, and potentially lots more security. So the peer-to-peer web, and this WebRTC-based approach, is really going to push the capabilities of the web into a whole new realm. So, WebRTC.org is a website that talks about this technology. The main pillars of WebRTC started out with: we need some sort of stream of data to send along, and that's where we got getUserMedia. So, the very first incarnations of a peer-to-peer web had to do with sharing your video, your webcam stream, and your microphone stream with another person; in other words, doing that meet-me conferencing sort of thing.
Kind of like what we've been able to do for years through proprietary protocols like Skype and all those others, now we can do this directly browser-to-browser. And about this time last year, we saw the very first case where a call was made, literally a call, a video and audio call, from a Firefox browser to a Chrome browser. The Mozilla team called the Chrome team, and they were able to have a conference between their two browsers. And what we saw was something very interesting, because this is the first time in the history of the web platform that two different browsers, not just two different versions like Firefox 18 and 19, but two entirely different browser vendors, had to communicate directly with each other.
It's the first time it ever happened, because it's the first technology that's ever directly connected two browser instances. We've always had agreed-upon standards in terms of how we communicate with a server, and we've always relayed information through some middleman; this is the first time that Chrome and Firefox had to agree on the protocol for how they were going to encrypt their data as they talk to each other. That's never happened before, and it was a huge milestone in and of itself that we got to the point where we could see these two talking to each other. So that happened about a year ago.
There's even more stuff that's been happening with WebRTC recently, which I'll speak to in just a moment. But I want to give you a real quick 50,000-foot view of how this WebRTC thing works, so you understand some of the players that are involved in it. The first thing that we need to understand is the most visible demo that you're going to be able to show off to a boss: this idea of capturing your webcam and sharing webcams with each other. So, this is my highly technical architecture document for how webcam capture is going to work.
You've got a camera, and you're sitting there in front of your screen, and the browser pops up a prompt that says, do you want to allow this page to access your webcam? You've all probably seen it pop up at the top. So you say, yes, I want to allow it to access my web camera, and then you get that stream; you did that by using the getUserMedia API that I showed you earlier today. You call that API, and you get that user media stream. You're probably going to want to attach it to some video element so that you can show it to the user. But you don't have to attach it to a video element; you have the stream object, and you could just transport it elsewhere and never even tell them.
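As a minimal sketch of that first step, here's roughly what the getUserMedia call looks like, using the modern promise-based form of the API. The element id "preview" is an assumption; any video element would do.

```javascript
// Ask only for the webcam, not the microphone
const constraints = { video: true, audio: false };

// Request the webcam stream and attach it to a <video> element
function startPreview() {
  return navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
    const video = document.querySelector("#preview"); // assumed element id
    video.srcObject = stream; // attach the live stream to the element
    return video.play();      // start rendering frames
  });
}

// Only run in a browser; `navigator` doesn't exist in plain Node
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  startPreview().catch((err) => console.error("webcam unavailable:", err));
}
```

The browser shows its permission prompt when getUserMedia is called; the promise rejects if the user declines, which is why the error case is handled.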
The only way they would know is if they saw the little green or yellow camera light turn on. Now, I'm sure many of you have seen demos out there where it's not just looking at your webcam; maybe they're doing something funky like a grayscale or a sepia tone, or they've broken the video up, or they're putting it onto a 3D cube in WebGL or something. And you may have wondered, how is it that they're doing that? How are they modifying the video stream in real time? And it turns out they're not. They're not actually modifying the video stream. And this is the most important thing to learn about capturing the video.
When you have a stream into a video element, the video element doesn't have to be visible. It can be a hidden video element that's simply receiving the stream. So if your webcam is updating roughly 30 times a second, then it's going to be updating that live feed into the video element 30 times a second, and the video element fires an event every time it gets a new frame of data from the stream.
So you can listen for that event, capture the image data out of the video element, and write it to a canvas. That's what we're showing here; it's called drawImage. You would call drawImage on the canvas's context and give it the video element as its source. So I'm saying, capture whatever's currently in that frame of video, and draw it out to a canvas. In fact, in the process of drawing it, I can add extra stuff to it. I can superimpose watermarks, do a grey tone on the pixels, and make any other kind of transformation I want to the video, in real time; as the video frame updates 30 times a second, I can pull the pixel data out, make my transformations to it, and draw it to a visible canvas.
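That frame-capture loop can be sketched like this. The grayscale transform is written as a plain function over the flat RGBA pixel array, so the transform itself works anywhere; the element ids and the choice of requestAnimationFrame (instead of a per-frame event) are assumptions for illustration.

```javascript
// Average R, G, B into a grey value; leave alpha untouched.
// `data` is a flat RGBA array (e.g. ImageData.data).
function toGrayscale(data) {
  for (let i = 0; i < data.length; i += 4) {
    const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
    data[i] = data[i + 1] = data[i + 2] = avg;
  }
  return data;
}

// Draw the current video frame into the canvas, transform it, put it back
function drawFrame(video, canvas) {
  const ctx = canvas.getContext("2d");
  // drawImage with width/height also lets you crop or scale here
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  toGrayscale(frame.data);
  ctx.putImageData(frame, 0, 0);
}

if (typeof document !== "undefined") {
  const video = document.querySelector("#source");   // hidden video element
  const canvas = document.querySelector("#display"); // visible canvas
  video.addEventListener("play", function loop() {
    drawFrame(video, canvas);
    requestAnimationFrame(loop); // re-draw roughly in step with rendering
  });
}
```

The video element here can stay hidden; only the transformed canvas is shown to the user, which is exactly the trick those filter demos rely on.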
I can even resize it, as I've implemented here, so I can crop it, and I can scale it down or blow it up or whatever I want to do with it. Now, once I have an image on a canvas, sometimes people want to capture a picture from the webcam and save off that image. The way we do that is kind of the reverse. We take the canvas and call what's called toDataURL, and we get a data URL representation of the image data that's currently in that canvas at that exact moment.
And once we have that data URL, we can put it into an image tag, and now that person can right-click and save the image off as a file on their desktop. Or we have a data URL that we could ship off to the server and upload as a file, something like that. So we do this dance between canvas and image tags, and canvas and video tags. And once you start working with this sort of technology, you end up doing that back-and-forth quite a bit, so you get really familiar with those APIs, drawImage and toDataURL; you go back and forth quite a bit.
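The reverse direction described above can be sketched like this: snapshot the canvas as a data URL, drop it into an image tag for right-click saving, or ship it to a server. The "/upload" endpoint is a made-up assumption, and the small format check is just an illustrative helper.

```javascript
// A PNG data URL always starts with this prefix
function isPNGDataURL(url) {
  return url.startsWith("data:image/png;base64,");
}

// Snapshot the canvas into an <img> the user can right-click and save
function snapshot(canvas, img) {
  const dataURL = canvas.toDataURL("image/png"); // base64 PNG of the canvas
  img.src = dataURL;
  return dataURL;
}

// Or ship the same data URL off to a (hypothetical) server endpoint
function upload(dataURL) {
  return fetch("/upload", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ image: dataURL }),
  });
}
```

toDataURL captures whatever is in the canvas at that exact moment, so calling snapshot during the frame loop freezes a single webcam frame.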
Is there a question? - [Student] Uh, just going to mention that in the media APIs workshop, there's actually an exercise where you add filter effects to video with canvas. - There you go. - [Student] It's right in there with our (mumbles). - So go into that latest workshop and practice that exercise; it'll show you exactly how he's doing it. He probably has better diagrams than I do for that. - [Student] Yeah, it's actually live code and script, so. It's cool. - Alright, so at a very high level, that's how we would do our webcam captures.