In other news, GStreamer is now almost buzzword-compliant! The next blog post on our list: blockchains and smart contracts in GStreamer.
Late last year, we at Centricular announced a new implementation of WebRTC in GStreamer. Today we’re happy to announce that after community review, that work has been merged into GStreamer itself! The plugin is called webrtcbin, and the library is, naturally, called gstwebrtc.
The implementation has all the basic features, is transparently compatible with other WebRTC stacks (particularly in browsers), and has been well-tested with both Firefox and Chrome.
Some of the more advanced features such as FEC are already a work in progress, and others will be too—if you want them to be! Hop onto IRC on #gstreamer @ Freenode.net or join the mailing list.
How do I use it?
Currently, the easiest way to use webrtcbin is to build GStreamer using either gst-uninstalled (Linux and macOS) or Cerbero (Windows, iOS, Android). If you’re a patient person, you can follow @gstreamer and wait for GStreamer 1.14 to be released, which will include Windows, macOS, iOS, and Android binaries.
The API currently lacks documentation, so the best way to learn it is to dive into the source-tree examples. Help on this will be most appreciated! To see how to use GStreamer to do WebRTC with a browser, check out the bidirectional audio-video demos that I wrote.
Show me the code!
Here’s a quick highlight of the important bits that should get you started if you already know how GStreamer works. This example is in C, but GStreamer also has bindings for Rust, Python, Java, C#, Vala, and so on.
Let’s say you want to capture video from V4L2, stream it to a webrtc peer, and receive video back from it. The first step is the streaming pipeline, which will look something like this:
v4l2src ! queue ! vp8enc ! rtpvp8pay ! application/x-rtp,media=video,encoding-name=VP8,payload=96 ! webrtcbin name=sendrecv
As a shortcut, let’s parse the string description to create the pipeline.
```c
GstElement *pipe;

pipe = gst_parse_launch ("v4l2src ! queue ! vp8enc ! rtpvp8pay ! "
    "application/x-rtp,media=video,encoding-name=VP8,payload=96 ! "
    "webrtcbin name=sendrecv", NULL);
```
Next, we get a reference to the webrtcbin element and attach some callbacks to it.
```c
GstElement *webrtc;

webrtc = gst_bin_get_by_name (GST_BIN (pipe), "sendrecv");
g_assert (webrtc != NULL);

/* This is the gstwebrtc entry point where we create the offer.
 * It will be called when the pipeline goes to PLAYING. */
g_signal_connect (webrtc, "on-negotiation-needed",
    G_CALLBACK (on_negotiation_needed), NULL);

/* We will transmit this ICE candidate to the remote using some
 * signalling. Incoming ICE candidates from the remote need to be
 * added by us too. */
g_signal_connect (webrtc, "on-ice-candidate",
    G_CALLBACK (send_ice_candidate_message), NULL);

/* Incoming streams will be exposed via this signal */
g_signal_connect (webrtc, "pad-added",
    G_CALLBACK (on_incoming_stream), pipe);

/* Lifetime is the same as the pipeline itself */
gst_object_unref (webrtc);
```
When the pipeline goes to PLAYING, the on_negotiation_needed() callback will be called, and we will ask webrtcbin to create an offer which will match the pipeline above.
```c
static void
on_negotiation_needed (GstElement * webrtc, gpointer user_data)
{
  GstPromise *promise;

  promise = gst_promise_new_with_change_func (on_offer_created,
      user_data, NULL);
  g_signal_emit_by_name (webrtc, "create-offer", NULL, promise);
}
```
When webrtcbin has created the offer, it will call on_offer_created():
```c
static void
on_offer_created (GstPromise * promise, GstElement * webrtc)
{
  GstWebRTCSessionDescription *offer = NULL;
  const GstStructure *reply;

  reply = gst_promise_get_reply (promise);
  gst_structure_get (reply, "offer",
      GST_TYPE_WEBRTC_SESSION_DESCRIPTION, &offer, NULL);
  gst_promise_unref (promise);

  /* We can edit this offer before setting and sending */
  g_signal_emit_by_name (webrtc, "set-local-description", offer, NULL);

  /* Implement this and send offer to peer using signalling */
  send_sdp_offer (offer);
  gst_webrtc_session_description_free (offer);
}
```
Similarly, when we have the SDP answer from the remote, we must call "set-remote-description" on webrtcbin.
```c
GstWebRTCSessionDescription *answer;

answer = gst_webrtc_session_description_new (GST_WEBRTC_SDP_TYPE_ANSWER,
    sdp);
g_assert (answer);

/* Set remote description on our pipeline */
g_signal_emit_by_name (webrtc, "set-remote-description", answer, NULL);
```
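The `sdp` argument above is a `GstSDPMessage`. If your signalling channel delivers the answer as raw SDP text, it can be parsed first. A minimal sketch, assuming `text` holds the SDP string received from the peer:

```c
GstSDPMessage *sdp;
GstWebRTCSessionDescription *answer;

/* Parse the raw SDP text received over signalling into a GstSDPMessage */
gst_sdp_message_new (&sdp);
gst_sdp_message_parse_buffer ((guint8 *) text, strlen (text), sdp);

/* The session description takes ownership of the GstSDPMessage */
answer = gst_webrtc_session_description_new (GST_WEBRTC_SDP_TYPE_ANSWER, sdp);
```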
ICE handling is very similar; when the "on-ice-candidate" signal is emitted, we get a local ICE candidate which we must send to the remote. When we have an ICE candidate from the remote, we must call "add-ice-candidate" on webrtcbin.
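The two halves of ICE handling might look like the sketch below. Here `send_to_peer()` is a hypothetical stand-in for your signalling transport, `on_remote_ice_candidate()` is a hypothetical handler you would call from your signalling code, and `webrtc` is the webrtcbin reference obtained earlier:

```c
/* Local candidates: emitted by webrtcbin via "on-ice-candidate",
 * forwarded to the remote peer over signalling.
 * send_to_peer() is a hypothetical helper, not part of GStreamer. */
static void
send_ice_candidate_message (GstElement * webrtc, guint mlineindex,
    gchar * candidate, gpointer user_data)
{
  send_to_peer (mlineindex, candidate);
}

/* Remote candidates: received over signalling, handed to webrtcbin */
static void
on_remote_ice_candidate (GstElement * webrtc, guint mlineindex,
    const gchar * candidate)
{
  g_signal_emit_by_name (webrtc, "add-ice-candidate", mlineindex, candidate);
}
```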
There’s just one piece left now: handling incoming streams sent by the remote. For that, we have on_incoming_stream() attached to the "pad-added" signal on webrtcbin.
```c
static void
on_incoming_stream (GstElement * webrtc, GstPad * pad, GstElement * pipe)
{
  GstElement *play;

  play = gst_parse_bin_from_description (
      "queue ! vp8dec ! videoconvert ! autovideosink", TRUE, NULL);
  gst_bin_add (GST_BIN (pipe), play);

  /* Start displaying video */
  gst_element_sync_state_with_parent (play);
  gst_element_link (webrtc, play);
}
```
That’s it! This is what a basic WebRTC workflow looks like. Those of you who have used the PeerConnection API before will be happy to see that this maps to it quite closely.
The aforementioned demos also include a WebSocket signalling server and JS browser components, and I will be writing an in-depth newbie developer’s guide to building applications at a later time, so you can follow me @nirbheek to hear when it comes out!
Tell me more!
The code is already being used in production in a number of places, such as EasyMile’s autonomous vehicles, and we’re excited to see where else the community can take it.
If you’re wondering why we decided a new implementation was needed, read on! For a more detailed discussion, you should watch Matthew Waters’ talk from the GStreamer conference last year. It’s a great companion for this article!
But before we can dig into details, we need to lay some foundations first.
What is GStreamer, and what is WebRTC?
Everything is great, let’s build amazing apps!
WebRTC in GStreamer — webrtcbin and gstwebrtc
This, combined with the SRTP and DTLS plugins that were written during OpenWebRTC’s development, meant that our implementation is built upon a solid and well-tested base, and that implementing WebRTC features is not as difficult as one might presume. However, WebRTC is a large collection of standards, and reaching feature-parity with libwebrtc is an ongoing task.
Lucky for us, Matthew made some excellent decisions while architecting the internals of webrtcbin, and we follow the PeerConnection specification quite closely, so almost all the missing features involve writing code that would plug into clearly-defined sockets.
We believe what we’ve been building here is the most flexible, versatile, and easy to use WebRTC implementation out there, and it can only get better as time goes by. Bringing the power of pipeline-based multimedia manipulation to WebRTC opens new doors for interesting, unique, and highly efficient applications.
To demonstrate this, in the near future we will be publishing articles that dive into how to use the PeerConnection-inspired API exposed by webrtcbin to build various kinds of applications—starting with a CPU-efficient multi-party bidirectional conferencing solution with a mesh topology that can work with any webrtc stack.
Original article: http://blog.nirbheek.in/2018/02/gstreamer-webrtc.html