With COP16 coming up in just a few months, we've been working at MFPL on organizing people from around the world to share live video of COP16-related events: protests, panels, performances, etc. The goal is to use live video as an alternative to expensive and environmentally destructive travel - a goal well suited to the environmental focus of COP16.

I've spent the better part of the last three weekends figuring out how to do this on Debian Squeeze using all free software and codecs.

The debconf organizers have done an amazing job developing and documenting how to broadcast live video from a conference using DV via firewire input. It's impressive and, based on my experiences, works quite well.

The problem is the firewire port. Computers aren't made with firewire cards any more and even if they were, I don't have a video camera with firewire out. I do, however, have a laptop with a USB camera and a mini audio plug that will take a cheap microphone. And, there are millions of others in this position.

The other drawback to the debconf approach is that it assumes all of the video cameras are attached to the same local network. We are interested in having people contribute video content from all over the world.

The main pieces


Video

video4linux is the kernel programming interface that makes all of this possible. Most video applications available on linux support it, as do most of the web cameras I've experimented with.
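As a quick sanity check before trying any of the tools below, you can confirm that the kernel has actually created a video4linux device node (the /dev/video0 path used throughout this post is just the usual default):

```shell
# List whatever video4linux device nodes exist; fall back to a message
# if there are none (device paths vary from machine to machine)
ls /dev/video* 2>/dev/null || echo "no video4linux devices found"
```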


Audio

Audio proved to be the most difficult piece - far harder than video. Many of my problems turned out to be specific to my hardware, but not all of them.

Part of the problem is that audio on linux is a real mess: there are a half dozen APIs in use for accessing your sound card. Furthermore, in my case, my sound card is not well supported in linux (more on that below). The important point is that, wherever possible, I tried to use alsa.
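One way to see what alsa can capture from - and where device names like hw:0,0 used later in this post come from - is arecord from the alsa-utils package:

```shell
# Show the ALSA capture devices; the "card N / device N" numbers printed
# here are what the hw:N,N names refer to. Fall back to a message if
# arecord is missing or finds no capture hardware.
arecord -l 2>/dev/null || echo "arecord found no capture devices (or is not installed)"
```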


Codecs

I decided to focus only on free and open source codecs. Google appears to have freed VP8, which means we may be moving away from theora-encoded ogg files toward VP8-encoded .webm files. For now, though, all of the Debian tools in squeeze work with theora, vorbis, and the ogg container.

Media Server

icecast is a streaming media server. The goal is to get my laptop to send an audio/video stream to our icecast server, which will then be responsible for re-distributing it over the Internet.
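For completeness: a stock icecast2 install on Debian mostly works out of the box. The piece that matters for the commands in this post is the source password in /etc/icecast2/icecast.xml - it's the "secret" that oggfwd and friends send when pushing a stream (the value below is a placeholder):

```xml
<authentication>
    <!-- the password a source client must present to push a stream -->
    <source-password>secret</source-password>
</authentication>
```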

Media player

I'm only testing with the HTML5 video tag. With Firefox 3.5 and up, video can be displayed directly in a web browser without any extra software.
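A minimal player page can be as simple as the following (the URL is a placeholder built from the icecast server and mount point used in the examples below):

```html
<!-- ogg/theora stream straight from icecast; no plugins needed in Firefox 3.5+ -->
<video src="http://icecast.server:8000/test.ogg" controls>
  Your browser does not support the HTML5 video tag.
</video>
```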

The pain


ffmpeg2theora

I started off thinking I could do everything with a simple ffmpeg2theora command piped to oggfwd. Something like:

ffmpeg2theora /dev/video0 -f video4linux2 -o /dev/stdout | oggfwd icecast.server 8000 secret /test.ogg

That is an elegant command, and it worked perfectly the first time I ran it. With one problem: there's no audio. ffmpeg2theora will gladly add audio to its output, provided your input has audio. However, /dev/video0 provides only video, and there is no way to specify both a video input and an audio input with ffmpeg2theora. Sigh.

I was so loath to give up on such an elegant command that I started working on sending two streams to our icecast server: one using ffmpeg2theora for the video and one using darkice for the audio. I wouldn't recommend this approach - there's no way to keep the audio and video in sync. But I couldn't even get that much to work, due to some kind of strange bug. When I run darkice on its own, I get a nice, consistent audio stream to my icecast server. If, during that stream, I start a video4linux device (it seems to affect any video4linux device, even cheese), darkice craps out and my audio input stops working. I opened a bug against cheese - who knows where it really belongs.


ffmpeg

Next, I moved on to ffmpeg, which does allow for both a video and an audio input.

I could successfully get ffmpeg to record audio using the alsa-oss compatibility driver:

 ffmpeg -f oss -ar 48000 -i /dev/audio -acodec pcm_s16le out.wav

It even works with alsa directly (note the need for -ac 2 - alsa fails with the default 1 channel):

 ffmpeg -f alsa -ac 2 -ar 48000 -i hw:0,0 -acodec pcm_s16le out.wav

However, when I add my video4linux2 device, I lose the sound:

ffmpeg -f alsa -ac 2 -ar 48000 -i hw:0,0 -acodec pcm_s16le -f video4linux2 -s 320x240 -i /dev/video0 out.mpg

The video plays back fine, but the audio is silent. At this point, I moved on to vlc... however, I later discovered the problem (which I will describe here, out of chronological order).

It turns out my system was not playing audio through pulse properly, or at least mplayer was not. The audio really was there - I just needed to test playback using mplayer with -ao alsa:

mplayer -ao alsa out.mpg

After much haggling with options, I finally got this train wreck to run without an error:

 ffmpeg -f alsa -ac 2 -ar 48000 -i hw:0,0 -f video4linux2 -s 320x240 -i /dev/video0 -f yuv4mpegpipe -pix_fmt yuv444p - | \
  ffmpeg2theora -o - - | oggfwd icecast.server 8000 secret /test.ogg

However, the video ran at a crawl, I never did hear any audio, and the process died after about 3 minutes. ffmpeg was not going to be an elegant solution.


vlc

vlc seemed like a perfect option, given that it runs on linux, Mac and Windows. If I could get it to work on linux, providing directions for other operating systems would be a breeze. Beginning with the graphical user interface, I selected Media -> Convert / Save.... Then I clicked the Capture Device tab to indicate that I wanted to convert/save something I was capturing. I hit the Convert/Save button (leaving all settings at their defaults). The next screen suggested "Video - H.264 + AAC (TS)" as the profile. I left it alone. Then I entered the path to the file I wanted to save to (ending in .mpg) and clicked Start.

And... I got this error:

Streaming / Transcoding failed:
It seems your FFMPEG (libavcodec) installation lacks the following encoder:
H264 - MPEG-4 AVC (part 10).
If you don't know how to fix this, ask for support from your distribution.

This is not an error inside VLC media player.
Do not contact the VideoLAN project about this issue.

So I repeated the process, this time selecting "Video - Theora + Vorbis (OGG)" as the profile. This time it recorded. But when I played it back in vlc, it ran at twice the recorded speed, and there was no audio (none in vlc, and none in mplayer with -ao alsa either).

At this point, I saw that vlc 1.1.0 was available in unstable. In case I was running into vlc bugs that had already been fixed, I upgraded:

sudo aptitude install vlc/unstable vlc-data/unstable vlc-nox/unstable

Sadly, no difference :(. The theora/vorbis file still played back without audio and about twice the speed at which it was recorded.

Not one to give up easily, I researched tips online for using the command line vlc (cvlc) and came up with:

cvlc v4l2:// :v4l2-vdev="/dev/video0" :v4l2-adev="/dev/audio" --sout \

Works for video (barely - it's pretty choppy) but still no audio :(.

So much for vlc.

giss.tv and a ray of hope

At this point... frustration with vlc set in and, after some browsing, I came across giss.tv and their docs page.

The Webcamstream-v4l2.pys python script was the first to catch my eye.

After downloading, I tried to run it with:

python Webcamstream-v4l2.pys

But, got the error:

Error: Could not initialize supporting library. gstautovideosink.c(367): gst_auto_video_sink_detect (): 
Failed to set target pad

One of the helpful giss.tv folks suggested I try to run xvinfo, which returned:

0 jamie@chicken:~$ xvinfo 
X-Video Extension version 2.2
screen #0
no adaptors present
1 jamie@chicken:~$

After considerable searching and debugging (I have a Toshiba Satellite with a Radeon HD 3200 graphics card), I finally discovered Debian bug 579918, which helped me realize I needed the ^%$ proprietary firmware-linux package installed. After rebooting, xvinfo reported a lot of information (and I discovered that my machine would wake from suspend successfully again).

Next problem when trying to run the python Webcamstream script:

0 jamie@chicken:~$ python Webcamstream-v4l2.pys
The program 'Webcamstream-v4l2.pys' received an X Window System error.
This probably reflects a bug in the program.
The error was 'BadIDChoice (invalid resource ID chosen for this connection)'.
  (Details: serial 836 error_code 14 request_code 1 minor_code 0)
  (Note to programmers: normally, X errors are reported asynchronously;
   that is, you will receive the error a while after causing it.
   To debug your program, run it with the --sync command line
   option to change this behavior. You can then get a meaningful
   backtrace from your debugger if you break on the gdk_x_error() function.)
1 jamie@chicken:~$

Time to give up (for now)...

More browsing of giss.tv led me to yet another program... Theora Streaming Studio - TSS.

A Debian squeeze deb was not available, so I downloaded the lenny version. Ugh. It required libraw1394-8, which is not available in squeeze (libraw1394-11 is).

So... I downloaded the source. It relies on automake version 1.10, while squeeze has 1.11, so I had to run:

sudo ln -s /usr/share/automake-1.11 /usr/share/automake-1.10

Then installed tss with the standard:

sudo make install

It seems to have a lot of potential, but it did not work for me and didn't provide any output to tell me what was wrong.

So... I returned to the Webcamstream-v4l2.pys script. The script relies on gstreamer to handle all of the heavy-duty video and audio work. Even though I couldn't get the script itself to work (for X11 reasons), gstreamer seemed very impressive.

Debian squeeze ships gst-launch-0.10, a developer's command-line tool for testing the various things the gstreamer library can do. I created an alias in my .bashrc file so I could simply type gst-launch to invoke the program. After reading through some man pages, a few helpful examples on the web, and the giss.tv script, I came up with the following.
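The alias is just one line in ~/.bashrc, along these lines:

```shell
# let plain "gst-launch" invoke the versioned binary squeeze ships
alias gst-launch='gst-launch-0.10'
```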

Working audio recording:

gst-launch alsasrc ! audioconvert ! vorbisenc ! oggmux ! filesink location=input.ogg

Working audio streaming:

gst-launch alsasrc ! audioconvert ! vorbisenc ! oggmux \
 ! shout2send ip=icecast.server port=8000 password=secure mount=/test.ogg

Working video recording:

gst-launch v4l2src ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=320,height=240 \
 ! theoraenc quality=16 ! oggmux !  filesink location=input.ogg

Working video streaming:

gst-launch v4l2src ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=320,height=240 \
 ! theoraenc quality=16 ! oggmux !  shout2send ip=icecast.server port=8000 password=secret mount=/test.ogg

Working combo recording:

gst-launch v4l2src ! queue ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=320,height=240 \
 ! theoraenc quality=16 ! queue ! oggmux name=mux alsasrc  ! queue !  audioconvert ! vorbisenc \
 ! queue ! mux. mux. ! queue ! filesink location=input.ogg

And, finally (!!!).... working combo streaming:

gst-launch v4l2src ! queue ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=320,height=240 \
 ! theoraenc quality=16 ! queue ! oggmux name=mux alsasrc  ! queue !  audioconvert ! vorbisenc ! queue  \
 ! mux. mux. ! queue ! shout2send ip=icecast.server port=8000 password=secret mount=/test.ogg

Success!! My first live video and audio stream with acceptable quality.

\0/ \0/ \0/ \0/ \0/ \0/ \0/ \0/

I did some more tweaking and came up with the following, which, in addition to streaming to an icecast server, displays the video and saves it to a local file:

gst-launch v4l2src ! queue ! ffmpegcolorspace ! videoscale ! video/x-raw-yuv,width=320,height=240 \
 ! tee name=tscreen ! queue ! autovideosink tscreen. ! queue ! videorate ! video/x-raw-yuv,framerate=25/2 \
 !  queue ! theoraenc quality=16 ! queue ! oggmux name=mux alsasrc ! queue ! audioconvert ! vorbisenc quality=0.2 \
 ! queue ! queue ! mux. mux. ! queue ! tee name=tfile ! queue ! filesink location=stream.ogg tfile. ! queue \
 ! shout2send ip=icecast.server port=8000 mount=test.ogg password=secret


flumotion

All of this gstreamer business eventually led me to flumotion, an elegant collection of programs that use gstreamer and python's twisted library to create a full-featured streaming studio. The program is GUI-driven to make it easy for newbies while, at the same time, being composed of many separate and discrete parts, providing a level of flexibility that is really useful.

Getting flumotion to live stream video and audio on Debian squeeze did take some work and help from the flumotion developers via IRC.

For starters, I had to add the flumotion user to the video and audio group (and then restart flumotion). In addition, I needed the python-gi package.

I could then run flumotion-admin and work through all the default options in the wizard except overlay... which produced the following error:

gst-stream-error-quark: 1
gstbasesrc.c(2543): gst_base_src_loop (): /GstPipeline:pipeline-overlay-video/GstAppSrc:source:
streaming task paused, reason not-negotiated (-4)

I never did figure it out - I simply unchecked overlay in the wizard.

The default options, however, used a test video and test audio source - not my webcam and audio card.

When I tried to stream using my hardware capture devices, flumotion insisted that another program was using my sound card. I was sure pulseaudio was turned off and nothing else should have been accessing it. Finally, on the suggestion of one of the developers, I applied a patch to the audio.py file that is scheduled for the next release and it all worked!

Next steps

gstreamer definitely seems to be the best tool for the task. While flumotion is the best general-purpose tool, the Webcamstream-v4l2.pys script gave me a lot of ideas on how to create a simple program that just streams live video and audio to an icecast server. Given the work done with oggconvert to get gstreamer and its python bindings functional on Windows, it even seems possible to make something that would run on Windows.

However, the biggest next step toward really meeting our goals will be a live streaming option for android phones. It seems at least one project is working on getting gstreamer running on android. Hopefully that will progress!