Streaming from Linux to a Chromecast

The Google Chromecast is an impressive little device. If you haven't encountered one already, it's a small HDMI dongle which, when connected to a TV screen, lets you play audio, video, or other visual content from a compatible web app on a computer or mobile device.

Google Chromecast

However, it is primarily designed to stream content from the Web, not from your computer itself. This follows the current trend that everything should live "in the cloud", and it is infuriatingly limiting. As you can guess, that dubious ideology is not my cup of tea.

Luckily, the excellent PyChromecast library makes it possible to control the device from a Python program. The catch is that it only works with codecs the Chromecast can decode natively, i.e., H.264 and VP8, and the Chromecast only handles a few containers such as MP4 and WebM. What if you want to stream other video formats? And what if you want to stream dynamically-generated content, for instance your screen or a live video from a camera?
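That constraint boils down to a simple codec-and-container check. As a quick illustration (the sets below merely restate the formats mentioned above, using ffprobe's names for them):

```python
# Codecs and containers the Chromecast can play natively, as listed above.
SUPPORTED_CODECS = {'h264', 'vp8'}
SUPPORTED_CONTAINERS = {'mp4', 'webm'}

def needs_transcoding(codec, container):
    """True when ffmpeg must re-encode the file before casting it."""
    return codec not in SUPPORTED_CODECS or container not in SUPPORTED_CONTAINERS

print(needs_transcoding('h264', 'mp4'))   # False: playable as-is
print(needs_transcoding('mpeg4', 'avi'))  # True: must be converted
```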

Introducing ffmpeg!

ffmpeg -i test.avi -c:v libvpx -c:a libvorbis -f webm out.webm

In this example, ffmpeg reads test.avi, re-encodes the video stream as VP8 and the audio stream as Vorbis, encapsulates both streams in the WebM container, and writes the result to out.webm.

We can enhance this command for streaming to the Chromecast. In particular, ffmpeg supports video filters with -vf, as well as various parameters to tune the VP8 codec. Here, we want constant-bitrate realtime encoding with the encoder's CPU usage capped at 50% (with -cpu-used, 0 means 100% and 15 is the minimum). The target bitrate is set to 4 Mbps so that it fits a crappy Wi-Fi link, but you could set it higher, to 8 Mbps for instance.
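The 50% figure follows from libvpx's realtime mode, where -cpu-used N caps encoder CPU usage at roughly (16 − N)/16 of full speed; assuming that linear mapping, the bound can be computed directly:

```python
def cpu_target(cpu_used):
    # libvpx realtime mode: -cpu-used N targets roughly
    # (16 - N) / 16 of full CPU usage (0 = 100%, 15 = minimum).
    return 100 * (16 - cpu_used) / 16

print(cpu_target(8))   # 50.0 -> the 50% bound used in the command
print(cpu_target(15))  # 6.25 -> the minimum mentioned above
```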

ffmpeg \
    -ss 00:00:00 \
    -i test.avi -copyts \
    -vf "scale=-1:min(ih*1920/iw\,1080),pad=1920:1080:(1920-iw)/2:(1080-ih)/2:black" \
    -c:v libvpx -b:v 4M \
    -crf 16 -quality realtime -cpu-used 8 \
    -c:a libvorbis \
    -f webm \
    out.webm
The video filter is a bit obscure to read but it boils down to two actions:

  1. Scale the video uniformly until either its width fits the width of the screen or its height fits the height of the screen
  2. Pad the scaled video with black so it is centered and the output size is the size of the screen
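As a sketch of what the filter chain computes, here is the same scale-then-pad arithmetic in Python (integer rounding differs slightly from ffmpeg's, which also forces even dimensions):

```python
def fit_1920x1080(iw, ih):
    """Mimic the scale+pad filter chain: scale the iw x ih input uniformly
    to fit 1920x1080, then report the offsets that centre it on the canvas."""
    # scale=-1:min(ih*1920/iw\,1080): pick the output height, derive the width
    oh = min(ih * 1920 // iw, 1080)
    ow = iw * oh // ih
    # pad=1920:1080:(1920-iw)/2:(1080-ih)/2: centre on the 1920x1080 canvas
    return ow, oh, (1920 - ow) // 2, (1080 - oh) // 2

print(fit_1920x1080(640, 480))   # 4:3 input    -> (1440, 1080, 240, 0)
print(fit_1920x1080(1280, 536))  # 2.39:1 input -> (1920, 804, 0, 138)
```

A 4:3 video hits the height limit first and gets black bars on the sides; a very wide video hits the width limit first and gets bars above and below, which is exactly where hardcoded subtitles will end up.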

However, this creates a new problem. What about subtitles? If a video has a subtitles track, it is now ignored and you can't see the subtitles. The simplest solution is to just hardcode the subtitles into the streamed video, whether they come from a subtitles track or from an external subtitles file.

You might have been wondering: why bother with padding? That's your answer: since we hardcode the subtitles, we want them to take advantage of the padding so that they cover the image as little as possible.

We can call ffmpeg from Python code with the subprocess module, while directing its output to the standard output (with -). It is good practice to pass the arguments as a list rather than passing the whole command as a string with shell=True; the latter is a security hazard, since it allows shell injection.
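To see why shell=True is risky, consider a filename that smuggles in a second command; passed as a list element, the same string stays inert (shlex.quote is the standard fallback when a shell string is truly unavoidable):

```python
import shlex

# A malicious "filename" that would run a second command if spliced into a shell line:
filename = "movie.avi; rm -rf ~"

# Unsafe: the shell parses this string, so the `;` starts a second command.
unsafe = "ffmpeg -i " + filename + " out.webm"

# Safe: each element is handed to ffmpeg as a single argv entry, `;` and all.
safe = ['ffmpeg', '-i', filename, 'out.webm']

# If a shell string is unavoidable, quote every substituted value:
quoted = "ffmpeg -i " + shlex.quote(filename) + " out.webm"
print(quoted)  # ffmpeg -i 'movie.avi; rm -rf ~' out.webm
```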

#!/usr/bin/env python3

import sys
import subprocess
import os.path

filename = 'test.avi'
startTime = '00:00:00'
stopTime = ''

filters = ['scale=-1:min(ih*1920/iw\,1080)', 'pad=1920:1080:(1920-iw)/2:(1080-ih)/2:black']

srt = os.path.splitext(filename)[0]+'.srt'
if os.path.isfile(srt):
    filters+= ['subtitles='+srt+':charenc=CP1252']
elif 'codec_type=subtitle' in subprocess.check_output(['ffprobe', '-v', 'error', '-show_streams', filename]).decode():
    filters+= ['subtitles='+filename]

args = ['ffmpeg']
if len(startTime):
    args+= ['-ss', startTime]
args+= ['-i', filename]
if len(stopTime):
    args+= ['-to', stopTime]
args+= ['-vf', ','.join(filters)]
args+= ['-v', 'error', '-copyts', '-c:v', 'libvpx', '-b:v', '4M', '-crf', '16', '-quality', 'realtime', '-cpu-used', '8', '-c:a', 'libvorbis', '-f', 'webm', '-']
subprocess.call(args, stdin=None, stdout=sys.stdout, stderr=None, shell=False)

The next step is to encapsulate this code in an HTTP server so as to stream the encoded WebM file over HTTP. As a bonus, the special URL /screen will stream the screen content!

#!/usr/bin/env python3

import sys
import subprocess
import os.path
import urllib.parse
import threading
import json

from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn

defaultServerPort = 8888
directory = '/home/chapelierfou/videos' # The directory where my videos are

screenDisplay = ':0.0'  # X11 display
screenAudio = 'default' # Pulseaudio interface

class RequestHandler(BaseHTTPRequestHandler):

  # GET
  def do_GET(self):

        filename = urllib.parse.unquote(urllib.parse.urlparse(self.path).path)
        query = dict((qc.split('=') if '=' in qc else [qc, '']) for qc in urllib.parse.urlparse(self.path).query.split('&'))

        if filename != '/screen':
            filename = directory + filename
            if not os.path.isfile(filename):
                self.send_error(404)
                return

        self.send_response(200)
        self.send_header('Content-type', 'video/webm')
        self.end_headers()

        filters = ['scale=-1:min(ih*1920/iw\,1080)', 'pad=1920:1080:(1920-iw)/2:(1080-ih)/2:black']

        if filename != '/screen':
            srt = os.path.splitext(filename)[0]+'.srt'
            if os.path.isfile(srt):
                filters+= ['subtitles='+srt+':charenc=CP1252']
            elif 'codec_type=subtitle' in subprocess.check_output(['ffprobe', '-v', 'error', '-show_streams', filename]).decode():
                filters+= ['subtitles='+filename]

        args = ['ffmpeg']
        if 'start' in query:
            args+= ['-ss', query['start']]
        if filename == '/screen':
            args+= ['-avioflags', 'direct', '-fflags', 'nobuffer']
            args+= ['-video_size', '1920x1080', '-framerate', '25', '-f', 'x11grab', '-i', screenDisplay, '-f', 'pulse', '-ac', '2', '-i', screenAudio]
        else:
            args+= ['-i', filename]
        if 'stop' in query:
            args+= ['-to', query['stop']]
        if len(filters):
            args+= ['-vf', ','.join(filters)]
        args+= ['-v', 'error', '-copyts', '-c:v', 'libvpx', '-b:v', '4M', '-crf', '10', '-quality', 'realtime', '-cpu-used', '8', '-c:a', 'libvorbis', '-f', 'webm', '-']
        subprocess.call(args, stdin=None, stdout=self.wfile, stderr=None, shell=False)

class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
    """Handle each request in its own thread, so several streams can run at once."""

def run():
  if len(sys.argv) >= 2:
    serverPort = int(sys.argv[1])
  else:
    serverPort = defaultServerPort

  serverAddress = ('', serverPort)
  httpd = ThreadedHTTPServer(serverAddress, RequestHandler)
  httpd.serve_forever()

run()

Let's launch the server:

$ ./ 8888 &

Finally, we need a small command-line client with PyChromecast to start the video.

#!/usr/bin/env python3

import time
import sys
import logging
import subprocess
import pychromecast
import optparse
import json

defaultRootUrl = 'http://192.168.0.X:8888/' # Address of the server

parser = optparse.OptionParser()
parser.add_option("-d", "--device", dest="name",
                  help="send to NAME", metavar="NAME")
parser.add_option("-t", "--type", dest="type", default="BUFFERED",
                  help="set stream to TYPE", metavar="TYPE")
parser.add_option("-l", "--list",
                  action="store_true", dest="list", default=False,
                  help="list names")

(options, args) = parser.parse_args()

if options.list:
    print(json.dumps({'devices': list(pychromecast.get_chromecasts_as_dict().keys())}))
    sys.exit(0)

if options.name:
    cast = pychromecast.get_chromecast(friendly_name=options.name)
else:
    cast = pychromecast.get_chromecast()


if not cast.is_idle:
    # Stop whatever app is currently running before casting our own stream
    cast.quit_app()
    time.sleep(5)

if len(args) == 0:
    print(json.dumps({'device': cast.device.friendly_name}))
    sys.exit(0)

if "://" in args[0]:
    url = args[0]
else:
    url = defaultRootUrl+args[0]

print(json.dumps({'device': cast.device.friendly_name, 'url': url}))
cast.play_media(url, "video/webm", stream_type=options.type)

Now, let the magic happen:

$ ./ Elephants_Dream_HD.avi